Structured Prompts
Build reliable prompts from Pydantic models with template patterns.
I want to add AI to my API. Where do I start?
With structured prompts. The difference between a toy demo and a production AI feature is prompt engineering — and the best prompts are built from data models, not string concatenation.
Here's the problem with ad-hoc prompts:
# Bad: fragile, no validation, hard to test
prompt = "Summarize this: " + user_input
Here's the structured approach:
from pydantic import BaseModel

class SummarizeRequest(BaseModel):
    text: str
    max_sentences: int = 3
    style: str = "concise"

def build_prompt(req: SummarizeRequest) -> str:
    return f"""Summarize the following text in {req.max_sentences} sentences.
Style: {req.style}.
Text:
{req.text}
Summary:"""
Now your prompt is validated (Pydantic checks the input), testable (you can assert on the output string), and documented (the model is your API contract).
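Those three claims are easy to check directly. Here is a minimal sketch (repeating the model and builder from above, with the prompt rebuilt as a plain string) that shows Pydantic rejecting bad input and a test asserting on the rendered prompt:

```python
from pydantic import BaseModel, ValidationError

class SummarizeRequest(BaseModel):
    text: str
    max_sentences: int = 3
    style: str = "concise"

def build_prompt(req: SummarizeRequest) -> str:
    return (
        f"Summarize the following text in {req.max_sentences} sentences.\n"
        f"Style: {req.style}.\n"
        f"Text:\n{req.text}\nSummary:"
    )

# Validated: Pydantic rejects a max_sentences that can't be an int.
try:
    SummarizeRequest(text="hello", max_sentences="not a number")
    raised = False
except ValidationError:
    raised = True

# Testable: assert on the rendered prompt string, no LLM call needed.
prompt = build_prompt(SummarizeRequest(text="Cats sleep a lot."))
```

Because the builder is a pure function of the request model, prompt tests run instantly in CI with no API key.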
Why not just use f-strings directly?
Because prompts get complex. You need system messages, few-shot examples, dynamic context. A builder function keeps it manageable:
class ChatRequest(BaseModel):
    system_role: str = "helpful assistant"
    context: str = ""
    user_message: str
    examples: list[dict] = []

def build_messages(req: ChatRequest) -> list[dict]:
    messages = [{"role": "system", "content": f"You are a {req.system_role}."}]
    if req.context:
        messages.append({"role": "system", "content": f"Context: {req.context}"})
    for ex in req.examples:
        messages.append({"role": "user", "content": ex["input"]})
        messages.append({"role": "assistant", "content": ex["output"]})
    messages.append({"role": "user", "content": req.user_message})
    return messages
This builds the messages array that chat-style LLM APIs expect: system prompt, optional few-shot examples, then the user's actual message. (Completion-style APIs take a single string instead, but the same builder pattern applies.)
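To see the ordering concretely, here is a self-contained sketch (restating the model and builder from above) that feeds in one few-shot example and inspects the result:

```python
from pydantic import BaseModel

class ChatRequest(BaseModel):
    system_role: str = "helpful assistant"
    context: str = ""
    user_message: str
    examples: list[dict] = []

def build_messages(req: ChatRequest) -> list[dict]:
    messages = [{"role": "system", "content": f"You are a {req.system_role}."}]
    if req.context:
        messages.append({"role": "system", "content": f"Context: {req.context}"})
    for ex in req.examples:
        messages.append({"role": "user", "content": ex["input"]})
        messages.append({"role": "assistant", "content": ex["output"]})
    messages.append({"role": "user", "content": req.user_message})
    return messages

req = ChatRequest(
    system_role="sentiment classifier",
    user_message="The soup was cold",
    examples=[{"input": "Great service!", "output": "positive"}],
)
msgs = build_messages(req)
# Order: system, example user turn, example assistant turn, real user turn.
```

The examples are interleaved as fake user/assistant turns before the real message, which is exactly how few-shot prompting works in a chat API.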
What's a few-shot example?
It's a sample input-output pair that shows the model what you expect. Like teaching by example:
examples = [
    {"input": "The weather is great today", "output": "positive"},
    {"input": "I hate waiting in line", "output": "negative"},
]
Include 2-3 examples in your prompt and the model mimics the pattern. It's one of the most reliable techniques for getting consistent output.
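For a completion-style API, the same pairs can be rendered into one text blob instead of chat turns. A minimal sketch, where render_few_shot is a hypothetical helper (not from the lesson above):

```python
# Hypothetical helper: render few-shot pairs into a single prompt string,
# ending with a trailing "Output:" for the model to complete.
def render_few_shot(examples: list[dict], query: str) -> str:
    lines = []
    for ex in examples:
        lines.append(f"Input: {ex['input']}")
        lines.append(f"Output: {ex['output']}")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

examples = [
    {"input": "The weather is great today", "output": "positive"},
    {"input": "I hate waiting in line", "output": "negative"},
]
few_shot_prompt = render_few_shot(examples, "Best meal I've had in years")
```

The consistent Input/Output framing is what the model latches onto; the trailing "Output:" cues it to answer in the same one-word format.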
So the Pydantic model defines what the user can send, and the builder turns it into what the LLM needs?
Exactly. Pydantic is the contract between your API and your callers. The builder is the translation layer between your API and the LLM. Keeping them separate means you can change the prompt without changing the API, or add new API parameters without rewriting the prompt from scratch.
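That separation is easy to demonstrate: two hypothetical builder versions (v1 and v2, invented here for illustration) against the same request model. The prompt wording changes completely; the API contract does not:

```python
from pydantic import BaseModel

class SummarizeRequest(BaseModel):
    text: str
    max_sentences: int = 3

# v1: the original wording.
def build_prompt_v1(req: SummarizeRequest) -> str:
    return f"Summarize this in {req.max_sentences} sentences:\n{req.text}"

# v2: a reworded prompt. Callers of the API are unaffected.
def build_prompt_v2(req: SummarizeRequest) -> str:
    return (
        f"You are an expert editor. Condense the text below into "
        f"at most {req.max_sentences} sentences.\n\n{req.text}"
    )

req = SummarizeRequest(text="Long article text here.")
v1 = build_prompt_v1(req)
v2 = build_prompt_v2(req)
```

Swapping v1 for v2 is a one-line change in your endpoint; no client, schema, or documentation update is needed.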