Two prompts. Same model. How do you think the outputs will differ?
p1 = "Tell me about Python"
p2 = "In exactly 2 sentences, tell me what Python is and one common use"The first one I'd expect a wall of text. The second a tight pair of sentences.
Right. The model defaults to whatever pattern fits the question. "Tell me about X" matches blog-introduction-style training data — long, expository, often rambling. "In exactly 2 sentences" matches a constrained-answer pattern.
Neither prompt is wrong; they produce different shapes of useful output. Specificity is how you choose between them.
So I always want specific?
When you know the shape you want, yes — make it explicit. Length, structure ("as bullets"), tone ("formal"), audience ("for a 10-year-old"). Each constraint narrows what the model produces. Vague is fine when you actually want the model to surprise you.
LLMs match the shape of their training data. "Tell me about X" produces Wikipedia-shaped output. "In bullets, X" produces bullets. "As a haiku, X" produces (vaguely) haiku-shaped output.
| Constraint | Example phrasing |
|---|---|
| Length | "in exactly 2 sentences", "in under 50 words", "in 3 bullet points" |
| Structure | "as a numbered list", "as a markdown table with columns X and Y" |
| Tone | "formal", "casual", "technical", "as if explaining to a child" |
| Audience | "for a senior engineer", "for someone who has never coded" |
| Format | "return JSON with keys X, Y" (see the sketch after this table), "plain text only" |
| Negation | "do not include X", "avoid jargon" |
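On the format row: asking for JSON in prose works, but pydantic_ai can also enforce a schema directly. A minimal sketch, assuming a recent pydantic_ai version where `Agent` accepts `output_type`; the schema fields and model name are made up for illustration:

```python
from pydantic import BaseModel
from pydantic_ai import Agent

# Hypothetical schema -- "keys X, Y" from the table, made concrete
class PythonSummary(BaseModel):
    what_it_is: str
    common_use: str

agent = Agent("openai:gpt-4o-mini", output_type=PythonSummary)
result = agent.run_sync("Tell me what Python is and one common use")
print(result.output.what_it_is)   # typed access, no hand-rolled JSON parsing
print(result.output.common_use)
```

If the model's reply doesn't validate against the schema, pydantic_ai retries and eventually raises, so the format constraint is enforced rather than merely requested.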
Back to the two prompts from the top, a clean way to feel the difference:
```python
from pydantic_ai import Agent

agent = Agent("openai:gpt-4o-mini")  # substitute any model string your keys support

for p in ["Tell me about Python",
          "In exactly 2 sentences, tell me what Python is and one common use"]:
    result = agent.run_sync(p)
    print("---")
    print(p)
    print(result.output)
```
Run it once and read both outputs side by side. The difference makes the point better than any explainer.
First prompts often miss the shape. The right move: read the output, identify what's wrong with it ("too long", "too formal", "wrong format"), edit the prompt to address that one thing, and re-run. Day 4 of this track is exactly this loop.
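To make the loop concrete, here's a sketch of one iteration; the prompts and the diagnosis are made up for illustration:

```python
from pydantic_ai import Agent

agent = Agent("openai:gpt-4o-mini")  # placeholder model string

# v1: first attempt -- suppose the output comes back as a long essay
v1 = "Tell me about Python decorators"
print(agent.run_sync(v1).output)

# Diagnosis: "too long". Fix exactly that one thing, nothing else, and re-run.
v2 = "In 3 bullet points, tell me about Python decorators"
print(agent.run_sync(v2).output)
```

One change per iteration is the discipline: if you rewrite the whole prompt at once, you can't tell which edit fixed the shape.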