Your word_count_of_output told you the model can be verbose. What if you need exactly two sentences — for an investor email subject line or a Slack standup update?
I'd try adding instructions to the prompt itself. But that feels fragile — the model might ignore it.
Putting instructions in the prompt is fragile. system_prompt is a dedicated job description that runs before every user message — the model treats it as a hard constraint, not a suggestion:
agent = Agent(model, system_prompt="Summarize in 2 sentences.")
return agent.run_sync(text).output
What's the difference between putting 'summarize in 2 sentences' in the prompt vs in system_prompt?
The system prompt is the role — it defines what the agent is. The user prompt is the task — what you want done right now. Models follow the role more reliably than inline instructions. Here's the full function:
from pydantic_ai import Agent

def summarize_text(text: str) -> str:
    agent = Agent(model, system_prompt="Summarize in 2 sentences.")
    output = agent.run_sync(text).output
    print(f"Summary: {output}")
    return output
The system prompt is like the job posting I write for a contractor — it defines the scope before they start work.
Exactly. A good system prompt is tight and specific. 'Summarize in 2 sentences' works because it constrains both form (sentences, not bullets) and length (exactly two). Vague system prompts produce vague output.
I can send customer interview notes through this and get a two-sentence brief before every investor call. That's 15 minutes back.
summarize_text is also a building block — you will pass its output to a classifier in the two-agent chain. One function's output becomes another's input. That's how pipelines form — by composing focused functions rather than writing one monolithic prompt.
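To make the composition idea concrete, here is a minimal runnable sketch of that two-agent chain. The bodies of summarize_text and classify_summary are hypothetical placeholders standing in for the real Agent.run_sync calls (so the pattern runs without a model or API key); only the composition shape is the point.

```python
# Sketch of the pipeline pattern: one function's output becomes
# another's input. The function bodies are placeholders for the
# real Agent(model, system_prompt=...).run_sync(...) calls.

def summarize_text(text: str) -> str:
    # Real version: Agent(model, system_prompt="Summarize in 2 sentences.")
    return text.split(".")[0] + "."  # placeholder: first sentence only

def classify_summary(summary: str) -> str:
    # Real version: a second Agent with its own system prompt,
    # e.g. "Classify the sentiment as positive or neutral."
    return "positive" if "love" in summary.lower() else "neutral"

def pipeline(text: str) -> str:
    # Composition: summarize_text's output feeds classify_summary.
    return classify_summary(summarize_text(text))
```

Each function stays focused on one job, and the chain is just ordinary function composition rather than one monolithic prompt.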
system_prompt defines the agent's behaviour for every call. It runs before the user message.
agent = Agent(model, system_prompt="Summarize in 2 sentences.")
result = agent.run_sync(text)
return result.output
system_prompt is a dedicated instruction channel. Models follow it more reliably than inline instructions, especially when output format matters (length, style, structure).
Vague system prompts produce vague output. 'Be helpful' changes nothing. 'Summarize in 2 sentences' changes the shape of every response.