word_count_of_output from yesterday measures the agent's verbosity. Your literature notes need a two-sentence summary for each abstract. Right now the model can return any length. How do you constrain it?
I'd add a system prompt — like a coding manual that tells the research assistant exactly what format to return. Agent(model, system_prompt="Summarize in 2 sentences.") should work.
Exactly. The system prompt is a persistent instruction that shapes every response from that agent. The user prompt is the specific abstract; the system prompt is the extraction format:
```python
agent = Agent(model, system_prompt="Summarize in 2 sentences.")

def summarize_text(text: str) -> str:
    return agent.run_sync(text).output
```

What if the model still returns three sentences? Does the system prompt guarantee two?
The system prompt is a strong instruction, not a hard constraint. On simple prompts with compliant models it works reliably. For a hard guarantee, use result_type=Literal[...] (Week 2). For now, the system prompt handles the common case — and it's the building block for every summary pattern this week.
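Since the instruction is soft, it is worth spot-checking compliance on each output. A minimal sketch of such a check — the naive regex splitter here is an assumption for illustration, not part of the lesson's code:

```python
import re

def sentence_count(text: str) -> int:
    # Naive check: split on runs of ., !, ? and count the non-empty pieces.
    # Good enough for spot-checking summary length, not for real NLP.
    return len([s for s in re.split(r"[.!?]+", text) if s.strip()])

summary = "The study finds X. The method was a survey."
print(sentence_count(summary))  # → 2
```

A count other than 2 flags an abstract for manual review rather than silently entering your notes.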
Two hundred abstracts, two-sentence notes each, no manual writing. That's the entire first pass of a lit review note set.
The model summarises. You verify the edge cases and add your interpretation. That's the division of labour for Week 1.
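The first pass over the corpus is just a loop. A sketch of the shape — the `corpus` dict and the `fake_summarize` stand-in are hypothetical, so this runs offline; in practice you would call `summarize_text` on each abstract:

```python
def fake_summarize(text: str) -> str:
    # Stand-in for the agent-backed summarizer, so this sketch runs without a model.
    return text.split(".")[0] + "."

corpus = {
    "smith2021": "We survey 40 labs. Response rates varied widely.",
    "lee2023": "A randomized trial of note-taking tools. N = 120.",
}

# One note per abstract; paper IDs key the results for later review.
notes = {paper_id: fake_summarize(abstract) for paper_id, abstract in corpus.items()}
for paper_id, note in notes.items():
    print(paper_id, note)
```

Swapping the stand-in for the real summarizer changes nothing about the loop, which is the point: the agent is just a function from text to text.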
And I can tune the prompt for different note formats — three bullet points, one-sentence annotation, methods-only summary. The pattern is the same.
Each variant is one string change:
```python
bullet_agent = Agent(model, system_prompt="Summarize as three bullet points.")
methods_agent = Agent(model, system_prompt="Extract the methods section only.")
```

The system prompt is your coding manual. Changing it changes the output format without changing the code.
```python
agent = Agent(model, system_prompt="Summarize in 2 sentences.")

def summarize_text(text: str) -> str:
    return agent.run_sync(text).output
```

| Prompt | Scope | Changes per call? |
|---|---|---|
| `system_prompt` | All responses from this agent | No — set at construction |
| User prompt (`.run_sync(text)`) | This specific call | Yes — per abstract |

The system prompt is the coding manual; the user prompt is the document to code.
Run summarize_text on the densest, most jargon-heavy abstract in your corpus. If the two-sentence summary is accurate and concise, the system prompt is calibrated. If not, try "Summarize the main finding and method in exactly 2 sentences." for tighter control.