run_agent from yesterday returns a string. Your literature note template has a 50-word limit per abstract summary. How do you know if the agent's response fits?
I'd need to count the words in the output. Python's .split() splits on whitespace and len() counts the result — same as counting words in a transcript.
Exactly. Run the agent, call .split() on the output, and len() gives you the word count:
result = Agent(model).run_sync(prompt)
words = result.output.split()
return len(words)

Does .split() handle newlines and multiple spaces? Some model outputs have extra whitespace.
str.split() with no argument splits on any whitespace — spaces, tabs, newlines — and ignores leading/trailing blanks. It's the standard word-count pattern in Python because it matches how humans count words: one word per whitespace-separated run of visible text.
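A quick check of that behaviour, no agent needed (messy_output is a made-up string standing in for a model response):

```python
# .split() with no argument collapses any run of whitespace into one separator
messy_output = "  A short\tsummary\n\nwith   extra whitespace.  "
words = messy_output.split()
print(words)       # ['A', 'short', 'summary', 'with', 'extra', 'whitespace.']
print(len(words))  # 6

# .split(' ') instead yields an empty string for every extra space
print(len(messy_output.split(' ')))  # 10: empty strings inflate the count
```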
So the model outputs a summary and I immediately know if it broke the abstract-note budget. That's the kind of automated quality check I run on RA transcripts.
The pattern generalises. Any time you need to audit agent output length — abstract budget, email length, report section word counts — it's the same three lines. Measure first; prompt-tune second.
And I can use the count as a gate: if the summary is over 60 words, run a shorter-summary prompt automatically. The pipeline checks itself.
The gate pattern looks like this:
count = word_count_of_output(prompt)
if count > 60:
    shorten_agent = Agent(model, system_prompt="Summarize in one sentence.")
    output = shorten_agent.run_sync(prompt).output

That's the next step. Today: measure. The gate logic comes after you have reliable counts to gate on.
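When you do get to the gate, you can exercise it offline by swapping a stub in for the real model call. StubAgent, StubResult, and gate below are hypothetical stand-ins for illustration, not pydantic-ai API:

```python
class StubResult:
    """Mimics the .output attribute of a real agent result."""
    def __init__(self, output: str):
        self.output = output

class StubAgent:
    """Stand-in for Agent(model): always returns a fixed summary."""
    def __init__(self, output: str):
        self._output = output
    def run_sync(self, prompt: str) -> StubResult:
        return StubResult(self._output)

def gate(prompt: str, agent, shorten_agent, limit: int = 60) -> str:
    # Same logic as the gate pattern above: count words, rerun if over budget
    output = agent.run_sync(prompt).output
    if len(output.split()) > limit:
        output = shorten_agent.run_sync(prompt).output
    return output

long_draft = "word " * 70  # 70 words: over the 60-word limit
result = gate("summarize the abstract",
              StubAgent(long_draft),
              StubAgent("One sentence summary."))
print(result)  # One sentence summary.
```

Because gate only relies on run_sync and .output, the same function should work unchanged with real agents once you are ready to wire it in.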
result = Agent(model).run_sync(prompt)
words = result.output.split() # any whitespace, any length
return len(words)  # int

Use .split(), not .split(' '): split() with no argument treats any run of whitespace as a single separator and strips leading/trailing blanks, while split(' ') would count the empty strings between double spaces. The no-argument form is the one for word counting.
Measure before prompt-tuning. Run word_count_of_output on a sample of your abstracts to understand the model's default verbosity. If counts cluster around 40-50 words for a two-sentence system prompt, the model is following instructions. If counts are consistently over 100, tighten the system prompt first.
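A minimal sketch of that measurement pass, with placeholder summaries standing in for real agent outputs (word_count here is a pure stand-in for word_count_of_output, no agent call):

```python
def word_count(text: str) -> int:
    # Whitespace-delimited word count, same as the three-line pattern above
    return len(text.split())

# Placeholder summaries standing in for a sample of real agent outputs
sample_summaries = [
    "Fiscal shocks raise output briefly; the effect fades within two years.",
    "The paper links school closures to a persistent drop in test scores.",
    "Minimum wage increases show small employment effects in this meta-analysis.",
]

counts = [word_count(s) for s in sample_summaries]
print(counts)              # [11, 12, 10]
print(max(counts) <= 50)   # True: every summary fits the 50-word budget
```

If the distribution looked like this, the model is well under budget and the system prompt can stay as it is; consistently triple-digit counts would be the signal to tighten it.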