run_agent returned a string from a live model. Now that you have that string in a variable, what Python operations could you run on it?
Honestly, I hadn't thought beyond printing it. I guess I could check its length — but I'm not sure how to count actual words rather than characters.
Two steps: .split() breaks the string on whitespace and gives you a list of words; len() counts that list. A one-line measurement:
result = Agent(model).run_sync(prompt)
word_count = len(result.output.split())

.split() with no argument handles any whitespace — spaces, tabs, newlines. The model's phrasing varies per call, but this pattern always works.
So .output is the string, .split() turns it into a list, and len() gives the count. Three steps, one line each.
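The three steps can be tried without a live model. A minimal sketch, using a hard-coded sample string to stand in for result.output:

```python
# The sample sentence is illustrative; it stands in for result.output
# so the pattern can be run without an agent call.
output = "Paris is the capital of France."

words = output.split()   # break on whitespace -> list of words
word_count = len(words)  # count the list elements

print(words)       # ['Paris', 'is', 'the', 'capital', 'of', 'France.']
print(word_count)  # 6
```

Note that "France." keeps its trailing period: .split() separates words, it does not strip punctuation.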
Exactly. Wrapping it into a clean function:
def word_count_of_output(prompt: str) -> int:
    result = Agent(model).run_sync(prompt)
    output = result.output
    word_count = len(output.split())
    return word_count

Storing output in its own variable before splitting makes the intent readable — you can see what each step does without tracing a long method chain.
The agent replies in full sentences and I get back a plain integer. That feels like real data processing — not just chatting with a model.
That's the shift. The model handles language; Python handles the measurement. Word count can gate an email digest, flag a summary that ran too long, or track output variance across prompts. You route on numbers, not raw text.
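Routing on the number can be sketched without a live agent. The helper name and the 50-word limit below are illustrative choices, not part of any library:

```python
# Hypothetical gate: flag a reply that exceeds a word limit.
# flag_if_too_long and limit=50 are illustrative, not a library API.
def flag_if_too_long(output: str, limit: int = 50) -> bool:
    """Return True when the reply exceeds the word limit."""
    return len(output.split()) > limit

summary = "word " * 60  # stands in for a live model reply
if flag_if_too_long(summary):
    print("summary ran too long; trim the prompt or re-run")
```

The decision is made on an integer comparison, never by re-reading the text.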
I used to paste AI output into a word counter website. Now it's a function I call from code.
And it composes. Pass the count to a logger, a spreadsheet row, a Slack message. The pattern — call the agent, capture .output, do something with a Python value — is the foundation for every pipeline lesson ahead.
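One hedged sketch of that composition, handing the count to Python's standard logging module (count_words is an illustrative helper, not a library function):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("agent_pipeline")

# Illustrative helper: measure the reply, then pass the number onward.
def count_words(output: str) -> int:
    return len(output.split())

reply = "The model replied in a full sentence."  # stands in for .output
logger.info("agent reply word count: %d", count_words(reply))
```

The same integer could just as easily become a spreadsheet cell or a Slack message field.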
.output, .split(), and len() — the measurement pattern

result.output is the raw string the model returned. It is always a str when no result_type is set.
.split() (no argument) tokenises on any whitespace — spaces, tabs, newlines — and returns a list[str]. Each element is one word.
len(...) counts the list elements → integer word count.
output = result.output — makes the source explicit; easier to add a print() for debugging
word_count = len(output.split()) — one clear operation per line
Don't chain: len(Agent(model).run_sync(prompt).output.split()) — compact, but hard to inspect if the agent call fails. Separate lines give you a traceable stack.
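The whitespace behaviour is easy to verify directly. A quick sketch on an illustrative messy string, contrasting .split() with .split(" "):

```python
# Mixed whitespace: a tab, a double space, and a newline.
messy = "one\ttwo  three\nfour"

# No argument: any run of whitespace is one separator.
print(messy.split())     # ['one', 'two', 'three', 'four']

# Explicit " ": only single spaces split, and empties appear.
print(messy.split(" "))  # ['one\ttwo', '', 'three\nfour']
```

Only the no-argument form gives a reliable word count on real model output.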