Two hundred abstracts. Systematic review inclusion criteria. Your current workflow: open each abstract, read it, decide if it fits, note why. What's the bottleneck?
The reading. Not the decision — I know the criteria. It's the forty-five seconds per abstract, multiplied by two hundred, that eats the weekend.
A PydanticAI agent is a function. You pass a prompt — your inclusion criterion — and get a string back. The weekend triage becomes a loop. The bottleneck moves from reading to verification, which is the work only you can do.
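The loop can be sketched as below. The actual PydanticAI call is roughly `Agent("openai:gpt-4o").run_sync(prompt)` with the text on the result object (the exact attribute varies by library version); here a hypothetical `fake_model` stands in for it so the shape of the triage loop is visible without an API key. The criterion text is an invented example, not part of the lesson.

```python
# A first-pass triage loop. `ask` is any callable prompt -> str; in the
# course it would wrap a PydanticAI Agent, but a stub works for the sketch.

CRITERION = (
    "Does this abstract meet the inclusion criteria? "
    "Answer INCLUDE, EXCLUDE, or UNSURE, then one sentence of reasoning.\n\n"
)

def triage(abstracts, ask):
    """Run the criterion over every abstract; the human reviews the output,
    concentrating on the UNSURE rows instead of reading all 200 cold."""
    return [(abstract, ask(CRITERION + abstract)) for abstract in abstracts]

# Offline stand-in for the model call:
def fake_model(prompt: str) -> str:
    return "UNSURE: stand-in response"

decisions = triage(["Abstract one...", "Abstract two..."], fake_model)
```

The point of the structure: the model call is isolated behind one callable, so swapping the stub for a real agent later changes one argument, not the loop.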
So the agent isn't replacing the decision. It's handling the first pass so I review the close calls instead of every abstract cold.
Exactly. Week 1 builds the foundation: running an agent, measuring its output, controlling its format with a system prompt, and chaining two agents together. By the end of the week, you'll have the pieces for abstract triage, classification, and a two-step summarise-then-classify pipeline.
- run_agent: the first agent call, returns a string
- word_count_of_output: run + measure the response length
- summarize_text: system prompt for a two-sentence summary
- classify_sentiment: system prompt for a single-word classification
- summarize_then_classify: chain two agents, output of A feeds input of B

Goal: a two-step triage pipeline that summarises an abstract and classifies its relevance to your review.
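The chaining step reduces to a small function: agent A's output string becomes agent B's input. The sketch below uses plain `prompt -> str` callables as hypothetical stand-ins for the two PydanticAI agents (one with a summarising system prompt, one with a classifying system prompt), so the wiring runs offline; the lambda bodies are invented for illustration.

```python
# Two-step pipeline: summarise, then classify the summary.
# Each "agent" here is a plain function str -> str; in the course these
# would be PydanticAI Agent instances configured via system prompts.

def summarize_then_classify(text, summarizer, classifier):
    """Feed the output of agent A (summary) into agent B (label)."""
    summary = summarizer(text)
    label = classifier(summary)
    return summary, label

# Stand-in agents so the chain is testable without a model:
summary, label = summarize_then_classify(
    "A long abstract about a trial in adults...",
    summarizer=lambda t: "Two-sentence summary of: " + t[:20],
    classifier=lambda s: "relevant",
)
```

Because the classifier only ever sees the summary, a weak summary silently degrades the label, which is one reason the verification pass stays with the human.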