Yesterday's chain was linear. Today's adds a Python branch. Step 1 classifies the input; depending on the label, you fire one of two different step-2 prompts.
from pydantic_ai import Agent

model = "openai:gpt-4o"  # any pydantic-ai model string works here
user_input = "What is 12 times 7?"

# Step 1 — classify
category = Agent(model).run_sync(
    f'Classify this question as exactly one word: "math" or "trivia". Reply with only the single word.\n\nQuestion: {user_input}'
).output.strip().strip(".").lower()

# Branch — different prompts for different categories
if category == "math":
    answer = Agent(model).run_sync(f"Solve this math problem step by step: {user_input}").output
    print("FIRED: math branch")
else:
    answer = Agent(model).run_sync(f"Answer this trivia question briefly: {user_input}").output
    print("FIRED: trivia branch")

print(answer)
The if is just Python. The model only does the classification and the answering — the routing is mine.
Right. This is the cheap version of multi-tool routing — instead of giving the LLM two tools and letting it pick, you classify first and dispatch with if/elif. More predictable, more debuggable, and you can log which branch fired.
When would I prefer giving the model the choice instead?
When the routing logic is complex or context-dependent — that's week 3 (multi-tool agent). For deterministic two-way splits, classify-then-branch is simpler. If you can write the routing as a Python if, prefer that.
input
↓
classifier (LLM)
↓
if label == X: → prompt A → answer A
elif label == Y: → prompt B → answer B
else: → prompt C
The first LLM call produces a label. Python's if/elif routes it to a specialised second prompt. Each branch's prompt can be tuned for its category — the math branch can ask for step-by-step solving; the trivia branch can ask for short factual answers.
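Mapped onto code, here's a minimal sketch of the three-arm version from the diagram (the wording of the catch-all prompt C is invented for illustration, and model is assumed to be any pydantic-ai model string):

from pydantic_ai import Agent

model = "openai:gpt-4o"  # assumption: substitute your own model string
user_input = "What is 12 times 7?"

# Classifier call, same normalisation as before
label = Agent(model).run_sync(
    f'Classify this question as exactly one word: "math" or "trivia". Reply with only the single word.\n\nQuestion: {user_input}'
).output.strip().strip(".").lower()

if label == "math":
    prompt = f"Solve this math problem step by step: {user_input}"   # prompt A
elif label == "trivia":
    prompt = f"Answer this trivia question briefly: {user_input}"    # prompt B
else:
    prompt = f"Answer this question as best you can: {user_input}"   # prompt C, the catch-all

answer = Agent(model).run_sync(prompt).output
print(answer)

Pulling the prompt into a variable keeps the second LLM call in one place: every branch goes through the same call path, and there's exactly one spot to add logging or error handling.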
A prompt that tries to handle math AND trivia AND poetry AND code questions in one go is a worse prompt for any of them. Specialised prompts let each branch do one job well.
When the routing is deterministic and simple, a Python if is faster and more reliable than letting the LLM decide. The LLM might pick the wrong tool. Your if won't.
Reach for tool-based routing (week 3) when the routing logic is too complex to encode in if/elif cleanly.
Always validate that the label is in your closed set. The model will occasionally output "Math" (capitalised), "math." (with period), or even "This is a math question." (full sentence). Normalise:
category = result.output.strip().strip(".").lower()
if category not in ("math", "trivia"):
    # fallback or retry
    category = "trivia"  # safe default
Next lesson generalises this validation step.
For debugging, always print or log which branch ran. When something goes wrong, you want to know whether the classifier was wrong or the branch's prompt was wrong.
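A minimal sketch with the standard logging module, assuming the same model and prompts as above; recording the classifier's raw output next to the normalised label is what lets you tell those two failure modes apart:

import logging

from pydantic_ai import Agent

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("router")

model = "openai:gpt-4o"  # assumption: any pydantic-ai model string
user_input = "What is 12 times 7?"

raw = Agent(model).run_sync(
    f'Classify this question as exactly one word: "math" or "trivia". Reply with only the single word.\n\nQuestion: {user_input}'
).output
category = raw.strip().strip(".").lower()
# If a bug appears later, this line tells you whether the classifier
# mislabelled the input or the branch prompt mishandled it.
log.info("classifier raw=%r normalised=%r", raw, category)

if category == "math":
    log.info("branch=math")
    answer = Agent(model).run_sync(f"Solve this math problem step by step: {user_input}").output
else:
    log.info("branch=trivia")
    answer = Agent(model).run_sync(f"Answer this trivia question briefly: {user_input}").output

print(answer)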