The model doesn't see your Python function. It sees a schema — a structured description of what the tool is called, what it does, and what arguments it takes. That's all it needs to decide whether to call it.
```python
schema = {
    "name": "add",
    "description": "Return the sum of two integers.",
    "parameters": {
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            "b": {"type": "integer"}
        },
        "required": ["a", "b"]
    }
}
```

That's just JSON Schema with a name and description wrapped around it.
Exactly. Pydantic-AI generates this from your function signature + docstring automatically — but the schema is what travels to the model. Type hints become "type": "integer". Docstrings become the description the model reads.
So if I write a vague docstring, the model picks the wrong tool?
Or fails to call it at all. The docstring IS your prompt to the model — "here's what this tool does, call it when X". Today's exercise: build that schema dict by hand for add(a, b), and assert it serialises as valid JSON with the expected keys. No LLM call — pure Python.
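For reference, the automatic path looks something like this. A minimal sketch assuming pydantic-ai's `Agent` and its `tool_plain` decorator, with a placeholder model name:

```python
from pydantic_ai import Agent

agent = Agent("openai:gpt-4o")  # placeholder model name

@agent.tool_plain
def add(a: int, b: int) -> int:
    """Return the sum of two integers."""
    return a + b
```

The decorator reads the signature and docstring and emits the same schema dict shown above; nothing else about the function travels to the model.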
Three parts:
| Field | Purpose |
|---|---|
| `name` | The function the model invokes (e.g., `add`) |
| `description` | Plain-English explanation of what the tool does |
| `parameters` | JSON Schema describing argument types |
The model sees the schema as text in its prompt and uses it to decide whether/how to call the tool.
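Concretely, the schema rides along inside the request payload. A sketch of the wrapper shape OpenAI's chat-completions API uses for its `tools` parameter (assumed here for illustration; other providers nest the same fields under different keys):

```python
# "schema" is the dict built above; the wrapper shape is OpenAI's
# chat-completions "tools" format, assumed for illustration.
request = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [{"role": "user", "content": "What is 2 + 3?"}],
    "tools": [{"type": "function", "function": schema}],
}
```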
Here's the `parameters` object again, this time with per-argument descriptions:

```json
{
    "type": "object",
    "properties": {
        "a": {"type": "integer", "description": "first addend"},
        "b": {"type": "integer", "description": "second addend"}
    },
    "required": ["a", "b"]
}
```

- `type: object` — the arguments are a dict
- `properties` — keys + their types
- `required` — which keys must be present

Python types map to JSON Schema like this:
| Python | JSON Schema |
|---|---|
| `int` | `"integer"` |
| `float` | `"number"` |
| `str` | `"string"` |
| `bool` | `"boolean"` |
| `list[int]` | `{"type": "array", "items": {"type": "integer"}}` |
| `dict` | `"object"` |
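To exercise that mapping, here's a hand-built `parameters` object for a hypothetical tool with a richer signature, `sum_scores(scores: list[int], label: str)`:

```python
# Hypothetical tool: sum_scores(scores: list[int], label: str).
# Each property follows the mapping table above.
params = {
    "type": "object",
    "properties": {
        "scores": {"type": "array", "items": {"type": "integer"}},
        "label": {"type": "string"},
    },
    "required": ["scores", "label"],
}
```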
Because when something goes wrong, you debug at the schema level. "Why isn't the model calling my tool?" → check the schema. "Why is the model passing strings instead of ints?" → check the parameter types. The library auto-generates the schema from your signature + docstring; you just need to know what that auto-generation produces.
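One practical debugging move: validate a suspect argument payload against the `parameters` schema directly. A sketch using the third-party `jsonschema` package (assumed installed):

```python
from jsonschema import ValidationError, validate

args = {"a": "2", "b": 3}  # model passed a string where an int belongs

try:
    validate(instance=args, schema=schema["parameters"])
except ValidationError as err:
    print(err.message)  # e.g. "'2' is not of type 'integer'"
```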
You won't usually do this in production (the decorator handles it). But doing it once by hand fixes the mental model. Construct the dict, json.dumps it (it should serialise without error), and assert the expected keys are present.
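A minimal sketch of the exercise, standard library only:

```python
import json

# Build the tool schema by hand for add(a, b).
schema = {
    "name": "add",
    "description": "Return the sum of two integers.",
    "parameters": {
        "type": "object",
        "properties": {
            "a": {"type": "integer"},
            "b": {"type": "integer"},
        },
        "required": ["a", "b"],
    },
}

# It should serialise without error...
payload = json.dumps(schema)

# ...and round-trip with the expected keys intact.
loaded = json.loads(payload)
assert loaded.keys() == {"name", "description", "parameters"}
assert loaded["parameters"]["required"] == ["a", "b"]
```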