The Runner is the Orchestrator and we want to set up Guardrails
Alright — here’s the 8-line mental model. If you remember this, the whole Agents SDK stops feeling mysterious.
1. User sends input
2. Runner receives input
3. Input guardrails check the request
4. Agent sends prompt to model
5. Model decides: answer OR call a tool
6. Runner executes tool if needed
7. Model produces final response
8. Output guardrails check the response
That’s it. Everything else is implementation detail.
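The eight steps above can be sketched as a tiny loop in plain Python. This is a toy with hypothetical names, not the real Agents SDK API — the point is that the Runner, not the agent, owns this control flow:

```python
# Toy sketch of the runner loop; every name here is hypothetical.
def run_agent(user_input, model, tools, input_guardrails, output_guardrails):
    # Steps 2-3: runner receives input, input guardrails check it
    for guardrail in input_guardrails:
        if guardrail(user_input):              # tripwire fires
            return "Request blocked by input guardrail."

    # Steps 4-7: loop until the model produces a final answer
    messages = [("user", user_input)]
    while True:
        decision = model(messages)             # step 5: answer OR tool call
        if decision["type"] == "tool_call":
            tool = tools[decision["name"]]     # step 6: runner executes tool
            messages.append(("tool", tool(decision["args"])))
        else:
            answer = decision["content"]       # step 7: final response
            break

    # Step 8: output guardrails check the response
    for guardrail in output_guardrails:
        if guardrail(answer):
            return "Response blocked by output guardrail."
    return answer
```

Notice the agent itself contributes no control flow here — it is just the `model`, `tools`, and guardrail lists passed in.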
https://github.com/Chainlit/chainlit [ Build Conversational AI in minutes ]
Mapping that to your code
Your Chainlit app starts the process:
result = Runner.run_streamed(nutrition_agent, message.content)
That triggers the whole loop.
Step 1 — User message
Chainlit receives:
"How many calories are in bananas?"
Step 2 — Runner starts the system
Runner creates the execution context and begins the agent loop.
Step 3 — Input guardrails run
Your new guardrail checks:
Is this nutrition related?
Example blocked input:
Write a Python script
Tripwire fires → agent never runs.
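A minimal version of that guardrail can be sketched with a keyword check. This is a stand-in only — your real guardrail might call a small classifier model instead — and every name below is hypothetical:

```python
# Toy input guardrail: a keyword check standing in for the real one.
NUTRITION_KEYWORDS = {"calorie", "calories", "protein", "vitamin",
                      "nutrition", "diet", "food", "banana", "bananas"}

def nutrition_tripwire(user_input: str) -> bool:
    """Return True (tripwire fires) when the request is off-topic."""
    words = set(user_input.lower().split())
    return not (words & NUTRITION_KEYWORDS)
```

"Write a Python script" shares no words with the keyword set, so the tripwire fires and the agent never runs; "How many calories are in bananas?" passes through.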
Step 4 — Agent sends prompt to model
Something like:
You are a helpful nutrition assistant.
User: How many calories are in bananas?
Step 5 — Model decides what to do
The model may say:
call calorie_lookup_tool(query="banana")Step 6 — Runner executes the tool
Runner runs your function:
calorie_lookup_tool("banana")
Which queries ChromaDB.
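In the lab this tool queries ChromaDB; a sketch with an in-memory table playing the role of the vector store looks like this (the dict-based "retrieval" is an assumption for illustration, not the real implementation):

```python
# Stand-in for calorie_lookup_tool: a small in-memory table
# replaces the ChromaDB query used in the actual lab.
CALORIE_TABLE = {
    "banana": "Bananas contain about 89 calories per 100g.",
    "apple": "Apples contain about 52 calories per 100g.",
}

def calorie_lookup_tool(query: str) -> str:
    # Crude "retrieval": return the fact for any known food in the query.
    q = query.lower()
    for food, fact in CALORIE_TABLE.items():
        if food in q:
            return fact
    return "No calorie data found."
```

The Runner calls this function on the model's behalf and feeds the returned string back into the conversation as a tool result.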
Step 7 — Model finishes the answer
The model receives the tool result and produces the response.
Example:
Bananas contain about 89 calories per 100g.
Step 8 — Output guardrails (optional)
If you add them later, they could block:
- unsafe advice
- hallucinated medical claims
- policy violations
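An output guardrail for those cases could start as simply as a phrase blocklist. This is a toy (a production guardrail would likely use an LLM-based checker); the phrase list is illustrative only:

```python
# Toy output guardrail: block responses containing risky phrases.
BLOCKED_PHRASES = ("cures", "guaranteed to treat",
                   "stop taking your medication")

def output_tripwire(response: str) -> bool:
    """Return True (tripwire fires) when the response looks unsafe."""
    text = response.lower()
    return any(phrase in text for phrase in BLOCKED_PHRASES)
```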
The entire system in one diagram
User
↓
Chainlit UI
↓
Runner
↓
Input Guardrails
↓
Agent (LLM reasoning)
↓
Tool calls
↓
Runner executes tools
↓
Agent finishes answer
↓
Output Guardrails
↓
Response streamed to UI
The big realization
The agent itself is actually very small.
Your nutrition agent is basically just:
instructions
+ tools
+ guardrails
Everything else — execution, loops, streaming, tool routing — is handled by Runner.
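That "instructions + tools + guardrails" composition can be made concrete with a tiny dataclass. This is a hypothetical shape, not the SDK's real Agent class, but it captures the idea that the agent is data while the Runner is behavior:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical agent definition; the real Agents SDK Agent class
# differs, but the composition is the same.
@dataclass
class AgentSpec:
    instructions: str
    tools: dict[str, Callable] = field(default_factory=dict)
    input_guardrails: list[Callable] = field(default_factory=list)
    output_guardrails: list[Callable] = field(default_factory=list)

nutrition_agent = AgentSpec(
    instructions="You are a helpful nutrition assistant.",
    tools={"calorie_lookup_tool": lambda q: f"lookup({q})"},
)
```

Swapping in different instructions, tools, or guardrails gives you a different agent without touching the orchestration layer at all.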
Why this matters for your lab
Most agent labs just tweak three knobs:
1️⃣ tools
2️⃣ guardrails
3️⃣ agent instructions
The rest of the system stays the same.
Final encouragement
You’re now at the point where you understand:
- RAG
- tools
- agent orchestration
- guardrails
- streaming UI
- deployment
That’s already a complete modern AI agent architecture.
And yes — goodness gracious is an appropriate response when the orchestration layer finally reveals itself. 😄