Why deploy Pydantic Agents on Render?
Pydantic AI is a Python framework for building AI agents and LLM-powered applications using Pydantic for data validation and type safety. It provides structured tools for creating multi-step AI pipelines with built-in support for function calling, dependency injection, and integration with observability platforms like Logfire.
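Because each stage's output is declared as a Pydantic model, a malformed LLM response fails validation at the stage boundary instead of propagating downstream. A minimal sketch of what one stage's output schema might look like (the field names here are illustrative, not taken from this template):

```python
from pydantic import BaseModel, Field, ValidationError

# Hypothetical schema for one pipeline stage's answer; not the template's actual model.
class Answer(BaseModel):
    text: str
    sources: list[str] = Field(default_factory=list)
    confidence: float = Field(ge=0.0, le=1.0)

# A well-formed response validates cleanly...
answer = Answer.model_validate(
    {"text": "Use a render.yaml blueprint.", "sources": ["deploy-docs"], "confidence": 0.9}
)

# ...while an out-of-range confidence is rejected at the boundary.
try:
    Answer.model_validate({"text": "oops", "confidence": 1.5})
    rejected = False
except ValidationError:
    rejected = True
```

In Pydantic AI, a schema like this is what you would hand to an agent as its output type, so the framework retries or errors when the model's reply doesn't parse.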
This template wires together a FastAPI backend, frontend, and managed Postgres database with pgvector pre-configured for hybrid semantic search across 10,000+ documentation chunks. All service connections, database URLs, and the multi-stage AI pipeline configuration are ready to go—you just add your API keys and deploy with one click. On Render, you get managed Postgres with automatic backups, seamless service-to-service networking, and the ability to spin up preview environments to test pipeline changes without touching production.
Architecture
What you can build
After deploying, you'll have a working Q&A assistant that answers questions about Render's documentation using an 8-stage AI pipeline with hybrid search, claim verification, and automatic quality checks. The system tracks costs per query and sends traces to Logfire, giving you a concrete example of how to instrument and monitor a multi-model LLM application in production. You can ask questions, watch the pipeline execute in real time, and inspect the observability data to understand patterns you'd apply to your own AI systems.
Key features
- Hybrid RAG search: Combines pgvector semantic embeddings with BM25 full-text search for document retrieval.
- Multi-model pipeline: Orchestrates Claude Sonnet and GPT-4o-mini agents across 8 stages using Pydantic AI with structured, typed outputs.
- Dual-rater evaluation: Runs parallel quality assessment with both OpenAI and Anthropic models using asyncio.gather() for answer verification.
- Logfire observability: Auto-instruments LLM calls, database queries, and HTTP requests with per-stage cost attribution and custom metrics.
- Iterative refinement loop: A quality gate automatically regenerates low-scoring answers, feeding evaluator feedback back in, until they pass the quality threshold.
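Hybrid search means merging two ranked lists: semantic hits from pgvector and keyword hits from BM25. One common way to combine them is reciprocal rank fusion; the template's exact merging strategy isn't specified here, so treat this as a generic sketch:

```python
def reciprocal_rank_fusion(*ranked_lists: list[str], k: int = 60) -> list[str]:
    """Merge ranked document-ID lists; docs near the top of either list score highest."""
    scores: dict[str, float] = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked):
            # Each list contributes 1/(k + rank + 1); k damps the effect of rank 0.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from the two retrievers.
vector_hits = ["doc-a", "doc-b", "doc-c"]
bm25_hits = ["doc-b", "doc-d", "doc-a"]
merged = reciprocal_rank_fusion(vector_hits, bm25_hits)
```

Documents that appear in both lists (like doc-b here) accumulate score from each, so they outrank documents found by only one retriever.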
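The dual-rater evaluation runs both providers' verdicts concurrently with asyncio.gather(), so total latency is the slower of the two calls rather than their sum. A minimal sketch with stub raters standing in for the real OpenAI and Anthropic API calls:

```python
import asyncio

# Hypothetical raters; in the template these would be real agent calls to each provider.
async def rate_with_openai(answer: str) -> float:
    await asyncio.sleep(0)  # placeholder for the API round trip
    return 0.8

async def rate_with_anthropic(answer: str) -> float:
    await asyncio.sleep(0)
    return 0.9

async def evaluate(answer: str, threshold: float = 0.7) -> tuple[float, bool]:
    # Both raters run concurrently; gather preserves argument order in its results.
    scores = await asyncio.gather(rate_with_openai(answer), rate_with_anthropic(answer))
    mean = sum(scores) / len(scores)
    return mean, mean >= threshold

score, passed = asyncio.run(evaluate("Deploy with a render.yaml blueprint."))
```

Averaging the two scores is one possible aggregation; the template may weight or combine them differently.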
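The refinement loop in the last feature above reduces to a bounded generate-evaluate cycle that feeds rater feedback back into the generator. A sketch under assumed interfaces (both function names and the threshold are placeholders, not the template's actual API):

```python
def refine_until_passing(generate, evaluate, threshold=0.7, max_rounds=3):
    """Regenerate with feedback until the score clears the threshold or rounds run out."""
    feedback = None
    answer, score = "", 0.0
    for _ in range(max_rounds):
        answer = generate(feedback)
        score, feedback = evaluate(answer)
        if score >= threshold:
            break
    return answer, score

# Stub generator and evaluator: the first draft fails, the second passes.
calls = {"n": 0}
def generate(feedback):
    calls["n"] += 1
    return f"draft v{calls['n']}"
def evaluate(answer):
    return (0.9 if answer == "draft v2" else 0.5), "cite the pricing docs"

answer, score = refine_until_passing(generate, evaluate)
```

The round cap matters in production: without it, a persistently low-scoring question would loop indefinitely and run up API costs.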
Use cases
- DevOps engineer builds observable RAG pipeline with full cost tracking per query
- Platform team implements multi-model evaluation using parallel OpenAI and Anthropic agents
- Backend developer creates verified Q&A system with claim extraction and accuracy checks
- SRE instruments multi-stage LLM pipeline with distributed tracing and custom metrics
What's included
| Service | Type | Purpose |
|---|---|---|
| logfire-render-api | Web Service | Handles API requests and business logic |
| logfire-render-frontend | Web Service | Serves the user interface |
| logfire-render-db | PostgreSQL | Primary database |
Prerequisites
- OpenAI API Key: API key for OpenAI services used for embeddings, GPT-4o-mini processing, and quality evaluation.
- Anthropic API Key: API key for Anthropic's Claude models used for answer generation and technical accuracy verification.
- Logfire Write Token: Token for sending traces, metrics, and logs to Logfire for AI observability and monitoring.
- Logfire Read Token: Token for reading logs and traces from Logfire to display in the application's UI.
Next steps
- Open the frontend URL and ask 'How do I deploy a Node.js app on Render?' — You should see real-time progress through all 8 pipeline stages and receive a detailed answer with sources within 30 seconds
- Configure your Logfire dashboard at logfire.pydantic.dev and run 3-5 test questions — You should see distributed traces showing each pipeline stage, token costs per request, and quality scores from dual AI evaluators
- Test the hybrid search by asking a pricing question like 'How much does a Starter plan cost?' — You should see the answer cite specific Render documentation chunks and display the total cost for that query in the response metrics