Flue

A one-click Render template for Flue: webhook-triggered TypeScript agents with structured outputs, SSE streaming, and a single render.yaml Blueprint.

Why deploy Flue on Render?

Flue is a runtime-agnostic TypeScript framework for building headless, programmable AI agents. It provides an agent harness that lets you define webhook-triggered agents with typed schemas and conversation persistence, similar to how you'd work with Claude Code or Codex.

This template pre-configures two production-ready Flue agents (translation and conversational assistant) with the correct build pipeline, health checks, and Hono server bindings—all wired together in a single Blueprint that deploys with one click. Instead of manually setting up the flue build --target node toolchain, configuring dev dependencies for production builds, and wiring up PORT injection, you get a working webhook-triggered agent stack in minutes. Render's Blueprint handles the build/start commands and health check routing automatically, so you can focus on writing agent logic rather than deployment plumbing.
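For orientation, a Blueprint of this shape might look roughly like the following. This is a sketch, not the template's actual render.yaml: the field names follow Render's Blueprint spec, while the build/start commands and health check path are assumptions based on the flue build --target node output described here.

```yaml
services:
  - type: web
    name: flue-agents
    runtime: node
    # Assumes the Flue CLI bundles agents into dist/server.mjs as described.
    buildCommand: npm install && npx flue build --target node
    startCommand: node dist/server.mjs
    healthCheckPath: /health   # assumed route; check the template's actual health endpoint
    envVars:
      - key: ANTHROPIC_API_KEY
        sync: false            # prompted for at deploy time rather than stored in the Blueprint
```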

Architecture

What you can build

After deploying, you'll have a single Node.js service running two AI agents accessible via HTTP webhooks: a translation agent that returns structured JSON with translations and confidence scores, and a conversational assistant that maintains session state across requests. Both agents call Anthropic's API by default and support streaming responses and fire-and-forget execution, so you can integrate them into existing workflows or test them immediately with curl.

Key features

  • Webhook-triggered agents: Agents are exposed as HTTP endpoints at POST /agents/<agent-name>/<session-id>, with session ID-based conversation continuity.
  • Valibot schema validation: Agents can return typed, structured results validated against valibot schemas, as shown in the translate agent's translation + confidence output.
  • SSE streaming responses: Pass Accept: text/event-stream header to receive Server-Sent Events with progress updates as the agent executes.
  • Multi-provider model switching: Configure MODEL_ID environment variable to switch between Anthropic, OpenAI, or OpenRouter models without code changes.
  • Single-file Node.js bundle: flue build --target node compiles all agents into one self-contained dist/server.mjs with a built-in Hono HTTP server.
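The structured-output contract can also be checked on the caller's side without pulling in valibot. The sketch below is a dependency-free equivalent of what a schema like v.object({ translation: v.string(), confidence: v.number() }) would enforce; the field names come from this template's translate agent description and are illustrative, not Flue's documented API.

```typescript
// Shape of the translate agent's structured output as described above.
interface TranslationResult {
  translation: string;
  confidence: number;
}

// Runtime type guard mirroring the valibot schema's checks:
// an object with a string `translation` and a numeric `confidence`.
function isTranslationResult(value: unknown): value is TranslationResult {
  if (typeof value !== "object" || value === null) return false;
  const record = value as Record<string, unknown>;
  return (
    typeof record.translation === "string" &&
    typeof record.confidence === "number"
  );
}
```

A guard like this is useful when the webhook response crosses a service boundary and you can't assume the agent's schema validation ran on the data you received.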

Use cases

  • Backend dev adds a translation microservice to their Render stack
  • Startup deploys a conversational support agent with session memory
  • Solo founder ships a text summarization API without managing infrastructure
  • Team prototypes webhook-triggered AI agents before building custom auth

What's included

Service       Type          Purpose
flue-agents   Web Service   Application service

Prerequisites

  • Anthropic API Key: Your API key for accessing Anthropic's Claude models to power the AI agents.

Next steps

  1. Test the translate agent by sending a curl request to POST https://<your-service>.onrender.com/agents/translate/demo with JSON body {"text": "Hello world", "language": "French"} — You should receive a JSON response with a translation field containing "Bonjour le monde" and a confidence field
  2. Test the assistant agent with a follow-up conversation by sending two requests to POST https://<your-service>.onrender.com/agents/assistant/session-1, first with {"message": "What is the capital of Japan?"} then {"message": "And how many people live there?"} — The second response should reference Tokyo from the first message, confirming conversation continuity
  3. Configure a production model by adding MODEL_ID environment variable in your Render service settings (e.g., openai/gpt-4o or openrouter/moonshotai/kimi-k2.6 with the matching API key) — After redeploying, the translate agent should still return valid translations, confirming the new model is active
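If you opt into streaming with the Accept: text/event-stream header, the steps above deliver their output as SSE frames instead of a single JSON body. A minimal sketch of extracting the data payloads from a received chunk of SSE text, assuming each event carries a single data: line (the exact payload format is up to the agent):

```typescript
// Pull the payload out of each `data:` line in a chunk of SSE text.
// Blank lines and other fields (event:, id:, retry:) are ignored here;
// a production client should buffer partial lines across chunks.
function parseSseData(chunk: string): string[] {
  return chunk
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trim());
}
```

For example, parseSseData('event: progress\ndata: {"step":1}\n\ndata: done\n') yields the two payload strings, which you can then JSON.parse or display as progress updates.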

Stack

nodejs
typescript

Tags

ai
ai-agent
