Hermes on Render

Deploy Hermes Agent on Render with persistent storage for skills and sessions. Self-improving AI agent with browser-based dashboard setup.

Why deploy Hermes on Render?

Hermes Agent is a self-improving AI agent framework from Nous Research that can learn new skills, maintain persistent memory, and connect to chat platforms like Telegram, Discord, and Slack. This Render template deploys Hermes as a Docker web service with persistent disk storage, solving the problem of stateful agent deployment by preserving skills, sessions, and memories across restarts and upgrades.

This template deploys Hermes Agent as a single Docker service with a 5GB persistent disk pre-configured for skills, sessions, memories, and config files—state that survives redeployments without manual volume mounting or backup scripts. Instead of wiring up the dashboard, gateway process, and persistent storage yourself, you get a working Hermes instance with one click that pins a specific release for reproducible builds. Render's persistent disks mean your agent's learned behaviors and API keys persist across deploys, and the Standard plan gives you enough resources to run both the dashboard and gateway in a single container.
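The service-plus-disk shape described above can be expressed in a Render blueprint. The sketch below is illustrative, not the template's exact manifest: the service name, mount path, and image reference are assumptions, and the pinned release tag is a placeholder.

```yaml
# Sketch of a render.yaml for a single Docker web service with a
# persistent disk (field values here are assumptions, not the
# template's actual manifest).
services:
  - type: web
    name: hermes
    runtime: image
    image:
      url: <hermes-image>:<pinned-release-tag>  # pinned release, not :latest
    plan: standard          # enough to run dashboard + gateway in one container
    disk:
      name: hermes-data
      mountPath: /data      # skills, sessions, memories, config
      sizeGB: 5
```

Because the disk is declared in the blueprint, Render reattaches it on every deploy, which is what lets skills and sessions survive upgrades without manual volume mounting.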

Architecture

What you can build

After deploying, you'll have a Hermes AI agent running on Render with a browser-based dashboard where you can configure API keys, chat with the agent, and connect it to Telegram, Discord, or Slack. The agent stores its sessions, learned skills, and memories on a persistent disk, so your configuration and conversation history survive redeployments. You'll need to add at least one LLM provider key through the dashboard before the agent can respond.

Key features

  • Persistent disk storage: Stores skills, sessions, memories, API keys, and config on a 5GB persistent disk that survives redeploys and upgrades.
  • Web dashboard with TUI: Exposes a browser-based dashboard with full terminal UI over xterm.js for configuration, chat, and gateway management.
  • Multi-platform chat gateway: Connects to Telegram, Discord, and Slack via long-poll connections with configurable bot tokens per platform.
  • OpenAI-compatible API server: Optional API server exposes /v1/chat/completions endpoint with bearer token authentication for external HTTP clients.
  • Pinned release deploys: Uses a specific Hermes release tag for reproducible deployments rather than floating latest tags.
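The optional OpenAI-compatible API server listed above accepts standard `/v1/chat/completions` requests with bearer-token auth. As a sketch of the request shape (the base URL, token, and model name below are placeholders, not values from this template), a client request can be built like this:

```python
import json
import urllib.request

# Placeholders: substitute your Render service URL and the bearer token
# configured in the Hermes dashboard. Both values here are assumptions.
BASE_URL = "https://your-hermes-service.onrender.com"
API_KEY = "your-bearer-token"

def build_chat_request(message: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request (constructed, not sent)."""
    payload = {
        "model": "default",  # placeholder model name
        "messages": [{"role": "user", "content": message}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Hello, what can you do?")
print(req.full_url)
# → https://your-hermes-service.onrender.com/v1/chat/completions
```

To actually send it, pass the request to `urllib.request.urlopen(req)` (or use any OpenAI-compatible client pointed at the service URL) once the API server is enabled and a provider key is configured.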

Use cases

  • Solo developer deploys a personal AI assistant accessible via Telegram
  • Startup founder runs a Discord support bot without managing infrastructure
  • Researcher hosts a persistent agent that learns and remembers across sessions
  • Team lead sets up a Slack bot for internal task automation

What's included

  Service       Type          Purpose
  hermes-data   Web Service   Application service

Next steps

  1. Open the Hermes dashboard URL and navigate to the API Keys tab — You should see empty fields for OPENROUTER_API_KEY, ANTHROPIC_API_KEY, and other provider keys ready to configure
  2. Configure at least one LLM provider key in the API Keys tab, then check the Status tab — You should see the gateway status change to 'running' and the model field display your configured model as reachable
  3. Test the agent by opening the Chat tab and sending a simple message like 'Hello, what can you do?' — You should see a streaming response from the agent within a few seconds, confirming the LLM connection works
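Before walking through the dashboard steps above, it can help to confirm the service is reachable at all, since the first deploy can take a few minutes. A minimal readiness check (the URL is a placeholder; this just polls until the service answers any HTTP request):

```python
import time
import urllib.error
import urllib.request

def wait_until_ready(url: str, timeout: float = 120.0, interval: float = 5.0) -> bool:
    """Poll a URL until the server responds, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if resp.status < 500:
                    return True
        except urllib.error.HTTPError as exc:
            # A 4xx still means the server is up and answering requests.
            if exc.code < 500:
                return True
        except (urllib.error.URLError, OSError):
            pass  # Not listening yet; keep polling.
        time.sleep(interval)
    return False

# Placeholder URL -- substitute your service's Render URL:
# wait_until_ready("https://your-hermes-service.onrender.com/")
```

This only verifies the dashboard is serving; the agent itself will not respond to chat until an LLM provider key is configured in step 2.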

Resources

Stack

docker

Tags

ai-agent
ai
