How do I integrate my AI agent with Slack or Discord as a bot?
Integrating AI agents with chat platforms: Slack and Discord bot architecture
Building a conversational AI agent is only half the challenge. You also need to connect it to platforms where users actually communicate, which requires understanding platform-specific integration patterns. Slack primarily uses HTTP event callbacks with an optional WebSocket mode (Socket Mode), while Discord mandates Gateway WebSocket connections for real-time event streams. This guide explains core integration concepts you can apply to any AI agent stack, whether you're deploying a custom LLM application, a framework-based assistant, or a third-party AI service wrapper.
Understanding bot architecture: Slack vs. Discord
Slack bot architecture: Slack offers multiple integration approaches. The simplest integrations use incoming webhooks to post messages into channels, while the HTTP Events API delivers event payloads to a stateless, publicly reachable endpoint you host whenever specific triggers occur. For bidirectional communication without a public endpoint, the Events API with Socket Mode establishes a WebSocket connection that Slack pushes events through, eliminating the need for publicly accessible HTTP endpoints. Production Slack bots typically combine both directions: Socket Mode (or the HTTP Events API) for receiving events and the Web API for sending responses.
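As a sketch of that combination, assuming Slack's Bolt for Python with Socket Mode (any Slack SDK works); the environment variable names are conventional rather than required:

```python
# A minimal Socket Mode sketch, assuming slack_bolt; SLACK_BOT_TOKEN (xoxb-) and
# SLACK_APP_TOKEN (xapp-) are the conventional environment variable names.
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ["SLACK_BOT_TOKEN"])

@app.event("app_mention")
def handle_mention(event, say):
    # say() posts the reply back through the Web API.
    say(f"Hi <@{event['user']}>, let me look into that...")

if __name__ == "__main__":
    # Opens the WebSocket to Slack; no public HTTP endpoint required.
    SocketModeHandler(app, os.environ["SLACK_APP_TOKEN"]).start()
```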
Discord bot architecture: Discord requires Gateway WebSocket connections for all event reception. Your bot authenticates via token, establishes a persistent WebSocket to Discord's Gateway API, and sends heartbeats at the interval Discord specifies to keep the connection alive. Discord pushes all subscribed events through this single connection following an IDENTIFY, READY, then event-dispatch flow.
Deployment implications: Slack's webhook approach allows stateless, horizontally scalable deployments where multiple instances handle requests independently. Discord's persistent connection requirement creates stateful services where connection affinity matters. For resource planning on platforms like Render, Discord bots need services configured as background workers with health checks that account for WebSocket liveness rather than HTTP responsiveness.
Here's a simplified example demonstrating how a Slack webhook endpoint might handle an incoming message event:
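The sketch below assumes Flask and the slack_sdk Web API client; handle_with_ai() is a placeholder for your own agent call:

```python
# A simplified Slack Events API handler, assuming Flask and slack_sdk;
# handle_with_ai() is a placeholder for your own agent call.
import os
from flask import Flask, jsonify, request
from slack_sdk import WebClient

app = Flask(__name__)
slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])

def handle_with_ai(text: str) -> str:
    # Placeholder: call your AI agent here.
    return f"Echo: {text}"

@app.route("/slack/events", methods=["POST"])
def slack_events():
    payload = request.get_json()

    # Slack verifies the Request URL by sending a one-time challenge value.
    if payload.get("type") == "url_verification":
        return jsonify({"challenge": payload["challenge"]})

    event = payload.get("event", {})
    # Ignore bot-authored messages (including our own) to avoid reply loops.
    if event.get("type") == "message" and not event.get("bot_id"):
        reply = handle_with_ai(event.get("text", ""))
        slack.chat_postMessage(channel=event["channel"], text=reply)

    # Acknowledge quickly; Slack retries events that aren't acknowledged within ~3 seconds.
    return "", 200
```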
This pattern demonstrates a basic webhook flow that can be adapted for your specific AI agent implementation.
Handling events and routing
Both platforms emit structured event payloads containing event type, metadata, and content. Slack events include type, event (nested event object), team_id, and event_time. Discord Gateway events follow an op (opcode) structure with t (event type) and d (data payload).
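Abbreviated payload shapes, shown here as Python dicts with made-up IDs for illustration, look roughly like this:

```python
# Illustrative, abbreviated payload shapes (field values are made up).
slack_event = {
    "type": "event_callback",
    "team_id": "T0123456",
    "event_time": 1700000000,
    "event": {"type": "message", "channel": "C0123456", "user": "U0123456", "text": "hello"},
}

discord_event = {
    "op": 0,                # 0 = Dispatch
    "t": "MESSAGE_CREATE",  # event type
    "s": 42,                # sequence number
    "d": {"channel_id": "987654321098765432", "author": {"id": "123456789012345678"}, "content": "hello"},
}
```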
Critical event types for AI agents:
- Message events: message (Slack), MESSAGE_CREATE (Discord) - primary conversation input
- Mention events: app_mention (Slack), parsing <@bot_id> in Discord messages - explicit bot invocation
- Reaction events: reaction_added (Slack), MESSAGE_REACTION_ADD (Discord) - feedback mechanisms
- Command events: Slash commands (both platforms) - structured input with defined parameters
AI agents often require seconds to generate responses. Synchronous handling blocks event loops and risks platform timeouts. The pattern: immediately acknowledge receipt with HTTP 200, queue the AI processing task to a background worker, then post responses via platform APIs separately.
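A sketch of that flow, assuming FastAPI and an in-process asyncio queue (a real deployment might use a task queue or managed queue instead); process_with_agent() and post_reply() are hypothetical placeholders for your AI call and the platform's send API:

```python
# A sketch of the acknowledge-then-queue pattern, assuming FastAPI and an
# in-process asyncio queue; process_with_agent() and post_reply() are
# hypothetical placeholders.
import asyncio
from fastapi import FastAPI, Request

app = FastAPI()
task_queue: asyncio.Queue = asyncio.Queue()

async def process_with_agent(text: str) -> str:
    # Placeholder: call your AI model here (this is the slow part).
    return f"You said: {text}"

async def post_reply(channel: str, text: str) -> None:
    # Placeholder: chat.postMessage (Slack) or POST /channels/{id}/messages (Discord).
    print(f"[{channel}] {text}")

@app.post("/events")
async def receive_event(request: Request):
    payload = await request.json()
    # Acknowledge immediately so the platform doesn't time out and retry.
    await task_queue.put(payload)
    return {"ok": True}

async def worker():
    # Background consumer: routes events by type and runs the slow AI call
    # outside the request/response cycle.
    while True:
        payload = await task_queue.get()
        event = payload.get("event", {})
        if event.get("type") in ("message", "app_mention"):
            reply = await process_with_agent(event.get("text", ""))
            await post_reply(event.get("channel", ""), reply)
        task_queue.task_done()

@app.on_event("startup")
async def start_worker():
    asyncio.create_task(worker())
```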
This minimal router pattern illustrates the concept. You'll need to add error handling and specific business logic for your bot.
Managing conversation context
Your AI agent benefits from conversation history to maintain coherent multi-turn dialogues, which means your bot must actively retrieve, store, and manage context.
Context storage strategies:
- In-memory storage: Fast access but lost on service restarts. Suitable for simple interactions or when using persistent disks.
- Database persistence: PostgreSQL, MongoDB, or Redis store conversation history, enabling cross-session context and recovery after deployments. Render's managed PostgreSQL or managed Key Value stores can safely store your bot's data (see the sketch after this list).
- External AI context services: Vector databases store semantic conversation history enabling retrieval-augmented generation.
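As a sketch of the database approach, assuming the redis-py client; the key naming scheme and the 50-message cap are illustrative choices:

```python
# Per-channel conversation history in Redis, assuming the redis-py client;
# the key naming scheme and 50-message cap are illustrative choices.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def append_message(channel_id: str, role: str, content: str) -> None:
    # Store each turn as JSON in a per-channel list, capped at the last 50 entries.
    key = f"history:{channel_id}"
    r.rpush(key, json.dumps({"role": role, "content": content}))
    r.ltrim(key, -50, -1)

def load_history(channel_id: str) -> list:
    return [json.loads(item) for item in r.lrange(f"history:{channel_id}", 0, -1)]
```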
You need to handle context window limits by implementing token counting and a truncation strategy (sliding window, summarization, or importance-based filtering) sized to your chosen AI model's context limit.
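A sliding-window sketch, assuming tiktoken's cl100k_base encoding for token counting (swap in your model's tokenizer); the 3,000-token budget is illustrative:

```python
# Sliding-window truncation, assuming tiktoken's cl100k_base encoding for
# token counting; the 3,000-token budget is illustrative.
import tiktoken

ENC = tiktoken.get_encoding("cl100k_base")

def truncate_history(messages: list, max_tokens: int = 3000) -> list:
    """Keep the most recent messages that fit within the token budget."""
    kept, total = [], 0
    for msg in reversed(messages):              # walk newest to oldest
        tokens = len(ENC.encode(msg["content"]))
        if total + tokens > max_tokens:
            break
        kept.append(msg)
        total += tokens
    return list(reversed(kept))                 # restore chronological order
```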
Deploying always-on bot services
Unlike stateless APIs, bots require continuously running processes to maintain WebSocket connections (Discord) or respond to webhooks (Slack).
Render deployment pattern:
Configure your services as Web Services (for Slack bots) or Background Workers (for Discord Gateway connections). Key considerations:
- Health checks: Verify that the bot is actually working, not just that the process is running. For Discord bots, check Gateway connection status and the last heartbeat timestamp (see the sketch after this list).
- Auto-deploy: Enable continuous deployment but implement graceful shutdown handlers.
- Environment variables: Store tokens and credentials securely using Render's environment variables feature.
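A liveness-check sketch for a Discord worker might compare the last heartbeat acknowledgement against the expected interval; the shared state dict and thresholds here are illustrative assumptions:

```python
# A liveness check for a Gateway bot; `state` is assumed to be updated by your
# Gateway loop whenever a heartbeat ACK (opcode 11) arrives.
import time

HEARTBEAT_INTERVAL = 41.25  # seconds; typical value Discord sends in HELLO

def gateway_healthy(state: dict, max_missed: int = 3) -> bool:
    # Healthy if we've seen a heartbeat ACK within the last few intervals.
    last_ack = state.get("last_heartbeat_ack", 0.0)
    return (time.time() - last_ack) < HEARTBEAT_INTERVAL * max_missed
```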
Required bot permissions:
Your Slack bot needs OAuth scopes (for example, app_mentions:read, chat:write, and channels:history) to read and send messages. Your Discord bot requires Gateway Intents to receive events from Discord's Gateway API; reading arbitrary message text additionally requires the privileged message content intent.
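For Discord, a sketch using discord.py (an illustrative client choice; any Gateway client works) to request intents looks like this; the message content intent must also be enabled in the developer portal:

```python
# Requesting Gateway Intents with discord.py (an illustrative client choice).
import discord

intents = discord.Intents.default()
intents.message_content = True  # privileged intent: required to read message text

client = discord.Client(intents=intents)
# client.run(DISCORD_BOT_TOKEN) starts the Gateway connection with these intents.
```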
Error handling and production reliability
Production bot integrations implement multi-layer error handling: network failures, API rate limits, malformed events, and AI service timeouts.
Rate limiting: Both Slack and Discord enforce rate limits. Implement exponential backoff with jitter for retries and respect Retry-After headers returned by the platforms.
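A retry sketch that honors Retry-After and falls back to exponential backoff with jitter; the use of requests and the send_with_backoff() wrapper are illustrative, not platform SDK specifics:

```python
# Honoring Retry-After with exponential backoff and jitter; the use of
# requests and send_with_backoff() are illustrative, not SDK specifics.
import random
import time
import requests

def send_with_backoff(url: str, payload: dict, max_attempts: int = 5):
    for attempt in range(max_attempts):
        resp = requests.post(url, json=payload)
        if resp.status_code != 429:
            return resp
        # Prefer the platform's Retry-After header; otherwise back off exponentially.
        delay = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay + random.uniform(0, 0.5))  # jitter avoids thundering herds
    raise RuntimeError("still rate limited after retries")
```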
Connection resilience for Discord: Implement reconnection logic that handles connection closures appropriately. Track session_id and sequence numbers for session resumption to minimize missed events.
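A reconnection sketch, assuming the websockets library alongside tenacity; the heartbeat loop and the RESUME payload itself are elided, but session_id and sequence tracking are shown:

```python
# Gateway reconnection with tenacity, assuming the websockets library; the
# heartbeat loop and RESUME payload are elided, but session_id and sequence
# tracking (needed for resumption) are shown.
import asyncio
import json
import os
import websockets
from tenacity import retry, retry_if_exception_type, wait_random_exponential

GATEWAY_URL = "wss://gateway.discord.gg/?v=10&encoding=json"
state = {"session_id": None, "last_seq": None}

@retry(retry=retry_if_exception_type(websockets.ConnectionClosed),
       wait=wait_random_exponential(multiplier=1, max=60))
async def run_gateway():
    async with websockets.connect(GATEWAY_URL) as ws:
        async for raw in ws:
            msg = json.loads(raw)
            if msg.get("s") is not None:
                state["last_seq"] = msg["s"]   # sequence number, needed for RESUME
            if msg["op"] == 10:                # HELLO: identify (heartbeat loop elided)
                await ws.send(json.dumps({"op": 2, "d": {
                    "token": os.environ["DISCORD_BOT_TOKEN"],
                    "intents": 33280,          # GUILD_MESSAGES | MESSAGE_CONTENT (example)
                    "properties": {"os": "linux", "browser": "bot", "device": "bot"},
                }}))
            elif msg.get("t") == "READY":
                state["session_id"] = msg["d"]["session_id"]
            # dispatch other events to your handler or queue here

if __name__ == "__main__":
    asyncio.run(run_gateway())
```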
This demonstrates the reconnection pattern using tenacity's declarative retry decorator; production implementations need additional error handling and logging for other error types.
Adapting these patterns to your AI agent
These integration patterns apply regardless of your AI stack. Build a platform-agnostic message interface that normalizes Slack and Discord events into common structures. Your AI agent processes normalized messages without platform awareness.
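A sketch of such a normalization layer; the field names are illustrative, not a standard:

```python
# A platform-agnostic message model; the field names are illustrative.
from dataclasses import dataclass

@dataclass
class NormalizedMessage:
    platform: str    # "slack" or "discord"
    channel_id: str
    user_id: str
    text: str

def from_slack(event: dict) -> NormalizedMessage:
    return NormalizedMessage("slack", event["channel"], event["user"], event.get("text", ""))

def from_discord(data: dict) -> NormalizedMessage:
    return NormalizedMessage("discord", data["channel_id"], data["author"]["id"], data.get("content", ""))
```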
Testing strategies: Use Slack's event payload examples and Discord's Gateway event examples. Implement local mock servers that replay captured event sequences. For AI agent testing, stub model calls with fixture responses.
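For the AI-stubbing piece, a sketch using pytest's monkeypatch fixture; the agent namespace and handle_mention() are illustrative stand-ins for your own modules:

```python
# Stubbing the model call with pytest's monkeypatch; the agent namespace and
# handle_mention() are illustrative stand-ins for your own modules.
import types

agent = types.SimpleNamespace(process=lambda text: "real model call")

def handle_mention(event: dict) -> str:
    return agent.process(event["text"])

def test_mention_gets_reply(monkeypatch):
    # Replace the slow model call with a canned fixture response.
    monkeypatch.setattr(agent, "process", lambda text: "canned reply")
    assert handle_mention({"text": "<@U123> summarize this thread"}) == "canned reply"
```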
Monitor your production bots: Track event processing latency, AI model inference duration, context retrieval time, error rates by event type, and rate limit encounters. Render's log streams enable real-time observability through integration with your preferred monitoring provider.