Agent orchestration for messaging platforms. Run AI coding agents (Claude Code, Codex, OpenClaw) in sandboxed containers, accessible via Slack, WhatsApp, or API.
No setup required - chat with our hosted agents:
- WhatsApp: Message +44 7512 972810
- Slack: Join our workspace
```
┌─────────────────┐      ┌─────────────┐      ┌──────────────────┐
│  Slack/WhatsApp │─────▶│   Gateway   │─────▶│  Worker (Agent)  │
│     Thread      │◀─────│             │◀─────│   Claude Code    │
└─────────────────┘      └──────┬──────┘      └──────────────────┘
                                │
                         ┌──────▼──────┐
                         │    Redis    │
                         │   (state)   │
                         └─────────────┘
```
Key concepts:
- Session = Thread - Each conversation thread gets its own isolated agent container (see the sketch after this list)
- Short-lived tokens - Platform/channel-specific tokens shared with workers for secure API access
- Persistent volumes - Container workspaces survive restarts and scale-to-zero events
- Network isolation - Workers run in sandboxed networks with configurable domain allowlists
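
The "Session = Thread" mapping can be pictured as a small Redis lookup. A minimal sketch, assuming ioredis and hypothetical key names; the real schema may differ:

```ts
import Redis from "ioredis";
import { randomUUID } from "node:crypto";

const redis = new Redis(process.env.QUEUE_URL ?? "redis://localhost:6379");

// Resolve (or lazily create) the session bound to a platform thread.
// Key layout and TTL are illustrative only.
async function sessionForThread(platform: string, threadId: string): Promise<string> {
  const key = `session:${platform}:${threadId}`;
  const existing = await redis.get(key);
  if (existing) return existing;

  const sessionId = randomUUID();                 // a new isolated agent container is provisioned for this ID
  await redis.set(key, sessionId, "EX", 60 * 60); // expire idle mappings after 1h
  return sessionId;
}
```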
| Mode | Use Case | Orchestration |
|---|---|---|
| Kubernetes | Production | Helm chart, auto-scaling, PVCs |
| Docker | Development | Docker Compose, local volumes |
| Local | Testing | Child processes, sandbox runtime |
```bash
# Create a new bot
npm create termos my-bot

# Configure and start
cd my-bot
cp .env.example .env  # Add your tokens
npm run dev
```

Full API documentation: termos.dev/api
```bash
curl -X POST https://your-gateway/api/v1/agents/{agentId}/sessions \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Create a Python script that...", "model": "claude-sonnet-4-20250514"}'
```

Full request body options:

```json
{
  "prompt": "Your task...",
  "model": "claude-sonnet-4-20250514",
  "workingDirectory": "/workspace/project",
  "networkConfig": {
    "allowedDomains": ["github.com", "api.openai.com"]
  },
  "mcpConfig": {
    "servers": { ... }
  }
}
```

Gateway:
- Manages platform connections (Slack Socket Mode, WhatsApp Baileys)
- Routes messages to worker containers
- Handles OAuth flows for MCP servers
- Streams responses back to users
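
As a rough sketch of that last step (streaming a worker's output back into the originating Slack thread), assuming @slack/web-api and a hypothetical chunk stream from the worker; this is not the gateway's actual code:

```ts
import { WebClient } from "@slack/web-api";

const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

// Post a placeholder in the thread, then edit it as the worker streams output.
// A real gateway would throttle updates to stay within Slack rate limits.
async function streamToThread(channel: string, threadTs: string, chunks: AsyncIterable<string>) {
  const placeholder = await slack.chat.postMessage({ channel, thread_ts: threadTs, text: "Working…" });
  let buffer = "";
  for await (const chunk of chunks) {
    buffer += chunk;
    await slack.chat.update({ channel, ts: placeholder.ts!, text: buffer });
  }
}
```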
Worker:
- Isolated containers running AI agents
- Currently supports Claude Code CLI (invocation sketched below)
- Future: Codex, OpenClaw, custom agents
- MCP server support with OAuth proxy
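
Conceptually, a worker turns a queued prompt into one Claude Code invocation inside its container. A minimal sketch; the `claude -p` print-mode flag and the promise wrapper are assumptions, not Termos internals:

```ts
import { spawn } from "node:child_process";

// Run one agent turn in the session's workspace and collect stdout.
function runAgentTurn(prompt: string, workingDirectory = "/workspace"): Promise<string> {
  return new Promise((resolve, reject) => {
    const child = spawn("claude", ["-p", prompt], { cwd: workingDirectory });
    let output = "";
    child.stdout.on("data", (d) => (output += d.toString()));
    child.on("error", reject);
    child.on("close", (code) =>
      code === 0 ? resolve(output) : reject(new Error(`claude exited with code ${code}`)),
    );
  });
}
```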
Session management:
- Redis-backed state persistence
- Thread-to-session mapping
- Automatic cleanup of idle sessions
- Turn counting to prevent infinite loops
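
Idle cleanup and turn counting map naturally onto Redis primitives. A minimal sketch with illustrative key names and limits, not the actual implementation:

```ts
import Redis from "ioredis";

const redis = new Redis(process.env.QUEUE_URL ?? "redis://localhost:6379");

const MAX_TURNS = 50;              // illustrative cap to stop runaway loops
const IDLE_TTL_SECONDS = 30 * 60;  // sessions with no activity expire after 30 min

// Returns false once a session has used up its turn budget.
async function recordTurn(sessionId: string): Promise<boolean> {
  const key = `session:${sessionId}:turns`;
  const turns = await redis.incr(key);
  await redis.expire(key, IDLE_TTL_SECONDS); // refresh the idle timer on every turn
  return turns <= MAX_TURNS;
}
```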
Features:
- Multi-platform - Slack, WhatsApp, REST API
- Sandboxed execution - Network isolation, domain allowlists
- Persistent workspaces - Git repos, files survive restarts
- MCP OAuth - Authenticate external services via home tab
- Custom workers - Extend base image with your tools
Sandboxing modes:
- Kubernetes/Docker - Each session runs in its own container with isolated filesystem, network, and resource limits. Outbound traffic is restricted by allowlists.
- Local - Workers run as child processes with optional OS-level sandboxing via the Anthropic Sandbox Runtime (controlled by SANDBOX_ENABLED=true|false|unset).
Network egress and data flow:
- Workers do not have direct internet access. All outbound requests go through the gateway’s HTTP proxy, which enforces domain allowlists.
- The gateway is the only egress point and the only component that talks to external providers.
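
A minimal sketch of the kind of hostname allowlist check such a proxy applies (illustrative only, not the actual proxy code):

```ts
// Allow exact matches and subdomains of each allowed domain.
function isAllowedHost(hostname: string, allowedDomains: string[]): boolean {
  const host = hostname.toLowerCase();
  return allowedDomains.some(
    (domain) => host === domain || host.endsWith(`.${domain}`),
  );
}

// Example: with WORKER_ALLOWED_DOMAINS=github.com,api.example.com
const allowed = ["github.com", "api.example.com"];
isAllowedHost("api.github.com", allowed); // true (subdomain of an allowed domain)
isAllowedHost("evil.com", allowed);       // false, so the proxy rejects the request
```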
MCP proxy and sensitive data:
- OAuth flows are handled by the gateway. Provider tokens and client secrets stay on the gateway side.
- Workers receive short-lived, scoped tokens and call MCP servers through the gateway proxy.
- Agents never receive Slack/WhatsApp tokens or other platform secrets.
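
A common way to implement short-lived, scoped worker tokens is a signed JWT with a tight expiry. A sketch using the jsonwebtoken package; the claim names, lifetime, and TOKEN_SIGNING_SECRET variable are assumptions, not Termos' actual token format:

```ts
import jwt from "jsonwebtoken";

// Gateway-side: mint a token a worker can present to the gateway's MCP proxy.
// The provider's real OAuth token never leaves the gateway.
function mintWorkerToken(sessionId: string, mcpServer: string): string {
  return jwt.sign(
    { sub: sessionId, scope: `mcp:${mcpServer}` },
    process.env.TOKEN_SIGNING_SECRET!, // hypothetical signing secret
    { expiresIn: "15m" },
  );
}

// Gateway-side: verify the token before proxying a worker's MCP request.
function verifyWorkerToken(token: string) {
  return jwt.verify(token, process.env.TOKEN_SIGNING_SECRET!);
}
```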
- Cloud agents, not local - Unlike OpenClaw’s local execution, Termos runs agents on managed cloud workers.
- Your own computer, preserved - Each thread gets a persistent workspace (your tools, repos, and files stay intact).
- Stateful by default - Sessions resume after restarts and scale-to-zero events.
- Optional browser control - Integrate Owletto when you want the agent to drive a browser.
Custom worker image:

```dockerfile
FROM buremba/termos-worker-base:latest

# Add your tools
RUN pip install pandas matplotlib
RUN apt-get update && apt-get install -y postgresql-client

# Add custom instructions
COPY CLAUDE.md /workspace/
```

Requirements:
- Redis (for state and queues)
- Docker or Kubernetes
- Platform tokens (Slack Bot/App tokens, or WhatsApp session)
Environment variables:

```bash
# Required
QUEUE_URL=redis://localhost:6379
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...

# Optional
DEPLOYMENT_MODE=kubernetes|docker|local
WORKER_ALLOWED_DOMAINS=github.com,api.example.com
PUBLIC_GATEWAY_URL=https://your-domain.com
```

Kubernetes (Helm):

```bash
helm repo add termos https://charts.termos.dev
helm install termos termos/termos -f values.yaml
```

Development:

```bash
# Clone and install
git clone https://github.com/termos-dev/termos
cd termos && bun install
# Development
make dev # Start gateway
./scripts/test-bot.sh "@me hello" # Test
# Run tests
bun run test
```

NPM:
- create-termos - CLI for creating new bots
- @termosdev/worker - Worker runtime
- @termosdev/gateway - Gateway server
- @termosdev/core - Shared utilities
Docker Hub:
- buremba/termos-worker-base - Worker base image
License: Apache 2.0