Termos

Agent orchestration for messaging platforms. Run AI coding agents (Claude Code, Codex, OpenClaw) in sandboxed containers, accessible via Slack, WhatsApp, or API.

Try It Now

No setup required - chat with our hosted agents.

How It Works

┌─────────────────┐     ┌─────────────┐     ┌──────────────────┐
│  Slack/WhatsApp │────▶│   Gateway   │────▶│  Worker (Agent)  │
│     Thread      │◀────│             │◀────│  Claude Code     │
└─────────────────┘     └──────┬──────┘     └──────────────────┘
                               │
                        ┌──────▼──────┐
                        │    Redis    │
                        │   (state)   │
                        └─────────────┘

Key concepts:

  • Session = Thread - Each conversation thread gets its own isolated agent container (see the sketch after this list)
  • Short-lived tokens - Platform/channel-specific tokens shared with workers for secure API access
  • Persistent volumes - Container workspaces survive restarts and scale-to-zero events
  • Network isolation - Workers run in sandboxed networks with configurable domain allowlists
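
The per-thread mapping can be pictured roughly as follows. This is an illustrative TypeScript sketch, not Termos source code: the Session shape, getOrCreateSession, and the 15-minute token lifetime are all assumptions.

// Hypothetical sketch of session-per-thread bookkeeping; all names are illustrative.
interface Session {
  sessionId: string;        // one per conversation thread
  threadId: string;         // Slack/WhatsApp thread identifier
  containerId: string;      // isolated worker container for this thread
  workspaceVolume: string;  // persistent volume that survives restarts
  workerToken: string;      // short-lived, channel-scoped token
  expiresAt: number;        // token expiry (epoch ms)
}

const sessionsByThread = new Map<string, Session>();

function getOrCreateSession(threadId: string): Session {
  const existing = sessionsByThread.get(threadId);
  if (existing) return existing;

  const session: Session = {
    sessionId: crypto.randomUUID(),
    threadId,
    containerId: `worker-${threadId}`,
    workspaceVolume: `workspace-${threadId}`,
    workerToken: crypto.randomUUID(),        // stand-in for a real signed token
    expiresAt: Date.now() + 15 * 60 * 1000,  // assumed 15-minute lifetime
  };
  sessionsByThread.set(threadId, session);
  return session;
}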

Deployment Modes

Mode         Use Case       Orchestration
Kubernetes   Production     Helm chart, auto-scaling, PVCs
Docker       Development    Docker Compose, local volumes
Local        Testing        Child processes, sandbox runtime

Quick Start (Self-Hosted)

# Create a new bot
npm create termos my-bot

# Configure and start
cd my-bot
cp .env.example .env  # Add your tokens
npm run dev

API

Full API documentation: termos.dev/api

Start a Session

curl -X POST https://your-gateway/api/v1/agents/{agentId}/sessions \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Create a Python script that...", "model": "claude-sonnet-4-20250514"}'

Configuration Options

{
  "prompt": "Your task...",
  "model": "claude-sonnet-4-20250514",
  "workingDirectory": "/workspace/project",
  "networkConfig": {
    "allowedDomains": ["github.com", "api.openai.com"]
  },
  "mcpConfig": {
    "servers": { ... }
  }
}
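
The same options written as a TypeScript type, to make the structure explicit. Field names follow the JSON above; which fields are optional and the shape of an MCP server entry are assumptions.

// Sketch of the session configuration shown above; optionality is assumed.
interface SessionConfig {
  prompt: string;
  model?: string;
  workingDirectory?: string;
  networkConfig?: {
    allowedDomains: string[];          // outbound domains the worker may reach
  };
  mcpConfig?: {
    servers: Record<string, unknown>;  // MCP server definitions (shape not shown in this README)
  };
}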

Architecture

Gateway

  • Manages platform connections (Slack Socket Mode, WhatsApp Baileys)
  • Routes messages to worker containers
  • Handles OAuth flows for MCP servers
  • Streams responses back to users (see the sketch below)
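
One way to picture the streaming step: post a placeholder message in the thread, then edit it as agent output arrives. This is an illustrative sketch using the Slack Web API, not the Termos implementation.

import { WebClient } from "@slack/web-api";

// Stream agent output into a Slack thread by updating a single message.
const slack = new WebClient(process.env.SLACK_BOT_TOKEN);

async function streamToThread(channel: string, threadTs: string, chunks: AsyncIterable<string>) {
  let text = "";
  const first = await slack.chat.postMessage({ channel, thread_ts: threadTs, text: "Working..." });
  for await (const chunk of chunks) {
    text += chunk;
    await slack.chat.update({ channel, ts: first.ts as string, text }); // edit in place as output arrives
  }
}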

Workers

  • Isolated containers running AI agents
  • Currently supports Claude Code CLI
  • Future: Codex, OpenClaw, custom agents
  • MCP server support with OAuth proxy

Session Management

  • Redis-backed state persistence
  • Thread-to-session mapping
  • Automatic cleanup of idle sessions
  • Turn counting to prevent infinite loops (see the sketch after this list)
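
A minimal sketch of what Redis-backed session state with idle cleanup and turn counting could look like. Key names, the idle timeout, and the turn cap are assumptions; only QUEUE_URL comes from this README.

import { createClient } from "redis";

// Redis-backed turn counter; idle sessions expire via key TTL.
const redis = createClient({ url: process.env.QUEUE_URL });
await redis.connect();

const IDLE_TTL_SECONDS = 30 * 60;  // assumed idle timeout
const MAX_TURNS = 50;              // assumed per-session turn cap

async function recordTurn(sessionId: string): Promise<boolean> {
  const key = `session:${sessionId}:turns`;
  const turns = await redis.incr(key);        // count this turn
  await redis.expire(key, IDLE_TTL_SECONDS);  // reset the idle clock
  return turns <= MAX_TURNS;                  // false once the loop guard trips
}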

Features

  • Multi-platform - Slack, WhatsApp, REST API
  • Sandboxed execution - Network isolation, domain allowlists
  • Persistent workspaces - Git repos, files survive restarts
  • MCP OAuth - Authenticate external services via home tab
  • Custom workers - Extend base image with your tools

Security, Sandboxing, and Privacy

Sandboxing modes:

  • Kubernetes/Docker - Each session runs in its own container with isolated filesystem, network, and resource limits. Outbound traffic is restricted by allowlists.
  • Local - Workers run as child processes with optional OS-level sandboxing via the Anthropic Sandbox Runtime (controlled by the SANDBOX_ENABLED environment variable: true, false, or unset).

Network egress and data flow:

  • Workers do not have direct internet access. All outbound requests go through the gateway’s HTTP proxy, which enforces domain allowlists (sketched after this list).
  • The gateway is the only egress point and the only component that talks to external providers.
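
The allowlist check an egress proxy applies to each outbound request can be sketched like this. WORKER_ALLOWED_DOMAINS comes from the environment variables below; the subdomain-matching rule and function names are assumptions.

// Hypothetical allowlist check for the gateway's egress proxy.
const allowedDomains = (process.env.WORKER_ALLOWED_DOMAINS ?? "")
  .split(",")
  .map((d) => d.trim())
  .filter(Boolean);

function isAllowed(targetUrl: string): boolean {
  const host = new URL(targetUrl).hostname;
  // allow exact matches and subdomains of an allowed domain (assumed policy)
  return allowedDomains.some((d) => host === d || host.endsWith(`.${d}`));
}

// With WORKER_ALLOWED_DOMAINS=github.com,api.example.com:
// isAllowed("https://api.github.com/repos") -> true
// isAllowed("https://evil.example.org")     -> false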

MCP proxy and sensitive data:

  • OAuth flows are handled by the gateway. Provider tokens and client secrets stay on the gateway side.
  • Workers receive short-lived, scoped tokens and call MCP servers through the gateway proxy (see the sketch after this list).
  • Agents never receive Slack/WhatsApp tokens or other platform secrets.
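
From the worker’s side, an MCP call through the gateway proxy might look like the sketch below. The proxy path, header, and WORKER_TOKEN variable are hypothetical; the point is that the worker only ever holds a short-lived, scoped token while provider credentials stay on the gateway.

// Hypothetical worker-side call routed through the gateway's MCP proxy.
const gatewayUrl = process.env.PUBLIC_GATEWAY_URL!;  // from the env vars below
const workerToken = process.env.WORKER_TOKEN!;       // hypothetical short-lived, scoped token

const res = await fetch(`${gatewayUrl}/mcp/github/tools/list`, {
  method: "POST",
  headers: { Authorization: `Bearer ${workerToken}` },
});
// The gateway exchanges the scoped token for real provider credentials,
// which never reach the worker.
console.log(await res.json());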

Reliability and Experience

  • Cloud agents, not local - Unlike OpenClaw’s local execution, Termos runs agents on managed cloud workers.
  • Your own computer, preserved - Each thread gets a persistent workspace (your tools, repos, and files stay intact).
  • Stateful by default - Sessions resume after restarts and scale-to-zero events.
  • Optional browser control - Integrate Owletto when you want the agent to drive a browser.

Worker Customization

FROM buremba/termos-worker-base:latest

# Add your tools
RUN pip install pandas matplotlib
RUN apt-get update && apt-get install -y postgresql-client

# Add custom instructions
COPY CLAUDE.md /workspace/

Self-Hosting

Requirements

  • Redis (for state and queues)
  • Docker or Kubernetes
  • Platform tokens (Slack Bot/App tokens, or WhatsApp session)

Environment Variables

# Required
QUEUE_URL=redis://localhost:6379
SLACK_BOT_TOKEN=xoxb-...
SLACK_APP_TOKEN=xapp-...

# Optional
DEPLOYMENT_MODE=kubernetes|docker|local
WORKER_ALLOWED_DOMAINS=github.com,api.example.com
PUBLIC_GATEWAY_URL=https://your-domain.com

Kubernetes Deployment

helm repo add termos https://charts.termos.dev
helm install termos termos/termos -f values.yaml

Contributing

# Clone and install
git clone https://github.com/termos-dev/termos
cd termos && bun install

# Development
make dev              # Start gateway
./scripts/test-bot.sh "@me hello"  # Test

# Run tests
bun run test

Packages

NPM:

Docker Hub:

License

Apache 2.0
