Octogent Documentation

Octogent is an autonomous multi-agent AI system you run on your own devices. It executes tasks in parallel across 8 worker slots, connects to multiple LLM backends (Ollama local, Groq cloud), and provides CLI-based control for monitoring and management.

Quick Start

Runtime: Node >= 22

$ npx octogent@latest init
$ octogent config --model llama3.2:8b --threads 8
$ octogent start
$ octogent task "Build a REST API for user management"

Core Features

8-Slot Parallel Worker Pool
Execute up to 8 tasks simultaneously using Node.js worker_threads
Multi-LLM Backend
Ollama (local, free) with Groq (cloud) fallback
Autonomous Agent Loop
Think → Act → Observe cycle with automatic tool selection
Skills System
JSON-based skill definitions (Coder, Researcher, Writer, DevOps)
Memory Persistence
SQLite-backed long-term memory across sessions
Native SDKs
iOS/macOS Swift SDK and Android Kotlin SDK support

Built-in Tools

Tool          Description
bash          Execute shell commands with timeout and sandboxing
read_file     Read files with line range support
write_file    Write/create files with directory creation
list_dir      List directory contents with metadata
web_search    Search the web via SearXNG
web_fetch     Fetch and parse web pages
memory_save   Save information to persistent memory
memory_read   Query persistent memory
spawn_agent   Spawn sub-agents for parallel work
check_task    Check status of spawned tasks
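
Agent loops typically express tool use as structured JSON that the LLM emits and the runtime executes. The exact schema Octogent uses is not shown here, so the following invocation of the read_file tool is a hypothetical illustration; every field name is an assumption.

```json
{
  "tool": "read_file",
  "arguments": {
    "path": "src/server.js",
    "start_line": 1,
    "end_line": 40
  }
}
```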

Architecture

           CLI / Webhook / Cron
                    |
                    v
    +-------------------------------+
    |            Gateway            |
    |      (WebSocket + REST)       |
    |     ws://127.0.0.1:18789      |
    +---------------+---------------+
                    |
                    v
    +-------------------------------+
    |          Worker Pool          |
    |  (8 parallel worker_threads)  |
    +---------------+---------------+
                    |
        +-----------+-----------+
        v           v           v
    +-------+   +-------+   +-------+
    |Worker |   |Worker |   |Worker |  ...
    |  #1   |   |  #2   |   |  #3   |
    +---+---+   +---+---+   +---+---+
        |           |           |
        v           v           v
    +-------------------------------+
    |          Agent Loop           |
    |   (Think -> Act -> Observe)   |
    +---------------+---------------+
                    |
        +-----------+-----------+
        v           v           v
    +-------+   +-------+   +-------+
    |Ollama |   | Groq  |   | Tools |
    |(local)|   |(cloud)|   | (10+) |
    +-------+   +-------+   +-------+

Configuration

Minimal octogent.config.json:

{
  "llm": {
    "primary": {
      "provider": "ollama",
      "model": "llama3.2:8b-instruct-q8_0",
      "baseUrl": "http://localhost:11434"
    },
    "fallback": {
      "provider": "groq",
      "model": "llama-3.3-70b-versatile"
    }
  },
  "workers": {
    "poolSize": 8,
    "maxIterations": 25
  }
}

CLI Commands

# Initialize Octogent
$ octogent init
# Configure settings
$ octogent config --model <model> --threads <1-8>
# Start the agent
$ octogent start
# Task management
$ octogent task "<prompt>"
$ octogent tasks list
$ octogent tasks cancel <id>
# Worker status
$ octogent workers
# Health check
$ octogent doctor

Environment Variables

OLLAMA_BASE_URL   http://localhost:11434
GROQ_API_KEY      your-groq-api-key
OCTOGENT_PORT     18789
OCTOGENT_HOST     127.0.0.1
DATABASE_PATH     ./data/octogent.db
SEARXNG_URL       http://localhost:8080
REDIS_URL         redis://localhost:6379
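
In your own tooling you can resolve these variables with fallbacks to the defaults listed above. The envOr helper below is illustrative, not part of Octogent's API; the fallback values come from the table.

```javascript
// Sketch: read Octogent-related settings from the environment,
// falling back to the defaults listed in the table above.
function envOr(name, fallback) {
  const value = process.env[name];
  return value !== undefined && value !== '' ? value : fallback;
}

const settings = {
  ollamaBaseUrl: envOr('OLLAMA_BASE_URL', 'http://localhost:11434'),
  port: Number(envOr('OCTOGENT_PORT', '18789')),
  host: envOr('OCTOGENT_HOST', '127.0.0.1'),
  databasePath: envOr('DATABASE_PATH', './data/octogent.db'),
};
```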

Built-in Skills

Coder
Expert software developer for writing, debugging, and refactoring code.
Researcher
Information gathering specialist for web research and data analysis.
Writer
Technical and creative writing expert for documentation and content.
DevOps
Infrastructure and deployment specialist for CI/CD and system administration.

Create custom skills in workspace/skills/<skill-name>.json
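
The on-disk schema for skill files is not documented here, so the following workspace/skills/reviewer.json is a hypothetical sketch of what a JSON-based skill definition might look like. All field names are assumptions; the tool names come from the Built-in Tools table.

```json
{
  "name": "Reviewer",
  "description": "Code review specialist for spotting bugs and style issues.",
  "systemPrompt": "You are a meticulous code reviewer...",
  "tools": ["read_file", "list_dir", "bash"]
}
```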

Docker Setup

# Start all services
$ docker-compose up -d
# View logs
$ docker-compose logs -f
# Stop services
$ docker-compose down

Services: Octogent Server (Port 18789), Redis (Port 6379), Ollama (Port 11434), SearXNG (Port 8080)
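
The repository presumably ships its own docker-compose.yml; if you need to recreate one, a sketch matching the services and ports above might look like the following. The Octogent image name is hypothetical; the Redis, Ollama, and SearXNG images are the standard public ones.

```yaml
services:
  octogent:
    image: octogentai/octogent:latest   # hypothetical image name
    ports: ["18789:18789"]
    environment:
      - REDIS_URL=redis://redis:6379
      - OLLAMA_BASE_URL=http://ollama:11434
      - SEARXNG_URL=http://searxng:8080
    depends_on: [redis, ollama, searxng]
  redis:
    image: redis:7
    ports: ["6379:6379"]
  ollama:
    image: ollama/ollama
    ports: ["11434:11434"]
  searxng:
    image: searxng/searxng
    ports: ["8080:8080"]
```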

Troubleshooting

Run the doctor command to diagnose issues:

$ octogent doctor

Common issues:
- Ollama not running: start it with ollama serve
- Model not found: pull it with octogent models pull llama3.2:8b
- Port already in use: change the port in the config
- Memory issues: reduce workers.poolSize or use a smaller model

Contact

Email: Octogent@pm.me
GitHub: github.com/OctogentAI/Octogent
Website: www.octogent.com