📚 OCCode Documentation

Complete Guide to Your AI-Powered Coding Assistant

Version 1.0 | opencan.ai

🎯 Overview

OCCode is a powerful AI-powered coding assistant that runs directly in your terminal. It combines the capabilities of multiple AI models with advanced features like convergence orchestration, model profiles, and intelligent context management.

🤖 Multi-Model Support

Support for 11 providers and 40+ models including Claude, GPT-4, Gemini, DeepSeek, and more.

⚡ Convergence Engine

Run multiple AI models in parallel and synthesize their outputs for higher quality results.

📋 Model Profiles

Save named configurations and switch instantly with prefix shortcuts like "quick:".

💰 Cost Tracking

Real-time token and cost tracking with model-specific pricing across 20+ tiers.

🔄 Checkpoints

Create save points and undo file changes with a single command.

🎨 Rich REPL

Interactive terminal with syntax highlighting, streaming responses, and visual progress indicators.

🚀 Getting Started

Installation

Download OCCode from opencan.ai/downloads (account required), then install:

# Extract the archive (Linux/macOS)
tar xzf occode-0.1.0-linux-x64.tar.gz
sudo mv occode /usr/local/bin/

# Activate with your license key
occode activate --key YOUR-LICENSE-KEY

# Verify installation
occode --version

First Run

# Set your API key
export ANTHROPIC_API_KEY="your-key-here"

# Start OCCode
occode

7-Day Free Trial: OCCode includes a 7-day free trial with full access to all features. Create an opencan.ai account to get started — your account is not charged until day 8. Cancel anytime before day 8 to avoid charges. No refunds after billing.
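To avoid exporting the key in every new terminal, you can append it to your shell profile. This is ordinary shell configuration, not an OCCode feature (zsh users: use ~/.zshrc instead):

```shell
# Persist the API key for future shells (bash shown)
echo 'export ANTHROPIC_API_KEY="your-key-here"' >> ~/.bashrc

# Pick up the change in the current shell
source ~/.bashrc
```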

Basic Usage

# Ask a question
What is recursion?

# Generate code
write a binary search function in TypeScript

# Add files to context
/context src/main.ts

# Get help
/help

👤 Model Profiles NEW

Model Profiles let you create named configurations for different AI models and switch between them instantly using prefix shortcuts.

Creating Profiles

From Built-in Templates

/profile template fast      # Claude Haiku - quick answers
/profile template power     # Claude Opus - complex tasks
/profile template creative  # High temperature Sonnet
/profile template gpt       # GPT-4 Turbo
/profile template local     # Ollama (offline)

Interactive Creation

/profile create myprofile

You'll be prompted for the provider, model, temperature, max tokens, trigger prefix, and an optional description.

Using Profiles

Activate a Profile

/profile fast     # Switch to fast profile
/profile power    # Switch to power profile
/profile off      # Deactivate (use global config)

Prefix Shortcuts

Set a prefix trigger to temporarily use a profile for a single message:

# Set prefix for a profile
/profile prefix fast quick:
/profile prefix power think:

# Use in conversation
quick: what is recursion?
think: design a distributed cache system

How it works: OCCode detects the prefix, switches to that profile's model, sends your message, then restores your previous model.

Managing Profiles

/profile - Show status and list all profiles
/profile edit myprofile - Edit existing profile
/profile delete myprofile - Delete a profile
/profile default fast - Set default profile (auto-activates on startup)
/profile export - Export profiles for sharing
/profile import profiles.json - Import profiles from team

Built-in Templates

Template | Model | Prefix | Use Case
fast | Claude Haiku | quick: | Quick questions, simple tasks
default | Claude Sonnet 4 | - | Balanced quality/speed
power | Claude Opus 4 | think: | Complex reasoning, critical tasks
creative | Sonnet (high temp) | create: | Creative writing, brainstorming
gpt | GPT-4 Turbo | gpt: | Alternative perspective
local | Ollama | local: | Privacy, offline work

⚡ Convergence Engine NEW

The Convergence Engine runs multiple AI models in parallel and combines their outputs using the Mixture-of-Agents pattern for higher quality results.

How It Works

  1. Generate - Multiple models work on the task simultaneously
  2. Synthesize - An aggregator model combines the best parts
  3. Return - A single, higher-quality response

Cost Consideration: Convergence uses multiple models, which increases token usage and API costs (2-3x for the typical duo-merge preset). Use it for important tasks where quality matters more than cost.
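The generate step above can be sketched in plain shell: two placeholder "models" run as background jobs, and their outputs are collected for synthesis. The functions are illustrative stand-ins, not OCCode internals:

```shell
# Step 1 (Generate): run both "models" in parallel as background jobs
gen_a() { echo "candidate answer from model A"; }
gen_b() { echo "candidate answer from model B"; }

out_a="$(mktemp)"; out_b="$(mktemp)"
gen_a > "$out_a" & gen_b > "$out_b" &
wait   # block until both generators finish

# Step 2 (Synthesize): hand both candidates to the aggregator
cat "$out_a" "$out_b"
rm -f "$out_a" "$out_b"
```

The real engine does the same thing with concurrent API requests: fan out, wait, then feed every candidate to the aggregator model.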

Quick Start with Presets

# View available presets
/converge preset

# Apply a preset
/converge preset duo-merge

# Enable convergence
/converge on

# Your messages now use convergence
write a binary search function

# Disable when done
/converge off

Built-in Presets

Preset | Models | Strategy | Description
duo-merge | Sonnet + GPT-4 → Sonnet | merge | Best balance of quality and cost
trio-merge | Sonnet + GPT-4 + Haiku → Sonnet | merge | Maximum quality, diverse perspectives
code-review | Sonnet → GPT-4 (review) | review | Cost-effective quality assurance
debate | Sonnet ↔ GPT-4 (2 rounds) | debate | Thorough analysis through dialogue
vote | 3 models generate + vote | vote | Democratic selection
local-merge | CodeLlama + DeepSeek (local) | merge | Private, no API costs

Convergence Strategies

Merge Strategy (MoA)

All participant models generate responses in parallel, then an aggregator synthesizes the best parts.

Best for: Code generation, complex explanations, creative tasks

Vote Strategy

All models generate responses, then each votes for the best (not their own).

Best for: Tasks with objective correct answers, choosing between options

Debate Strategy

Multi-round critique and refinement - models critique each other, then revise their responses.

Best for: Important decisions, thorough analysis, exploring trade-offs

Review Strategy

Pair programming workflow - Model A generates, Model B reviews, Model A revises.

Best for: Code quality, catching bugs, cost-effective quality boost

Custom Configuration

/converge add sonnet - Add model using short alias
/converge add gpt4o - Add another model
/converge remove haiku - Remove a model
/converge strategy merge - Set convergence strategy
/converge aggregator sonnet - Set which model synthesizes results
/converge rounds 3 - Set debate rounds (1-5)
/converge show on - Show individual model outputs
/converge models - View configured models
/converge last - View statistics from last run

Cost Optimization Tips

  1. Use for Important Tasks: Reserve convergence for critical code or complex problems
  2. Review Strategy: Most cost-effective (only 3-4 API calls)
  3. Disable show: Turning off individual outputs with /converge show off doesn't affect the quality of the synthesized result
  4. Local Models: Use local-merge preset for zero API costs (requires Ollama)
  5. Duo vs Trio: duo-merge is 33% cheaper than trio-merge
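The call counts behind these tips are easy to check: each merge preset costs one API call per generator plus one for the aggregator. A back-of-envelope sketch (real costs also depend on prompt and response length):

```shell
# API calls per convergence run, by preset
awk 'BEGIN {
  printf "duo-merge:  %d calls (2 generators + 1 aggregator)\n", 2 + 1
  printf "trio-merge: %d calls (3 generators + 1 aggregator)\n", 3 + 1
  printf "review:     3-4 calls (generate, review, revise)\n"
}'
```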

📖 Model Catalog NEW

The Model Catalog is a centralized registry of 40+ AI models from 11 providers with short aliases, pricing data, and automatic API key detection.

What It Provides

Short Aliases

Use sonnet instead of claude-sonnet-4-20250514

Auto-Detection

Automatically finds available models based on your API keys

Cost Tracking

Real-time pricing information for accurate cost estimates

Model Search

Find models by capability, price, or provider

Supported Providers

Provider | Models Available | Environment Variable
Anthropic | Claude Opus 4, Sonnet 4, Haiku 4.5 | ANTHROPIC_API_KEY
OpenAI | GPT-4o, GPT-4, o1, o3-mini, GPT-4o-mini | OPENAI_API_KEY
Google | Gemini 2.0 Flash, Gemini 2.5 Pro | GOOGLE_API_KEY
DeepSeek | DeepSeek V3, DeepSeek R1 | DEEPSEEK_API_KEY
Mistral | Mistral Large, Codestral | MISTRAL_API_KEY
Groq | Llama 3.3 70B, DeepSeek R1 Distill | GROQ_API_KEY
Together | Various open models | TOGETHER_API_KEY
OpenRouter | 100+ models via unified API | OPENROUTER_API_KEY
OpenCan | Custom models | OPENCAN_API_KEY
Ollama | Local models (CodeLlama, Qwen, etc.) | None (local)
LM Studio | Local models | None (local)

Short Aliases - Anthropic

Alias | Full Model ID | Tier | Use Case
opus | claude-opus-4-20250514 | flagship | Highest quality, complex reasoning
sonnet | claude-sonnet-4-20250514 | balanced | Best balance of speed/quality
haiku | claude-haiku-4-5-20251001 | fast | Quick responses, low cost

Short Aliases - OpenAI

Alias | Full Model ID | Tier | Use Case
gpt4o | gpt-4o | flagship | General purpose, multimodal
gpt4 | gpt-4-turbo | flagship | Complex tasks, long context
mini | gpt-4o-mini | fast | Fast, cost-effective
o1 | o1 | flagship | Advanced reasoning
o3mini | o3-mini | balanced | Efficient reasoning

Browsing Models

# See complete catalog
/converge catalog

# Search by keyword
/converge search code
/converge search fast
/converge search free

# Show only models with API keys set
/converge available

Model Tiers

Tier | Description | Examples | Use Case
flagship | Highest quality, most capable | Opus, GPT-4, Gemini Pro | Critical tasks, complex reasoning
balanced | Best quality/cost ratio | Sonnet, o3-mini | General development work
fast | Quick responses, lower cost | Haiku, mini, Gemini Flash | Simple questions, iteration
economy | Maximum cost efficiency | Local models | High volume, privacy needs

📁 Context Management

OCCode provides powerful tools for managing which files and information the AI can access.

Basic Context Commands

/context - Show current context overview
/context add <file> - Add file to context
/context remove <file> - Remove file from context
/context src/**/*.ts - Add files using glob patterns

Persistent Context (Pinning)

Pin files to keep them in context across all sessions:

/pin src/config.ts
/pin README.md
/unpin src/config.ts

Exclusion Patterns

Exclude files or directories from being added to context:

/exclude node_modules
/exclude "*.test.ts"
/exclude "**/tests/**"
/include node_modules  # Remove exclusion

Context Overview

The /context command shows the files currently in context, which files are pinned, any active exclusion patterns, and token usage.

⌨️ Command Reference

Provider & Model

/provider [name] - Show current or switch provider (anthropic, openai, google, etc.)
/model [name] - Show current or change model (sonnet, gpt4o, etc.)
/api_key [key] - Show status or set API key
/api_url [url] - Show or change custom API endpoint

Model Profiles

/profile - Show active profile and list all
/profile <name> - Activate profile
/profile create <name> - Create new profile
/profile template <name> - Create from template
/profile edit <name> - Edit profile
/profile delete <name> - Delete profile
/profile off - Deactivate profile
/profile default <name> - Set default profile
/profile prefix <name> <prefix> - Set message prefix
/profile export - Export profiles
/profile import <file> - Import profiles

Convergence

/converge - Show status
/converge on|off - Enable/disable
/converge strategy <name> - Set strategy (merge/vote/debate/review)
/converge add <alias> - Add model
/converge remove <name> - Remove model
/converge aggregator <name> - Set aggregator
/converge preset <name> - Load preset
/converge rounds <n> - Set debate rounds
/converge show on|off - Toggle individual outputs
/converge catalog - Browse models
/converge available - Show models with API keys
/converge search <keyword> - Search models

Context Management

/context - Show context overview
/context add <file> - Add file
/context remove <file> - Remove file
/pin <file> - Pin file (persistent)
/unpin <file> - Unpin file
/exclude <pattern> - Exclude files
/include <pattern> - Remove exclusion

Session Management

/clear - Clear conversation history
/status - Show session statistics
/cost - Show cost/token breakdown
/compact - Compact history to save tokens
/export <file> - Export session
/debug - Toggle debug mode

Git Integration

/git - Show git status
/commit - Generate and apply commit
/diff <files> - Show enhanced diff

Checkpoints & Undo

/checkpoint [message] - Create checkpoint
/checkpoint list - List checkpoints
/checkpoint restore <id> - Restore checkpoint
/undo - Undo last file change

Daemon & Indexing

/index - Show indexing status
/index rebuild - Force rebuild index
/daemon - Show daemon status
/daemon start - Start daemon
/daemon stop - Stop daemon

Execution Mode

/mode interactive - Approve each action
/mode auto - Execute without approval

Help

/help - Show all commands

🔧 Git Integration

Git Status

/git

Shows the current branch, staged and unstaged changes, and untracked files.

AI-Powered Commits

/commit

OCCode will:

  1. Analyze git diff
  2. Generate descriptive commit message
  3. Show you the message for approval
  4. Create the commit if approved

Enhanced Diff

/diff src/main.ts
/diff --side-by-side src/main.ts src/utils.ts
/diff --no-syntax src/**/*.ts

Features include syntax highlighting (disable with --no-syntax), side-by-side comparison (--side-by-side), and glob-pattern file selection.

💰 Cost & Token Tracking

OCCode provides real-time cost tracking with model-specific pricing across 20+ pricing tiers.

View Costs

/status  # Quick overview
/cost    # Detailed breakdown

What's Tracked

Input and output tokens, cost per request based on each model's pricing, and running totals for the session (/cost shows the full breakdown).

Convergence Cost Tracking

When using convergence, costs are tracked separately for each participant model and for the aggregator, so you can see where the tokens went.

Cost Estimation: Convergence with the duo-merge preset typically uses 2-3x the tokens of a single model. The /converge last command shows exact costs from your last run.

💾 Checkpoints & Undo

Creating Checkpoints

Save a snapshot of all file states:

/checkpoint "Before refactoring auth module"
/checkpoint "Working state before experiment"

Managing Checkpoints

/checkpoint list              # List all checkpoints
/checkpoint restore cp_123    # Restore a specific checkpoint

Quick Undo

Undo the last file change with a single command:

/undo

What's Saved

Checkpoints include a snapshot of your file states at the time of creation, along with the optional message you provided.

Storage Location: Checkpoints are stored in ~/.occode/checkpoints/. They persist across sessions, so they can accumulate and consume disk space over time.
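To see how much space the checkpoint store is using and spot stale entries, ordinary shell tools work on that directory. The flat layout inside it is an assumption here; when in doubt, prefer /checkpoint list inside OCCode:

```shell
# Total size of the checkpoint store
du -sh ~/.occode/checkpoints/

# List entries older than 30 days as pruning candidates
find ~/.occode/checkpoints/ -mindepth 1 -mtime +30 -print
```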

🤖 AI Providers

OCCode supports 11 AI providers with 40+ models.

Setting Up Providers

Via Environment Variables (Recommended)

# Anthropic (Claude)
export ANTHROPIC_API_KEY="sk-ant-..."

# OpenAI (GPT)
export OPENAI_API_KEY="sk-..."

# Google (Gemini)
export GOOGLE_API_KEY="..."

# DeepSeek
export DEEPSEEK_API_KEY="..."

# Mistral
export MISTRAL_API_KEY="..."

# Groq
export GROQ_API_KEY="..."

Via OCCode Commands

/provider anthropic
/api_key sk-ant-your-key-here

Switching Providers

/provider anthropic
/model sonnet

/provider openai
/model gpt4o

/provider google
/model gemini-pro

Custom Endpoints

For self-hosted or custom API endpoints:

/api_url https://your-custom-api.com/v1
/api_url reset  # Reset to default

Provider Cost Comparison

Provider | Relative Cost
Anthropic | Medium-High
OpenAI | Medium-High
Google | Low-Medium
DeepSeek | Very Low
Ollama (Local) | Free

⚙️ Configuration

Configuration Files

OCCode stores configuration in ~/.occode/, including config.json (global settings), profiles.json (model profiles), checkpoints/ (file snapshots), and sessions/ (saved sessions).

Global Configuration

Edit ~/.occode/config.json:

{
  "provider": "anthropic",
  "model": "claude-sonnet-4-20250514",
  "apiEndpoint": null,
  "maxTokens": 4096,
  "temperature": 0.7,
  "autoApprove": false,
  "theme": "auto"
}
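With jq installed, the same file can be inspected or edited from the command line. This is standard jq usage, not an OCCode command; writing to a temp file first means a failed edit never truncates the config:

```shell
# Read the active model from the global config
jq -r '.model' ~/.occode/config.json

# Raise maxTokens safely: write to a temp file, then move it into place
jq '.maxTokens = 8192' ~/.occode/config.json > /tmp/occode-config.json \
  && mv /tmp/occode-config.json ~/.occode/config.json
```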

Profile Configuration

Profiles are stored in ~/.occode/profiles.json:

{
  "profiles": {
    "fast": {
      "name": "fast",
      "provider": "anthropic",
      "model": "claude-haiku-4-5-20251001",
      "temperature": 0.3,
      "maxTokens": 2048,
      "triggerPrefix": "quick:",
      "description": "Quick answers, low cost"
    }
  },
  "activeProfile": "fast",
  "defaultProfile": null
}
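Because profiles.json is plain JSON, you can list each profile's trigger prefix with jq (ordinary jq, not an OCCode command):

```shell
# Print "name<TAB>prefix" for every saved profile;
# "-" marks profiles with no trigger prefix set
jq -r '.profiles[] | "\(.name)\t\(.triggerPrefix // "-")"' ~/.occode/profiles.json
```

This is also a quick way to sanity-check a profiles.json received from a teammate before running /profile import.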

Environment Variable Priority

Configuration priority (highest to lowest):

  1. Active model profile
  2. Environment variables (OCCODE_*)
  3. Global config file
  4. Built-in defaults
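The resolution order can be sketched as nested shell fallbacks. This illustrates the precedence only, and is not OCCode's actual implementation; the variable names are made up:

```shell
# Highest-priority non-empty value wins: profile > env var > config > default
resolve_model() {
  echo "${PROFILE_MODEL:-${OCCODE_MODEL:-${CONFIG_MODEL:-claude-sonnet-4-20250514}}}"
}

PROFILE_MODEL=""                            # no active profile
OCCODE_MODEL="claude-haiku-4-5-20251001"    # env var set
CONFIG_MODEL="claude-sonnet-4-20250514"     # from config.json
resolve_model   # the env var wins because no profile is active
```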

🔍 Troubleshooting

API Key Issues

Problem: "No API key found"

Solution:

# Check if environment variable is set
echo $ANTHROPIC_API_KEY

# Set via OCCode
/api_key your-key-here

# Or set environment variable
export ANTHROPIC_API_KEY="your-key"

Problem: "Invalid API key"

Solution: Verify the key was copied in full (no stray whitespace or quotes), that it matches the currently selected provider, and that it hasn't been revoked in the provider's dashboard.

Model Profile Issues

Problem: "Profile not found"

Solution:

# List all profiles to check spelling
/profile

# Recreate if needed
/profile template fast

Problem: "Prefix not triggering"

Solution: Run /profile to confirm the prefix assigned to the profile, then make sure your message starts with it exactly as configured (including the colon).

Convergence Issues

Problem: "No models configured"

Solution:

# Use a preset
/converge preset duo-merge

# Or add models manually
/converge add sonnet
/converge add gpt4o
/converge aggregator sonnet

Problem: "High costs"

Solution:

# Use cost-effective strategy
/converge preset code-review

# Use cheaper models
/converge add haiku
/converge add mini
/converge aggregator haiku

# Disable when not needed
/converge off

Context Issues

Problem: "Too many tokens in context"

Solution:

# Remove large files
/context remove large-file.json

# Exclude test files
/exclude "*.test.ts"
/exclude "**/tests/**"

# Compact conversation history
/compact

General Issues

Problem: "Command not working"

Solution: Run /help to confirm the command name and syntax; all commands start with a forward slash.

Problem: "Session won't resume"

Solution:

# Check sessions directory
ls ~/.occode/sessions/

# Start fresh session
occode --new-session

Getting Help

Run /help inside OCCode for the full command list, or visit opencan.ai for documentation and support.