Model Profiles, Convergence Engine, and Multi-Model Management
OCCode now includes three powerful new features for managing and optimizing AI model usage:
- **Model Catalog**: centralized registry of 40+ AI models from 11 providers with short aliases, pricing data, and automatic API key detection (e.g., type `sonnet` instead of `claude-sonnet-4-20250514`).
- **Model Profiles**: named model configurations with instant switching and prefix shortcuts for rapid context changes (e.g., `quick: explain this` to use a fast model for one response).
- **Convergence Engine**: multi-model ensemble orchestration using the Mixture-of-Agents pattern for higher quality outputs.
The Model Catalog is a centralized registry that simplifies working with multiple AI providers and models.
| Provider | Models Available | Environment Variable |
|---|---|---|
| Anthropic | Claude Opus 4, Sonnet 4, Haiku 4.5 | ANTHROPIC_API_KEY |
| OpenAI | GPT-4o, GPT-4, o1, o3-mini, GPT-4o-mini | OPENAI_API_KEY |
| Google | Gemini 2.0 Flash, Gemini 2.5 Pro | GOOGLE_API_KEY |
| DeepSeek | DeepSeek V3, DeepSeek R1 | DEEPSEEK_API_KEY |
| Mistral | Mistral Large, Codestral | MISTRAL_API_KEY |
| Groq | Llama 3.3 70B, DeepSeek R1 Distill | GROQ_API_KEY |
| Together | Various open models | TOGETHER_API_KEY |
| OpenRouter | 100+ models via unified API | OPENROUTER_API_KEY |
| OpenCan | Custom models | OPENCAN_API_KEY |
| Ollama | Local models (CodeLlama, Qwen, etc.) | None (local) |
| LM Studio | Local models | None (local) |
Use convenient short names instead of full model identifiers:
| Alias | Model | Notes |
|---|---|---|
| sonnet | Claude Sonnet 4 | Balanced performance |
| opus | Claude Opus 4 | Highest quality |
| haiku | Claude Haiku 4.5 | Fastest, cheapest |
| gpt4o | GPT-4o | OpenAI flagship |
| mini | GPT-4o-mini | Fast and cheap |
| deepseek | DeepSeek V3 | Cost-effective |
```
# See all models in catalog
/converge catalog

# See only models with API keys set
/converge available

# Search for specific capabilities
/converge search code
/converge search fast
/converge search free
```
The catalog automatically detects which providers you have API keys for by checking standard environment variables. Models from providers without API keys will be marked as unavailable.
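The detection step described above amounts to an environment-variable lookup per provider. Here is a minimal sketch of that logic; the mapping mirrors the provider table above, but the function and structure names are illustrative, not OCCode's actual internals:

```python
import os

# Provider -> environment variable, mirroring the provider table above.
# None means a local provider that needs no key.
PROVIDER_ENV_VARS = {
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "google": "GOOGLE_API_KEY",
    "groq": "GROQ_API_KEY",
    "ollama": None,
    "lmstudio": None,
}

def provider_available(provider: str) -> bool:
    """A provider is available if it needs no key or its env var is set."""
    env_var = PROVIDER_ENV_VARS.get(provider)
    if env_var is None:
        return True  # local providers are always available
    return bool(os.environ.get(env_var))

# Example: mark each catalog entry as available/unavailable
models = [("sonnet", "anthropic"), ("codellama", "ollama")]
availability = {alias: provider_available(p) for alias, p in models}
```

This is why restarting OCCode after exporting a new key is enough: availability is recomputed from the environment, with no extra configuration.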
Model Profiles let you create named configurations for different AI models and switch between them instantly.
```
# List available templates
/profile templates

# Create from template
/profile template fast
/profile template power
/profile template creative

# Create custom profile
/profile create myprofile
# You'll be prompted for:
# - Provider (anthropic, openai, etc.)
# - Model (sonnet, gpt4o, etc.)
# - Temperature (0.0-1.0)
# - Max tokens
# - Description
# - Prefix trigger (optional)
```
| Template | Model | Prefix | Use Case |
|---|---|---|---|
| fast | Claude Haiku | quick: | Quick questions, simple tasks |
| default | Claude Sonnet 4 | - | Balanced quality/speed |
| power | Claude Opus 4 | think: | Complex reasoning, critical tasks |
| creative | Sonnet (high temp) | create: | Creative writing, brainstorming |
| gpt | GPT-4 Turbo | gpt: | Alternative perspective |
| local | Ollama | local: | Privacy, offline work |
```
# Switch to a profile
/profile fast
/profile power
/profile myprofile

# View current profile and all available
/profile

# Deactivate profile (return to global config)
/profile off
```
Set a prefix trigger to temporarily use a profile for a single response:
```
# Set prefix for a profile
/profile prefix fast quick:
/profile prefix power think:

# Use prefix in conversation
quick: what is recursion?
think: design a distributed cache system
```
When you type a message starting with a registered prefix (e.g., `quick: your question`), OCCode strips the prefix and answers that single message using the prefixed profile's settings, then reverts to your previous configuration. This lets you quickly switch contexts without changing your active profile.
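The prefix mechanism can be pictured as a small dispatch step that runs before each message is sent. This is an illustrative sketch of that behavior, not OCCode's actual code:

```python
# Registered prefix -> profile name, as set via /profile prefix commands.
PREFIX_TRIGGERS = {"quick:": "fast", "think:": "power"}

def route_message(message: str, active_profile: str):
    """Pick a profile for this one message; the active profile is untouched."""
    for prefix, profile in PREFIX_TRIGGERS.items():
        if message.startswith(prefix):
            # Prefixed: use that profile, with the prefix stripped
            return profile, message[len(prefix):].strip()
    # No prefix: fall through to the active profile
    return active_profile, message
```

A `quick:` message is routed to the `fast` profile for that one response, while the very next plain message goes back to whatever profile is active.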
```
# Edit existing profile
/profile edit myprofile

# Delete profile
/profile delete myprofile

# Set default profile (auto-activates on startup)
/profile default fast

# Export profiles (for sharing with team)
/profile export

# Import profiles
/profile import profiles.json
```
Profiles are stored in ~/.occode/profiles.json with secure file permissions (0600):
```json
{
  "profiles": {
    "fast": {
      "name": "fast",
      "provider": "anthropic",
      "model": "claude-haiku-4-5-20251001",
      "temperature": 0.3,
      "maxTokens": 2048,
      "triggerPrefix": "quick:",
      "description": "Quick answers, low cost",
      "createdAt": "2025-02-01T..."
    }
  },
  "activeProfile": "fast",
  "defaultProfile": null
}
```
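Reading a store like this, with the 0600 permission expectation enforced, might look like the following sketch. This is assumed behavior based on the description above, not OCCode's implementation:

```python
import json
import stat
from pathlib import Path

def load_profiles(path: Path) -> dict:
    """Load profiles.json, tightening permissions to 0600 if needed."""
    if not path.exists():
        # Fresh install: empty store with no active or default profile
        return {"profiles": {}, "activeProfile": None, "defaultProfile": None}
    mode = stat.S_IMODE(path.stat().st_mode)
    if mode != 0o600:
        path.chmod(0o600)  # keep model/provider config private to the user
    return json.loads(path.read_text())

# Usage (path taken from the docs above):
# store = load_profiles(Path.home() / ".occode" / "profiles.json")
```

Keeping the file owner-readable only matters because profiles can reference providers whose credentials live alongside them in `~/.occode/`.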
The Convergence Engine runs multiple AI models in parallel and combines their outputs for higher quality results.
Instead of asking one model for an answer, convergence sends your request to several models in parallel and combines their outputs using a configurable strategy (merge, vote, debate, or review).
Convergence uses multiple models, which increases both token usage and API costs. A typical duo-merge preset might use 2-3x the tokens of a single model. Use convergence for important tasks where quality matters more than cost.
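Under the hood, the Mixture-of-Agents pattern fans a request out to several proposer models and then has an aggregator synthesize a single answer. A minimal sketch of that shape, with a stubbed `ask` function standing in for real API calls (all names here are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def ask(model: str, prompt: str) -> str:
    """Stub for a real chat-completion call; replace with your client."""
    return f"[{model}] answer to: {prompt}"

def converge_merge(models, aggregator, prompt):
    """Fan out to all models in parallel, then synthesize (N + 1 calls)."""
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        drafts = list(pool.map(lambda m: ask(m, prompt), models))
    synthesis_prompt = (
        "Combine the best parts of these candidate answers:\n\n"
        + "\n---\n".join(drafts)
        + f"\n\nOriginal request: {prompt}"
    )
    return ask(aggregator, synthesis_prompt)

result = converge_merge(["sonnet", "gpt4o"], "sonnet", "write a prime check")
```

The extra cost is visible in the structure: every proposer sees the full prompt, and the aggregator sees the prompt plus every draft, which is where the 2-3x token multiplier comes from.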
| Strategy | How It Works | Best For | API Calls |
|---|---|---|---|
| merge | All models generate → aggregator synthesizes | High-quality code, complex tasks | N + 1 |
| vote | All models generate → all vote on best | Multiple valid solutions, picking best | N + N |
| debate | Generate → critique → revise (multiple rounds) | Critical decisions, thorough analysis | N × rounds × 3 |
| review | One generates → another reviews → first revises | Code review workflow, cost-effective | 3-4 |
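The call-count column above can be sanity-checked with a little arithmetic. This sketch (illustrative only, not OCCode internals) estimates API calls per request for N configured models:

```python
def api_calls(strategy: str, n_models: int, rounds: int = 1) -> int:
    """Estimated API calls per request, per the strategy table above."""
    if strategy == "merge":
        return n_models + 1           # N generations + 1 aggregator synthesis
    if strategy == "vote":
        return n_models + n_models    # N generations + N votes
    if strategy == "debate":
        return n_models * rounds * 3  # generate, critique, revise per round
    if strategy == "review":
        return 3                      # generate, review, revise (3-4 in practice)
    raise ValueError(f"unknown strategy: {strategy}")

# duo-merge (2 models) makes 3 calls, roughly 2-3x a single-model request
calls = api_calls("merge", 2)
```

Note how quickly debate grows: two models at two rounds is already a dozen calls, which is why `review` is the cost-effective option for routine quality checks.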
The easiest way to use convergence is with built-in presets:
```
# View available presets
/converge preset

# Apply a preset
/converge preset duo-merge
/converge preset code-review
/converge preset trio-merge

# Enable convergence
/converge on

# Now your messages use convergence
write a binary search function

# Disable when done
/converge off
```
| Preset | Models | Strategy | Description |
|---|---|---|---|
| duo-merge | Sonnet + GPT-4 → Sonnet | merge | Best balance of quality and cost |
| trio-merge | Sonnet + GPT-4 + Haiku → Sonnet | merge | Maximum quality, diverse perspectives |
| code-review | Sonnet → GPT-4 (review) | review | Cost-effective quality assurance |
| debate | Sonnet → GPT-4 (2 rounds) | debate | Thorough analysis through dialogue |
| vote | 3 models generate + vote | vote | Democratic selection |
| local-merge | CodeLlama + DeepSeek (local) | merge | Private, no API costs |
```
# Add models using short aliases
/converge add sonnet
/converge add gpt4o
/converge add haiku

# Or specify explicitly
/converge add mymodel anthropic claude-sonnet-4-20250514

# Remove models
/converge remove haiku

# View configured models
/converge models

# Change strategy
/converge strategy merge
/converge strategy vote
/converge strategy debate
/converge strategy review

# View strategy details
/converge strategy

# Set which model synthesizes results (merge strategy)
/converge aggregator sonnet
/converge aggregator gpt4o

# View current aggregator
/converge aggregator

# Set number of debate rounds (1-5)
/converge rounds 3

# View current setting
/converge rounds

# Show individual model outputs (before synthesis)
/converge show on

# Only show final converged output
/converge show off

# View last convergence result with statistics
/converge last

# View current status and configuration
/converge

# Export configuration
/converge export my-config.json

# Import configuration
/converge import my-config.json

# Reset to defaults
/converge reset
```
```
# 1. Create a fast profile
/profile template fast

# 2. Verify it was created
/profile

# 3. Activate it
/profile fast

# 4. Verify it's active (should show checkmark)
/profile
```
Expected Result: Profile is listed with a green ✓ indicator showing it's active.
```
# 1. Ensure fast profile has prefix
/profile prefix fast quick:

# 2. Test prefix usage
quick: what is 2 + 2?

# 3. Send regular message
what is 3 + 3?
```
Expected Result: The `quick:` message is answered using the fast profile for that single response; the regular message uses your normal active configuration.
```
# 1. Create power profile
/profile template power

# 2. Switch to power
/profile power

# 3. Verify active
/profile

# 4. Switch back
/profile off
```
Expected Result: Status shows correct active profile at each step.
```
# 1. Load a preset
/converge preset duo-merge

# 2. Check configuration
/converge

# 3. Enable convergence
/converge on

# 4. Send a task
write a function to check if a number is prime

# 5. Disable
/converge off
```
Expected Result: The task is sent to both configured models and a single synthesized response is returned; after `/converge off`, messages go to a single model again.
```
# 1. Enable individual output display
/converge show on

# 2. Enable convergence
/converge on

# 3. Send task
explain quicksort algorithm

# 4. Observe output
```
Expected Result: Each model's individual output is displayed, followed by the final synthesized response.
```
# Test Review Strategy
/converge preset code-review
/converge on
write a binary search function
/converge off

# Test Vote Strategy
/converge preset vote
/converge on
what's the best sorting algorithm for large datasets?
/converge off

# View last results
/converge last
```
Expected Result: Each strategy produces different execution patterns visible in the statistics.
```
# 1. View all available models
/converge available

# 2. Search for specific models
/converge search fast
/converge search code

# 3. Browse full catalog
/converge catalog

# 4. Filter by provider
/converge search anthropic
```
Expected Result: Models are listed with pricing, capabilities, and availability status (✓ or ✗ based on API key).
```
# 1. Create profile with specific model
/profile create test_profile
# Choose: anthropic, sonnet, 0.7, 4096

# 2. Activate it
/profile test_profile

# 3. Enable convergence
/converge preset duo-merge
/converge on

# 4. Send task
explain recursion

# 5. Check that convergence works with profile active
/converge
/profile
```
Expected Result: Convergence uses its configured models (not the active profile). Both systems work independently.
Issue: Profile "myprofile" not found
Solution:
```
# List all profiles to check spelling
/profile

# Recreate if needed
/profile template fast
```
Issue: Typing `quick: message` doesn't activate profile
Solution:
```
# Verify the prefix is registered (should show [quick:] next to the profile)
/profile

# Re-register the prefix if needed
/profile prefix fast quick:
```
Also make sure the message starts with `quick:` (including the colon), not `quick`.
Issue: No API key for anthropic when activating profile
Solution:
```
# Set API key for the provider
export ANTHROPIC_API_KEY="sk-ant-..."

# Or set via OCCode
occode config --set-key --provider anthropic
```
Issue: Cannot save profiles
Solution:
```
# Check and fix permissions
chmod 600 ~/.occode/profiles.json

# Recreate if corrupted
rm ~/.occode/profiles.json
/profile template fast
```
Issue: No models configured yet when enabling convergence
Solution:
```
# Use a preset
/converge preset duo-merge

# Or add models manually
/converge add sonnet
/converge add gpt4o
/converge aggregator sonnet
```
Issue: Model sonnet not available
Solution:
```
# Check available models
/converge available

# Ensure API key is set
export ANTHROPIC_API_KEY="sk-ant-..."

# Verify in catalog
/converge catalog
```
Issue: Convergence requests timeout or take very long
Solution:
- Use fewer or faster models (e.g., `/converge remove gpt4o`, then `/converge add haiku`)
- Reduce debate rounds: `/converge rounds 1`
- Switch to the review strategy, which makes fewer API calls: `/converge strategy review`
Issue: Convergence is expensive
Solution:
```
# Use cost-effective strategy
/converge preset code-review

# Use cheaper models
/converge add haiku
/converge add mini
/converge aggregator haiku

# Disable when not needed
/converge off

# Check cost before running
/converge  # Shows last run cost
```
Issue: Can't see what each model generated
Solution:
```
# Enable individual output display
/converge show on

# View last convergence details
/converge last
```
Issue: Unknown model alias: xyz
Solution:
```
# Search catalog for correct alias
/converge search xyz

# View all available aliases
/converge catalog

# Use explicit form if needed
/converge add myname provider full-model-id
```
Issue: /converge available shows no models
Solution:
```
# Check environment variables
env | grep API_KEY

# Set missing API keys
export ANTHROPIC_API_KEY="sk-ant-..."
export OPENAI_API_KEY="sk-..."

# Restart OCCode to pick up new env vars
```
Issue: /profile or /converge command not found
Solution:
```
# Verify OCCode version
occode --version
# Should be v0.2.0 or later

# Update by downloading the latest version from opencan.ai/downloads
# Then replace the binary:
tar xzf occode-latest-linux-x64.tar.gz
sudo mv occode /usr/local/bin/
```
Issue: Features behaving strangely or errors about JSON
Solution:
```
# Backup existing config
cp ~/.occode/profiles.json ~/.occode/profiles.json.bak
cp ~/.occode/convergence.json ~/.occode/convergence.json.bak

# Remove corrupted configs
rm ~/.occode/profiles.json
rm ~/.occode/convergence.json

# Recreate from scratch
/profile template fast
/converge preset duo-merge
```
- Built-in help: `/help`
- Documentation: https://opencan.ai/docs
- Support: support@opencan.ai
- Community forum: https://opencan.ai/community
| Category | Command | Description |
|---|---|---|
| Profiles | /profile | Show status and list all |
| | /profile template fast | Create from template |
| | /profile <name> | Activate profile |
| | quick: message | Use prefix shortcut |
| Convergence | /converge preset duo-merge | Load preset |
| | /converge on | Enable convergence |
| | /converge off | Disable convergence |
| | /converge last | View last result stats |
| Catalog | /converge catalog | Browse all models |
| | /converge available | Show available models |
| | /converge search <term> | Search models |