Configuration
Clarissa can be configured via a config file or environment variables, and it supports multiple LLM providers spanning cloud APIs and local inference.
LLM Providers
Clarissa supports the following providers:
- OpenRouter - Access to 100+ models (Claude, GPT-4, Gemini, Llama, etc.)
- OpenAI - Direct GPT API access
- Anthropic - Direct Claude API access
- Apple Intelligence - On-device AI for macOS 26+ with Apple Silicon
- LM Studio - Local inference via LM Studio desktop app
- Local Llama - Direct GGUF model inference via node-llama-cpp
API Key Setup
Set up at least one provider. API keys are available from each provider's dashboard (OpenRouter, OpenAI, or Anthropic).
Option 1: Interactive Setup (Recommended)
Run the setup command:
```bash
clarissa init
```

This will prompt for API keys for each provider and create the config file automatically.
Option 2: Environment Variables (Backup)
Environment variables can be used as a backup when config file keys are not set:
```bash
export OPENROUTER_API_KEY=your_openrouter_key
export OPENAI_API_KEY=your_openai_key
export ANTHROPIC_API_KEY=your_anthropic_key
```

Configuration Options
Settings are specified in the config file. Environment variables serve as fallbacks.
| Config Key | Env Fallback | Description |
|---|---|---|
| `openrouterApiKey` | `OPENROUTER_API_KEY` | OpenRouter API key |
| `openaiApiKey` | `OPENAI_API_KEY` | OpenAI API key |
| `anthropicApiKey` | `ANTHROPIC_API_KEY` | Anthropic API key |
| `maxIterations` | `MAX_ITERATIONS` | Maximum tool iterations (default: 10) |
| `debug` | `DEBUG` | Enable debug logging (default: false) |
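Because environment variables act as fallbacks, they only take effect when the corresponding config keys are unset. A one-off override might look like this:

```bash
# Applies only if maxIterations/debug are not set in the config file
MAX_ITERATIONS=20 DEBUG=true clarissa
```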
Provider Selection
Clarissa automatically detects available providers in this priority order:
1. OpenRouter - If an OpenRouter API key is configured
2. OpenAI - If an OpenAI API key is configured
3. Anthropic - If an Anthropic API key is configured
4. Apple Intelligence - If on macOS 26+ with Apple Silicon
5. LM Studio - If LM Studio is running with a model loaded
6. Local Llama - If a local model path is configured
Switch providers at runtime with the `/provider` command. Your selection persists across sessions.
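Providers can also be switched from the shell with `clarissa providers` (documented under CLI Commands below). The lowercase identifier format shown here is an assumption based on the `local-llama` name used elsewhere in this document:

```bash
clarissa providers            # list available providers
clarissa providers anthropic  # switch the active provider (lowercase name assumed)
```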
Example Config File
```json
{
  "openrouterApiKey": "sk-or-...",
  "openaiApiKey": "sk-...",
  "anthropicApiKey": "sk-ant-...",
  "maxIterations": 10,
  "debug": false,
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
    }
  },
  "localLlama": {
    "modelPath": "~/.clarissa/models/qwen3-8b-q4_k_m.gguf",
    "gpuLayers": -1,
    "contextSize": 8192,
    "flashAttention": true
  }
}
```
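After editing the file, `clarissa config` (see CLI Commands below) is a quick way to confirm what Clarissa actually loaded; API keys are shown masked:

```bash
clarissa config   # prints the current configuration with masked API keys
```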
Local Model Configuration

For direct GGUF model inference, configure the `localLlama` section of your config file (`~/.clarissa/config.json`):
| Property | Type | Description |
|---|---|---|
| `modelPath` | string | Path to the GGUF model file |
| `gpuLayers` | number | Number of layers to offload to GPU (-1 for all) |
| `contextSize` | number | Context window size (default: 8192) |
| `flashAttention` | boolean | Enable flash attention for faster inference |
Run `clarissa download` to fetch recommended GGUF models.
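A typical local-model setup flow, sketched using the model file name from the example config above:

```bash
clarissa download                   # pick and fetch a recommended GGUF model
clarissa models                     # confirm it was downloaded
clarissa use qwen3-8b-q4_k_m.gguf   # activate it for the local-llama provider
```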
MCP Servers
Configure MCP (Model Context Protocol) servers to extend Clarissa with additional tools.
Servers defined in `mcpServers` are automatically loaded on startup.
| Property | Type | Description |
|---|---|---|
| `command` | string | Command to run the MCP server |
| `args` | string[] | Arguments to pass to the command |
| `env` | object | Environment variables for the server |
Use `/mcp` to view connected servers and `/tools` to see available tools.
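The example config above includes a filesystem server; the sketch below shows where the `env` property fits. The `my-server` entry, its command, and the `API_TOKEN` variable are placeholders for illustration, not a real MCP server:

```json
"mcpServers": {
  "filesystem": {
    "command": "npx",
    "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
  },
  "my-server": {
    "command": "node",
    "args": ["./my-server.js"],
    "env": { "API_TOKEN": "your_token_here" }
  }
}
```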
Available Models
Clarissa supports any model available through OpenRouter. Some popular options include:
- `anthropic/claude-sonnet-4` - Claude Sonnet 4 (default)
- `anthropic/claude-opus-4` - Claude Opus 4
- `openai/gpt-4o` - GPT-4o
- `google/gemini-2.0-flash-001` - Gemini 2.0 Flash
- `meta-llama/llama-3.3-70b-instruct` - Llama 3.3 70B
- `deepseek/deepseek-chat` - DeepSeek Chat
You can switch models at runtime using the `/model` command.
See the full list at OpenRouter Models.
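To choose a model at launch instead of in-session, combine the IDs above with the `-m` and `--list-models` options documented under CLI Options below:

```bash
clarissa --list-models                          # show available models and exit
clarissa -m meta-llama/llama-3.3-70b-instruct   # start a session on a specific model
```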
CLI Options
Clarissa accepts command-line arguments for one-shot mode and configuration:
| Option | Description |
|---|---|
| `-c, --continue` | Continue the last session |
| `-m, --model MODEL` | Use a specific model for this session |
| `--list-models` | List available models and exit |
| `--check-update` | Check for available updates |
| `--debug` | Enable debug output |
| `-h, --help` | Show help message |
| `-v, --version` | Show version number |
CLI Commands
Clarissa provides standalone commands for setup and management:
| Command | Description |
|---|---|
| `clarissa init` | Set up Clarissa with your API keys interactively |
| `clarissa upgrade` | Upgrade to the latest version |
| `clarissa providers [NAME]` | List available providers or switch to one |
| `clarissa download [ID]` | Download GGUF models for local inference |
| `clarissa models` | List downloaded local models |
| `clarissa use <FILE>` | Set a downloaded model as active for the local-llama provider |
| `clarissa config` | View current configuration (with masked API keys) |
| `clarissa history` | Show one-shot query history |
| `clarissa app "<message>"` | Open the native macOS app with an optional question (Apple Intelligence) |
Data Storage
Clarissa stores data in your home directory at `~/.clarissa/`:

| Path | Description |
|---|---|
| `~/.clarissa/config.json` | Configuration file (API keys, model, provider settings) |
| `~/.clarissa/sessions/` | Saved conversation sessions (JSON files) |
| `~/.clarissa/memories.json` | Persistent memories across sessions |
| `~/.clarissa/preferences.json` | User preferences (last provider, model, etc.) |
| `~/.clarissa/models/` | Downloaded GGUF models for local inference |
| `~/.clarissa/history.json` | One-shot query history |
| `~/.clarissa/update-check.json` | Update check cache (24-hour interval) |
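Since configuration and session data are ordinary files, inspecting or backing them up is straightforward:

```bash
# See what Clarissa has stored
ls ~/.clarissa/
# Back up the config file (it contains API keys, so keep the copy private)
cp ~/.clarissa/config.json ~/clarissa-config.backup.json
```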
Debug Mode
Enable debug mode to see detailed logging of API calls and tool executions:
```bash
DEBUG=true clarissa
```

Or use the CLI flag:

```bash
clarissa --debug
```

Next Steps
Now that you're configured, learn about the built-in tools or jump straight to using Clarissa.