SoulForge uses layered configuration: global defaults plus per-project overrides. Every setting can be changed at runtime and persisted to either scope.

Config files

| Scope | Path | Purpose |
|---|---|---|
| Global | ~/.soulforge/config.json | Defaults for all projects |
| Project | .soulforge/config.json | Project-specific overrides |
Project settings override global. Session-level settings (changed via commands but not saved) override both.

Config example

{
  "defaultModel": "anthropic/claude-sonnet-4-6",
  "thinking": { "mode": "adaptive" },
  "repoMap": true,
  "semanticSummaries": "ast",
  "diffStyle": "default",
  "chatStyle": "accent",
  "vimHints": true,
  "compaction": {
    "strategy": "v2",
    "triggerThreshold": 0.7,
    "keepRecent": 4
  },
  "taskRouter": {
    "planning": "anthropic/claude-sonnet-4-6",
    "coding": "anthropic/claude-opus-4-6",
    "exploration": "anthropic/claude-sonnet-4-6",
    "webSearch": "anthropic/claude-haiku-3-5",
    "trivial": "anthropic/claude-haiku-3-5",
    "compact": "google/gemini-2.0-flash"
  },
  "agentFeatures": {
    "desloppify": true,
    "tierRouting": true,
    "dispatchCache": true,
    "targetFileValidation": true
  },
  "providers": []
}

Key fields

Model settings

| Field | Type | Default | Description |
|---|---|---|---|
| defaultModel | string | "none" | Active model ID (e.g. "anthropic/claude-sonnet-4-6"). "none" forces model selection on launch. |
| thinking | object | — | Thinking mode config: {mode: "off" \| "adaptive" \| "enabled", budget?: number} |

Display

| Field | Type | Default | Description |
|---|---|---|---|
| diffStyle | string | "default" | Diff display: "default", "sidebyside", "compact" |
| chatStyle | string | "accent" | Chat layout: "accent", "bubble" |
| vimHints | boolean | true | Show Neovim keybinding hints |
| nerdFont | boolean | true | Show Nerd Font icons |

Intelligence

| Field | Type | Default | Description |
|---|---|---|---|
| repoMap | boolean | true | Enable repo map scanning |
| semanticSummaries | string | "ast" | Summary mode: "ast" (tree-sitter), "llm" (LLM-generated), "off" |

Compaction

| Field | Type | Default | Description |
|---|---|---|---|
| compaction.strategy | string | "v2" | "v2" (incremental extraction) or "v1" (LLM summarization) |
| compaction.triggerThreshold | number | 0.7 | Auto-compact when context usage reaches this fraction of the window |
| compaction.resetThreshold | number | 0.4 | Hysteresis reset threshold |
| compaction.keepRecent | number | 4 | Recent messages to preserve verbatim |
| compaction.maxToolResults | number | 30 | Rolling tool result window (V2) |
| compaction.llmExtraction | boolean | true | LLM gap-fill on compact (V2) |
See Compaction for strategy details.
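
The triggerThreshold/resetThreshold pair implies a hysteresis gate: compaction fires once usage crosses the trigger, and does not re-arm until usage falls back below the reset, so a context hovering near 0.7 does not compact repeatedly. The exact firing logic is an assumption; this is a hypothetical sketch of such a gate.

```python
class CompactionGate:
    """Hysteresis gate sketch for auto-compaction (assumed behavior)."""

    def __init__(self, trigger: float = 0.7, reset: float = 0.4):
        self.trigger = trigger
        self.reset = reset
        self.armed = True

    def should_compact(self, usage: float) -> bool:
        """usage = tokens_used / context_window, in [0, 1]."""
        if self.armed and usage >= self.trigger:
            self.armed = False      # fire once, then wait for reset
            return True
        if usage < self.reset:
            self.armed = True       # re-arm after the context shrinks
        return False
```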

Task router

Assign different models to different task types. Configure via /router in the TUI or in config:
{
  "taskRouter": {
    "planning": "anthropic/claude-sonnet-4-6",
    "coding": "anthropic/claude-opus-4-6",
    "exploration": "anthropic/claude-sonnet-4-6",
    "webSearch": "anthropic/claude-haiku-3-5",
    "semantic": "anthropic/claude-haiku-3-5",
    "trivial": "anthropic/claude-haiku-3-5",
    "desloppify": "anthropic/claude-haiku-3-5",
    "compact": "google/gemini-2.0-flash",
    "default": null
  }
}
| Slot | Purpose |
|---|---|
| planning | Plan mode, architecture decisions |
| coding | File edits, implementation |
| exploration | Read-only research, code analysis |
| webSearch | Web search agent model |
| semantic | Repo map semantic summaries |
| trivial | Single-file reads, small edits (auto-detected) |
| desloppify | Cleanup pass after code agents |
| compact | Context compaction summarizer |
| default | Fallback for unmatched tasks |
Resolution order: taskRouter[taskType], then taskRouter.default, then the active model.
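
That resolution order is simple enough to sketch directly; note that a slot set to null (as default is in the example) falls through to the next step.

```python
def resolve_model(task_type: str, task_router: dict, active_model: str) -> str:
    """Resolve a model: slot value, then the default slot, then the active model."""
    model = task_router.get(task_type)
    if model is None:                     # slot unset or explicitly null
        model = task_router.get("default")
    return model if model is not None else active_model
```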

Agent features

Toggle via /agent-features or in config:
{
  "agentFeatures": {
    "desloppify": true,
    "tierRouting": true,
    "dispatchCache": true,
    "targetFileValidation": true
  }
}
| Feature | Default | Description |
|---|---|---|
| desloppify | true | Cleanup agent after code agents (requires desloppify model in router) |
| tierRouting | true | Auto-classify trivial tasks and route to cheap model |
| dispatchCache | true | Cache file reads across dispatch boundaries |
| targetFileValidation | true | Require file paths on dispatch tasks |

Custom providers

Add any OpenAI-compatible API as a provider:
{
  "providers": [
    {
      "id": "deepseek",
      "name": "DeepSeek",
      "baseURL": "https://api.deepseek.com/v1",
      "envVar": "DEEPSEEK_API_KEY",
      "models": ["deepseek-chat", "deepseek-coder"],
      "modelsAPI": "https://api.deepseek.com/v1/models"
    }
  ]
}
| Field | Required | Description |
|---|---|---|
| id | Yes | Provider ID, used in model strings like deepseek/deepseek-chat |
| name | No | Display name (defaults to id) |
| baseURL | Yes | OpenAI-compatible API endpoint |
| envVar | No | Env var name for the API key |
| models | No | Fallback model list (strings or {id, name, contextWindow} objects) |
| modelsAPI | No | URL to fetch models dynamically (OpenAI /v1/models format) |
If a custom provider id matches a built-in (e.g. "anthropic"), it auto-renames to {id}-custom. The built-in is never replaced.
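
The collision rule can be sketched as follows. The set of built-in provider IDs here is an assumption inferred from the API-keys table later in this page; only the rename behavior ({id}-custom, built-in untouched) is documented.

```python
# Assumed built-in ID set -- inferred, not documented.
BUILTIN_PROVIDERS = {"anthropic", "openai", "google", "xai", "openrouter", "ollama"}

def register_custom_provider(provider: dict) -> dict:
    """Rename a custom provider that collides with a built-in ID."""
    if provider["id"] in BUILTIN_PROVIDERS:
        provider = {**provider, "id": provider["id"] + "-custom"}
    return provider
```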

Provider examples

Local LLM server (no API key):
{
  "providers": [{
    "id": "local",
    "name": "Local LLM",
    "baseURL": "http://localhost:8080/v1",
    "models": ["llama-3-70b"]
  }]
}
Corporate proxy:
{
  "providers": [{
    "id": "corp",
    "name": "Corp API Gateway",
    "baseURL": "https://llm.internal.corp.com/v1",
    "envVar": "CORP_LLM_KEY",
    "modelsAPI": "https://llm.internal.corp.com/v1/models"
  }]
}
Multiple providers:
{
  "providers": [
    { "id": "deepseek", "baseURL": "https://api.deepseek.com/v1", "envVar": "DEEPSEEK_API_KEY", "models": ["deepseek-chat"] },
    { "id": "together", "baseURL": "https://api.together.xyz/v1", "envVar": "TOGETHER_API_KEY", "models": ["meta-llama/Llama-3-70b-chat-hf"] },
    { "id": "groq", "baseURL": "https://api.groq.com/openai/v1", "envVar": "GROQ_API_KEY", "modelsAPI": "https://api.groq.com/openai/v1/models" }
  ]
}

Project instructions

SoulForge loads SOULFORGE.md from your project root as project-specific instructions. You can also load instruction files from other AI tools:
| File | Source | Default |
|---|---|---|
| SOULFORGE.md | SoulForge | on |
| CLAUDE.md | Claude Code | off |
| .cursorrules | Cursor | off |
| .github/copilot-instructions.md | GitHub Copilot | off |
| .clinerules | Cline | off |
| .windsurfrules | Windsurf | off |
| .aider.conf.yml | Aider | off |
| AGENTS.md | OpenAI Codex | off |
| .opencode/instructions.md | OpenCode | off |
| AMPLIFY.md | Amp | off |
Toggle via /instructions in the TUI or set in config:
{ "instructionFiles": ["soulforge", "claude", "cursorrules"] }
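
A hypothetical sketch of how the instructionFiles keys map to files: only the three keys shown in the config example are included, since the key spellings for the other sources are not documented here. SOULFORGE.md being the sole default follows from the table above.

```python
# Partial mapping -- only the keys appearing in the documented example.
INSTRUCTION_SOURCES = {
    "soulforge": "SOULFORGE.md",
    "claude": "CLAUDE.md",
    "cursorrules": ".cursorrules",
}

def enabled_instruction_files(config: dict) -> list[str]:
    """Return the instruction files enabled by config, defaulting to SOULFORGE.md."""
    keys = config.get("instructionFiles", ["soulforge"])
    return [INSTRUCTION_SOURCES[k] for k in keys if k in INSTRUCTION_SOURCES]
```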

Scoped configuration

Every setting can be saved to one of three scopes:
| Scope | Persistence | Priority |
|---|---|---|
| Session | Lost on exit | Highest |
| Project | .soulforge/config.json | Medium |
| Global | ~/.soulforge/config.json | Lowest |
Use /model-scope to toggle model persistence between project and global.

Privacy / forbidden files

Block files from AI access with /privacy add <pattern>:
/privacy add .env
/privacy add secrets/**
  • Project scope — patterns in .soulforge/forbidden
  • Global scope — patterns in ~/.soulforge/forbidden
  • Built-in patterns cover .env, .pem, credentials, private_key, id_rsa, .npmrc, .netrc, shadow, passwd, and more
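
Pattern matching along these lines can be sketched with Python's fnmatch as a stand-in for whatever glob engine SoulForge actually uses; fnmatch's "*" also crosses "/", so "secrets/**" behaves like a recursive glob here, and matching the basename as well lets a bare ".env" pattern catch the file at any depth. Both behaviors are assumptions.

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath

def is_forbidden(path: str, patterns: list[str]) -> bool:
    """Check a path against forbidden patterns (full path or basename)."""
    name = PurePosixPath(path).name
    return any(fnmatch(path, p) or fnmatch(name, p) for p in patterns)
```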

API keys

Keys are stored in the OS keychain (macOS Keychain, Linux secret-tool) via --set-key:
soulforge --set-key anthropic sk-ant-...
soulforge --set-key openai sk-...
Alternatively, set env vars in your shell profile:
export ANTHROPIC_API_KEY=sk-ant-...
export OPENAI_API_KEY=sk-...
export GOOGLE_GENERATIVE_AI_API_KEY=...
| Provider | Env Variable |
|---|---|
| Anthropic | ANTHROPIC_API_KEY |
| OpenAI | OPENAI_API_KEY |
| Google | GOOGLE_GENERATIVE_AI_API_KEY |
| xAI | XAI_API_KEY |
| OpenRouter | OPENROUTER_API_KEY |
| LLM Gateway | LLM_GATEWAY_API_KEY |
| Vercel AI Gateway | AI_GATEWAY_API_KEY |
| Ollama | (none — runs locally) |

Storage

/storage shows per-component disk usage:
  • Repo map index (SQLite)
  • Sessions (JSONL)
  • Plans
  • Memory (SQLite)
  • Input history (SQLite)
  • Config files
  • Binaries (CLIProxyAPI, bundled tools)
  • Fonts (Nerd Font symbols)
One-click cleanup for each component.