* initial commit
* removes assessment from docs
* resolves automated review comments
* resolves lint, type, and test issues; refactors and submits
* solves: why do we have to lint the tests xD
* adds Greptile fixes
* solves a type error
* solves a CI error
* refactors auth handling
* solves a failing test after pulling from main
* resolves token naming issue to follow best practices when using Hugging Face (HF)
* fixes curly lint errors
* fixes failing Google API tests from main
* resolves merge conflicts
* resolves failing tests with a defensive check for an undefined OpenRouter API key
* fix: preserve Hugging Face auth-choice intent and token behavior (#13472) (thanks @Josephrp)
* test: resolve auth-choice cherry-pick conflict cleanup (#13472)
---------
Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
* feat: add LiteLLM provider types, env var, credentials, and auth choice
Add litellm-api-key auth choice, LITELLM_API_KEY env var mapping,
setLitellmApiKey() credential storage, and LITELLM_DEFAULT_MODEL_REF.
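A minimal sketch of that wiring, assuming a simple credential-store shape; the store, the mapping lookup, and the placeholder model ref value are assumptions, not OpenClaw's actual API.
```typescript
// Sketch only: the credential store and the model ref value are assumptions.
type CredentialStore = Map<string, { apiKey: string }>;
const credentials: CredentialStore = new Map();

// Env var mapping: the litellm-api-key auth choice reads LITELLM_API_KEY.
const envMap: Record<string, string> = {
  "litellm-api-key": "LITELLM_API_KEY",
};

// Default model reference; hypothetical placeholder value.
const LITELLM_DEFAULT_MODEL_REF = "litellm/<default-model>";

// Credential storage helper named in the commit message.
function setLitellmApiKey(key: string): void {
  credentials.set("litellm", { apiKey: key });
}

// Non-interactive paths can resolve the key through the mapping:
const envKey = process.env[envMap["litellm-api-key"]];
if (envKey) setLitellmApiKey(envKey);
```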
* feat: add LiteLLM onboarding handler and provider config
Add applyLitellmProviderConfig which properly registers
models.providers.litellm with baseUrl, api type, and model definitions.
This fixes the critical bug from PR #6488 where the provider entry was
never created, causing model resolution to fail at runtime.
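A sketch of what that registration plausibly looks like; the config interfaces and the `api` value are assumptions, and only the provider fields named above (baseUrl, api type, model definitions) come from the commit itself.
```typescript
// Hypothetical shapes for illustration; not OpenClaw's real types.
interface ModelDefinition {
  id: string;
  contextWindow?: number;
}

interface ProviderConfig {
  baseUrl: string;
  api: string;
  models: ModelDefinition[];
}

interface ModelsConfig {
  providers: Record<string, ProviderConfig>;
}

// Registers models.providers.litellm so model resolution can find the
// provider entry at runtime (the step PR #6488 missed).
function applyLitellmProviderConfig(
  models: ModelsConfig,
  baseUrl: string,
  definitions: ModelDefinition[],
): ModelsConfig {
  return {
    ...models,
    providers: {
      ...models.providers,
      litellm: {
        baseUrl,
        api: "openai-completions", // assumed: LiteLLM exposes an OpenAI-compatible API
        models: definitions,
      },
    },
  };
}
```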
* docs: add LiteLLM provider documentation
Add setup guide covering onboarding, manual config, virtual keys,
model routing, and usage tracking. Link from provider index.
* docs: add LiteLLM to sidebar navigation in docs.json
Add providers/litellm to both English and Chinese provider page lists
so the docs page appears in the sidebar navigation.
* test: add LiteLLM non-interactive onboarding test
Wire up litellmApiKey flag inference and auth-choice handler for the
non-interactive onboarding path, and add an integration test covering
profile, model default, and credential storage.
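The assertion pattern for that integration test might look like the following; `runNonInteractiveOnboarding` and its result shape are hypothetical stand-ins for the real onboarding entry point.
```typescript
import { describe, expect, it } from "vitest";

// Hypothetical stand-ins; the real entry point and result shape live in OpenClaw.
interface OnboardingResult {
  profile: { provider: string; model: string };
  credentials: Record<string, { apiKey: string } | undefined>;
}
declare function runNonInteractiveOnboarding(flags: {
  litellmApiKey: string;
}): Promise<OnboardingResult>;

describe("LiteLLM non-interactive onboarding", () => {
  it("sets the profile, default model, and stored credential", async () => {
    const result = await runNonInteractiveOnboarding({ litellmApiKey: "sk-test" });
    expect(result.profile.provider).toBe("litellm");
    expect(result.profile.model).toMatch(/^litellm\//); // assumed model ref prefix
    expect(result.credentials["litellm"]?.apiKey).toBe("sk-test");
  });
});
```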
* fix: register --litellm-api-key CLI flag and add preferred provider mapping
Wire up the missing Commander CLI option, action handler mapping, and
help text for --litellm-api-key. Add litellm-api-key to the preferred
provider map for consistency with other providers.
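The Commander side of this is roughly the following sketch; only the flag name comes from the commit, while the help text, handler body, and mapping variable are assumptions.
```typescript
import { Command } from "commander";

const program = new Command();

program
  // Flag name from the commit; description text is an assumption.
  .option("--litellm-api-key <key>", "API key for a LiteLLM proxy")
  .action((options: { litellmApiKey?: string }) => {
    if (options.litellmApiKey) {
      // Commander camel-cases --litellm-api-key into options.litellmApiKey.
      process.env.LITELLM_API_KEY ??= options.litellmApiKey;
    }
  });

// Preferred provider mapping, mirroring the other providers' entries.
const preferredProviderByAuthChoice: Record<string, string> = {
  "litellm-api-key": "litellm",
};

program.parse(process.argv);
```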
* fix: remove zh-CN sidebar entry for litellm (no localized page yet)
* style: format buildLitellmModelDefinition return type
* fix(onboarding): harden LiteLLM provider setup (#12823)
* refactor(onboarding): keep auth-choice provider dispatcher under size limit
---------
Co-authored-by: Peter Steinberger <steipete@gmail.com>
* fix(ollama): add streaming config and fix OLLAMA_API_KEY env var support
Adds a configurable streaming parameter to model configuration and sets streaming
to false by default for Ollama models. This addresses the corrupted response
issue caused by upstream SDK bug badlogic/pi-mono#1205 where interleaved
content/reasoning deltas in streaming responses cause garbled output.
Changes:
- Add streaming param to AgentModelEntryConfig type
- Set streaming: false default for Ollama models
- Add OLLAMA_API_KEY to envMap (was missing, preventing env var auth)
- Document streaming configuration in Ollama provider docs
- Add tests for Ollama model configuration
Users can now configure streaming per-model (see the sketch below), and Ollama
authentication via the OLLAMA_API_KEY environment variable works correctly.
Fixes #8839
Related: badlogic/pi-mono#1205
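A sketch of the resulting per-model config; only `streaming` is taken from the change itself, while the other field names and the model ref format are assumptions about AgentModelEntryConfig.
```typescript
// Assumed shape; only the `streaming` field is from the change itself.
interface AgentModelEntryConfig {
  model: string;
  streaming?: boolean; // new: defaults to false for Ollama models
}

// streaming: false avoids the garbled output caused by interleaved
// content/reasoning deltas in the upstream SDK (badlogic/pi-mono#1205).
const ollamaModels: AgentModelEntryConfig[] = [
  { model: "ollama/gpt-oss:20b", streaming: false }, // illustrative model ref
];

// OLLAMA_API_KEY now resolves through the env map like other providers.
const ollamaApiKey = process.env.OLLAMA_API_KEY;
```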
* docs(ollama): use gpt-oss:20b as primary example
Updates documentation to use gpt-oss:20b as the primary example model
since it supports tool calling. The model examples now show:
- gpt-oss:20b as the primary recommended model (tool-capable)
- llama3.3 and qwen2.5-coder:32b as additional options
This provides users with a clear, working example that supports
OpenClaw's tool calling features.
* chore: remove unused vi import from ollama test
Both the English and Chinese documentation had an incorrect configuration template
using 'fallback' instead of 'fallbacks' in the agents.defaults.model config
(see the corrected sketch below).
Co-authored-by: damaozi <1811866786@qq.com>
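For reference, a corrected template sketch; only the `fallbacks` key (plural) is from the fix, while the surrounding structure and model refs are assumptions.
```typescript
const config = {
  agents: {
    defaults: {
      model: {
        primary: "ollama/gpt-oss:20b", // hypothetical primary model ref
        fallbacks: ["ollama/llama3.3"], // correct key: `fallbacks`, not `fallback`
      },
    },
  },
};
```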