# OpenClaw Configuration Troubleshooting Summary

## Objective
Configure a functional local LLM agent for OpenClaw on a Mac with 16GB RAM. Result: at the user's request, the system was left configured with gpt-oss:20b, which is currently unresponsive due to hardware memory limits.
## Attempt History & Findings
We tested multiple models and configurations to find a balance between “Smart”, “Working”, and “Fits in RAM”.
| Attempt | Model | Configuration | Result | Root Cause |
|---|---|---|---|---|
| 1 | mistral:latest | Ollama (Completions) | Partial Success | Hallucination. Model works and replies to code tasks, but recites config files when asked “Who are you?”, due to the raw completion API format. |
| 2 | mistral:latest | OpenAI (Chat) | Crash | Config Invalid. OpenClaw validator rejected the openai provider block config, even with fixes. |
| 3 | deepseek-r1 | Ollama (Completions) | Failure | Empty Bubble. Gateway runs, but model output is silent. Likely due to the parser failing on `<think>` tags. Disabling reasoning did not fix it. |
| 4 | gpt-oss:20b | Ollama (8k Context) | Failure | Software Block. OpenClaw refuses to run with context < 16,000. |
| 5 | gpt-oss:20b | Ollama (16k Context) | Hung / “Queue” | Hardware Limit. 13GB Model + 16k Context exceeds 16GB physical RAM. System swaps heavily; messages stay in “Queue” indefinitely. |
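The arithmetic behind Attempt 5's hang can be sketched roughly. All figures below are assumptions for illustration (the 13 GiB weights figure from the table, hypothetical transformer dimensions for the KV-cache estimate, and an assumed macOS baseline), not measured values:

```python
# Rough RAM-budget sketch for Attempt 5 (all figures are assumptions).
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """Approximate KV-cache size: 2 tensors (K and V) per layer, fp16 elements."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem / 2**30

WEIGHTS_GIB = 13.0      # reported size of the gpt-oss:20b model
OS_OVERHEAD_GIB = 3.0   # assumed macOS + background apps baseline

# Hypothetical model dimensions, chosen only to show the shape of the math:
cache = kv_cache_gib(n_layers=24, n_kv_heads=8, head_dim=64, ctx_len=16384)
total = WEIGHTS_GIB + cache + OS_OVERHEAD_GIB
print(f"KV cache ~{cache:.2f} GiB, total ~{total:.2f} GiB")
print("Fits in 16 GiB RAM?", total <= 16.0)
```

Even with a modest KV-cache estimate, the weights alone leave almost no headroom, so the total tips past physical RAM and the system falls back to heavy swapping.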
## Current System State

- Model: `gpt-oss:20b`
- Context: 16384
- Provider: `ollama`
- Status: process running, agent stuck.
## Recommendation for Future

To get a working system, you must resolve the Catch-22 between software requirements and hardware limits:
- Use Mistral: it fits in RAM and satisfies the software checks. Accepting the “config hallucination” is the only path to a functional agent on this specific hardware today.
- Upgrade hardware: 32GB+ RAM would allow `gpt-oss:20b` to run smoothly.
- Wait for an OpenClaw update: a future version might support `<think>` tags (fixing DeepSeek) or relax the 16k context minimum (allowing gpt-oss:20b to run at 8k).
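The Catch-22 can be expressed as a small feasibility check: a model is usable only if its context length clears OpenClaw's 16,000-token floor *and* its total footprint fits in RAM. The 16k floor and the 16 GiB limit come from the attempts above; the per-model sizes and the KV-cache scaling factor are rough assumptions:

```python
# Sketch of the software-vs-hardware Catch-22 (model sizes are rough assumptions).
MIN_CONTEXT = 16_000   # OpenClaw's observed minimum context requirement
RAM_GIB = 16.0         # physical RAM on this Mac

def feasible(weights_gib, ctx_len, kv_gib_per_16k=0.75, overhead_gib=3.0):
    """True only if the context clears OpenClaw's floor AND everything fits in RAM."""
    kv = kv_gib_per_16k * ctx_len / 16_384  # KV cache scales linearly with context
    return ctx_len >= MIN_CONTEXT and weights_gib + kv + overhead_gib <= RAM_GIB

print(feasible(weights_gib=13.0, ctx_len=16_384))  # gpt-oss:20b at 16k context
print(feasible(weights_gib=4.1, ctx_len=16_384))   # mistral:latest (~4 GiB, assumed)
print(feasible(weights_gib=13.0, ctx_len=8_192))   # gpt-oss:20b at 8k context
```

The check makes the trap explicit: gpt-oss:20b fails at 16k on RAM and at 8k on the software floor, while a ~4 GiB model like Mistral is the only configuration that passes both constraints.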