OpenClaw on Mac M1

OpenClaw Configuration Troubleshooting Summary

Objective

Configure a functional local LLM agent for OpenClaw on a Mac with 16GB RAM. Result: per the user's request, the system was left configured with gpt-oss:20b, which is currently unresponsive due to hardware memory limits.

Attempt History & Findings

We tested multiple models and configurations to find a balance between “Smart”, “Working”, and “Fits in RAM”.

| Attempt | Model | Configuration | Result | Root Cause |
|---|---|---|---|---|
| 1 | mistral:latest | Ollama (Completions) | Partial success | Hallucination. The model works and replies to code tasks, but recites config files when asked "Who are you?" due to the raw completion API format. |
| 2 | mistral:latest | OpenAI (Chat) | Crash | Invalid config. The OpenClaw validator rejected the `openai` provider block, even with fixes. |
| 3 | deepseek-r1 | Ollama (Completions) | Failure | Empty bubble. The gateway runs, but the model's output is silent, likely because the parser fails on `<think>` tags. Disabling reasoning did not fix it. |
| 4 | gpt-oss:20b | Ollama (8k context) | Failure | Software block. OpenClaw refuses to run with context < 16,000. |
| 5 | gpt-oss:20b | Ollama (16k context) | Hung ("Queue") | Hardware limit. A 13GB model plus 16k context exceeds 16GB of physical RAM. The system swaps heavily; messages stay in "Queue" indefinitely. |
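The arithmetic behind Attempt 5 can be sketched as a back-of-envelope check. The 13GB weight size and 16GB RAM figures come from the table above; the per-token KV-cache cost and the OS overhead are rough assumptions, not measured values:

```python
# Back-of-envelope check: does model weights + context KV cache fit in RAM?
# Weights (13 GB) and RAM (16 GB) are from the attempt table; the per-token
# KV-cache cost and macOS overhead below are assumed ballparks.

def fits_in_ram(weights_gb: float, context_tokens: int,
                kv_bytes_per_token: int = 150_000,  # assumed ~150 KB/token
                os_overhead_gb: float = 3.0,        # macOS + apps, a guess
                ram_gb: float = 16.0) -> bool:
    kv_gb = context_tokens * kv_bytes_per_token / 1e9
    return weights_gb + kv_gb + os_overhead_gb <= ram_gb

print(fits_in_ram(13.0, 16_384))  # gpt-oss:20b at 16k context -> False
print(fits_in_ram(4.1, 8_192))    # mistral (~4.1 GB quantized) at 8k -> True
```

Even with generous assumptions, gpt-oss:20b at the mandatory 16k context lands several gigabytes over the physical limit, which matches the observed swapping.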

Current System State

  • Model: gpt-oss:20b
  • Context: 16384
  • Provider: ollama
  • Status: Process Running, Agent Stuck.
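Expressed as a config fragment, the state above looks roughly like this. The key names are a guess, since OpenClaw's actual config schema is not shown in this summary; only the values are taken from the list above:

```json
{
  "provider": "ollama",
  "model": "gpt-oss:20b",
  "context": 16384
}
```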

Recommendation for Future

To get a working system, you must resolve the “Catch-22” between Software Requirements and Hardware Limits:

  1. Use Mistral: It fits in RAM and satisfies software checks. Accepting the “config hallucination” is the only path to a functional agent on this specific hardware today.
  2. Upgrade Hardware: 32GB+ RAM would allow gpt-oss:20b to run smoothly.
  3. Wait for OpenClaw Update: A future version might support <think> tags (fixing DeepSeek) or relax the 16k context minimum (fixing GPT-OSS 8k).