## ✨ Key Features
- Flexible AI Provider Selection: Choose Anthropic, Google Gemini, Groq, or Ollama directly at startup.
- Anthropic-Compatible Proxy: Seamlessly integrate Gemini, Groq, and Ollama through a local proxy while keeping a consistent terminal experience.
- Consistent User Experience: Designed to feel like one tool, with a shared launcher, prompts, environment variables, and documentation.
- Local Inference Support: First-class support for Ollama, enabling local inference without relying on cloud API providers.
## 🛠️ Tech Stack & Supported Providers

Claw Dev integrates with several AI models and services through a Node.js-based client with an internal compatibility proxy.
| Component / Provider | Technology / Integration |
|---|---|
| Core Client | Node.js-based terminal client with an internal Anthropic-compatible proxy |
| Anthropic | Direct integration via account login or `ANTHROPIC_API_KEY` |
| Google Gemini | Integrated via local Anthropic-compatible proxy, requires `GEMINI_API_KEY` |
| Groq | Integrated via local Anthropic-compatible proxy, requires `GROQ_API_KEY` |
| Ollama | Integrated via local Anthropic-compatible proxy, supports local or remote Ollama servers (e.g., `qwen3`) |
## 🚀 Installation & Usage

### 1. Requirements
Ensure you have the following installed:
- Node.js: Version 22 or newer.
- npm: Node Package Manager (comes with Node.js).
- Git for Windows: Recommended for Windows users for optimal terminal workflow.
Provider-specific requirements:
- Anthropic: An Anthropic account for in-app login, or `ANTHROPIC_API_KEY`.
- Gemini: `GEMINI_API_KEY`.
- Groq: `GROQ_API_KEY`.
- Ollama: A running Ollama installation with at least one pulled model (e.g., `qwen3`).
### 2. Installation Steps

Clone the repository:

```
git clone https://github.com/akariwill/Claw-Dev.git
```

From the repository root:

```
cd E:\Claw-Dev   # Adjust path if different
npm install
copy .env.example .env
```

Note: Editing `.env` is optional. Claw Dev can prompt for missing values interactively when it starts.
### 3. Quick Start

Start Claw Dev from the repository root:

```
npm run claw-dev
```

Alternatively, launch directly from the bundled client directory:

```
cd E:\Claw-Dev\claude-code   # Adjust path if different
.\claw-dev.cmd
```
When Claw Dev starts, it presents a provider selector:
1. Anthropic
2. Gemini
3. Groq
4. Ollama
If a required API key is missing, Claw Dev will prompt for it.
## 🤖 Architecture Overview
Claw Dev operates in two primary modes to maintain a consistent terminal experience while supporting diverse model backends:
- Anthropic Mode: The bundled client communicates directly with Anthropic APIs.
- Compatibility Mode: The bundled client talks to a local proxy (`src/anthropicCompatProxy.ts`) that translates Anthropic-style `/v1/messages` requests into native API calls for Gemini, Groq, or Ollama. A minimal sketch of this translation follows below.
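To make compatibility mode concrete, here is a deliberately minimal standalone sketch of the translation idea, using Ollama as the backend. It is not the actual `src/anthropicCompatProxy.ts`; the file name is hypothetical, and streaming, authentication, and Anthropic content blocks (non-string message content) are omitted for brevity.

```ts
// toy-compat-proxy.ts — run with: npx tsx toy-compat-proxy.ts
// Illustrative sketch only; NOT the real src/anthropicCompatProxy.ts.
import http from "node:http";

const OLLAMA = process.env.OLLAMA_BASE_URL ?? "http://127.0.0.1:11434";

http.createServer(async (req, res) => {
  if (req.method !== "POST" || req.url !== "/v1/messages") {
    res.statusCode = 404;
    return res.end();
  }

  // Read the Anthropic-style request body: { model, max_tokens, messages, ... }
  let body = "";
  for await (const chunk of req) body += chunk;
  const anthropicReq = JSON.parse(body);

  // Translate it into a native Ollama /api/chat call.
  const upstream = await fetch(`${OLLAMA}/api/chat`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({
      model: process.env.OLLAMA_MODEL ?? anthropicReq.model,
      messages: anthropicReq.messages, // assumes plain string content
      stream: false,
      options: { num_predict: anthropicReq.max_tokens },
    }),
  });
  const text = (await upstream.json()).message?.content ?? "";

  // Wrap the reply back into an Anthropic-style message response.
  res.writeHead(200, { "content-type": "application/json" });
  res.end(JSON.stringify({
    type: "message",
    role: "assistant",
    content: [{ type: "text", text }],
  }));
}).listen(8789, () => console.log("toy compat proxy listening on :8789"));
```

The real proxy applies the same pattern to Gemini and Groq, with each backend's native request and response shapes swapped in.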
## 💡 How To Use Ollama With Claw Dev
Ollama offers local inference, ideal for those who prefer not to depend on cloud API providers.
### 1. Install Ollama

Install Ollama from the official download page: [Ollama Downloads](https://ollama.com/download)

Ensure the Ollama application or service is running after installation.
### 2. Pull a Local Model

For a quick start, pull a lightweight model:

```
ollama pull qwen3
```

Verify model availability:

```
ollama list
```
### 3. Start the Ollama Server

If it is not already running in the background:

```
ollama serve
```

The default local API base URL is `http://127.0.0.1:11434`.
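As an optional sanity check (a standalone script, not part of Claw Dev), you can confirm the server is reachable by querying Ollama's `/api/tags` endpoint, which lists the models you have pulled:

```ts
// check-ollama.ts — run with: npx tsx check-ollama.ts
// Queries Ollama's model-listing endpoint to confirm the server is up.
const base = process.env.OLLAMA_BASE_URL ?? "http://127.0.0.1:11434";

const res = await fetch(`${base}/api/tags`);
if (!res.ok) throw new Error(`Ollama not reachable at ${base} (HTTP ${res.status})`);

const { models } = await res.json();
console.log("Pulled models:", models.map((m: { name: string }) => m.name).join(", ") || "(none)");
```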
### 4. Start Claw Dev and Select Ollama

```
cd E:\Claw-Dev   # Adjust path if different
npm run claw-dev
```

Then choose option 4 (Ollama). Claw Dev will route requests through its local compatibility proxy to your Ollama server.
### 5. Optional Environment Configuration for Ollama

Preconfigure Ollama mode in your `.env` file:

```
OLLAMA_BASE_URL=http://127.0.0.1:11434
OLLAMA_MODEL=qwen3
OLLAMA_API_KEY=          # Not required for local Ollama on localhost
OLLAMA_KEEP_ALIVE=30m    # Keeps the model loaded, reducing warm-up time
OLLAMA_NUM_CTX=2048      # Controls prompt context size
OLLAMA_NUM_PREDICT=128   # Limits output length, can reduce latency
```
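For reference, these tuning variables correspond to parameters in Ollama's native API. A sketch of the likely mapping (the exact wiring inside Claw Dev's proxy may differ):

```ts
// Sketch: how the .env tuning values above map onto an Ollama /api/chat
// request body. keep_alive, options.num_ctx, and options.num_predict are
// real Ollama API fields; the surrounding wiring is assumed.
const ollamaRequest = {
  model: process.env.OLLAMA_MODEL ?? "qwen3",
  messages: [{ role: "user", content: "Hello" }],
  stream: false,
  keep_alive: process.env.OLLAMA_KEEP_ALIVE ?? "30m", // keeps the model resident in memory
  options: {
    num_ctx: Number(process.env.OLLAMA_NUM_CTX ?? 2048),        // prompt context window
    num_predict: Number(process.env.OLLAMA_NUM_PREDICT ?? 128), // max tokens to generate
  },
};
```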
### 6. Verify Ollama Usage

To check which models are loaded and their processor usage:

```
ollama ps
```

To confirm the Claw Dev proxy health, start the compatibility proxy:

```
npm run proxy:compat
```

Then open `http://127.0.0.1:8789/health` in your browser. A JSON response with the active provider and model should appear when Ollama mode is configured.
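If you prefer a scripted check over the browser, the same endpoint can be queried with Node's built-in `fetch`. This is a standalone sketch; the exact JSON fields are whatever the proxy returns, assumed here to include the active provider and model:

```ts
// health-check.ts — run with: npx tsx health-check.ts
// Queries the compatibility proxy's health endpoint described above.
const res = await fetch("http://127.0.0.1:8789/health");
if (!res.ok) throw new Error(`Proxy not healthy (HTTP ${res.status})`);

// Field names are assumptions; inspect the raw JSON if they differ.
const health = await res.json();
console.log("Proxy health:", JSON.stringify(health, null, 2));
```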
### 7. Ollama Performance Tuning
Consider these points for optimal performance:
- Larger context windows and longer outputs generally lead to slower responses.
- First-token latency is usually highest on the initial request after a model loads.
- CPU-only inference is significantly slower than GPU-backed inference.
Recommended starting values for responsiveness:

```
OLLAMA_KEEP_ALIVE=30m
OLLAMA_NUM_CTX=2048
OLLAMA_NUM_PREDICT=128
```

Adjust `OLLAMA_NUM_CTX` to trade quality against speed, and reduce `OLLAMA_NUM_PREDICT` for shorter answers and lower latency. If `ollama ps` shows 100% CPU, slow generation is expected; consider a smaller model, lower `OLLAMA_NUM_CTX` and `OLLAMA_NUM_PREDICT` values, and a longer `OLLAMA_KEEP_ALIVE`.
## ⚙️ Recommended Environment Variables

Configure your `.env` file with the appropriate API keys and model names:

### Anthropic

```
ANTHROPIC_API_KEY=your_anthropic_api_key_here
ANTHROPIC_MODEL=claude-sonnet-4-20250514
```

### Gemini

```
GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.5-flash
```

### Groq

```
GROQ_API_KEY=your_groq_api_key_here
GROQ_MODEL=openai/gpt-oss-20b
```

### Ollama

```
OLLAMA_BASE_URL=http://127.0.0.1:11434
OLLAMA_MODEL=qwen3
OLLAMA_API_KEY=
OLLAMA_KEEP_ALIVE=30m    # Keeps the model loaded, reducing warm-up time
OLLAMA_NUM_CTX=2048      # Controls prompt context size
OLLAMA_NUM_PREDICT=128   # Limits output length, can reduce latency
```
## Useful CLI Commands

### Check Version

```
cd E:\Claw-Dev\claude-code   # Adjust path if different
.\claw-dev.cmd --version
```

### Force a Specific Provider

Skip the provider menu:

```
.\claw-dev.cmd --provider anthropic
.\claw-dev.cmd --provider gemini
.\claw-dev.cmd --provider groq
.\claw-dev.cmd --provider ollama
```

Legacy aliases (`--provider claude`, `--provider grok`) are also supported.

### One-Shot Prompt

```
echo "Summarize this repository" | .\claw-dev.cmd --bare -p
```
## 📂 Project Structure

```
claw-dev/
├── claude-code/                  # Bundled terminal client and Windows launchers
│   └── ...
├── src/anthropicCompatProxy.ts   # Local Anthropic-compatible proxy for Gemini, Groq, Ollama
├── .env.example                  # Optional environment template for local setup
├── package.json                  # Root scripts for launching, building, and validating
└── ...                           # Other project files
```
## 🔒 Git Privacy Before Publishing

Before making public commits, verify your local Git identity.

Recommended settings:

```
git config user.name "YOURUSERNAME"
git config user.email "YOUREMAIL"
```

Verify the active values:

```
git config user.name
git config user.email
```

Important notes:
- `.env`, `node_modules`, `dist`, and `*.log` files are ignored by `.gitignore`.
- Always review `git status` before staging and `git diff --cached` before pushing.
## ⁉️ Troubleshooting

### Ollama does not answer

- Ensure Ollama is installed and its service/app is running.
- Confirm `ollama serve` is active if required.
- Verify the selected model was pulled successfully.
- Check that `OLLAMA_BASE_URL` in `.env` points to the correct server.
### Ollama answers slowly

Common causes include CPU-only inference, models too large for your hardware, overly large context windows, or long requested outputs.

Use `ollama ps` to inspect loaded models. If the PROCESSOR column shows 100% CPU, slow generation is expected; consider a smaller model, lower `OLLAMA_NUM_CTX` and `OLLAMA_NUM_PREDICT` values, and a longer `OLLAMA_KEEP_ALIVE`.
### Cloud providers work, but Ollama does not

This usually means Claw Dev itself is working, but your local Ollama server is unreachable or lacks the requested model.
## ✅ Verification

Run these commands for useful checks:

```
npm run check
npm run build
npm run claw-dev -- --version
```
## 🔗 References
Official documentation and resources used for this setup: