Claw Dev

A multi-provider coding assistant launcher.

✨ Key Features

  • One launcher for four backends: Anthropic, Google Gemini, Groq, and Ollama
  • Interactive provider selector that prompts for any missing API key
  • Local Anthropic-compatible proxy, so the bundled client works unchanged with every backend
  • Fully local operation with Ollama, with no cloud API dependency
  • Windows launcher scripts (`claw-dev.cmd`) plus npm scripts at the repository root

🛠️ Tech Stack & Supported Providers

Claw Dev integrates with multiple AI models and services through a Node.js/TypeScript client with an internal compatibility proxy.

| Component / Provider | Technology / Integration |
| --- | --- |
| Core Client | Node.js + TypeScript |
| Anthropic | Direct integration via account login or `ANTHROPIC_API_KEY` |
| Google Gemini | Local Anthropic-compatible proxy; requires `GEMINI_API_KEY` |
| Groq | Local Anthropic-compatible proxy; requires `GROQ_API_KEY` |
| Ollama | Local Anthropic-compatible proxy; supports local or remote Ollama servers (e.g., `qwen3`) |

🚀 Installation & Usage

1. Requirements

Ensure you have the following installed:

  • Node.js 22 or newer (npm is included); see the version check below
  • Git, to clone the repository

Provider-specific requirements:

  • Anthropic: an account login or `ANTHROPIC_API_KEY`
  • Gemini: `GEMINI_API_KEY`
  • Groq: `GROQ_API_KEY`
  • Ollama: a running local or remote Ollama server (no API key needed on localhost)
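
To confirm the Node.js requirement is met, check the versions from any terminal:

node --version
npm --version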

2. Installation Steps

Clone the repository:

git clone https://github.com/akariwill/Claw-Dev.git

From the repository root:

cd E:\Claw-Dev # Adjust path if different
npm install
copy .env.example .env

Note: Editing .env is optional. Claw Dev can prompt for missing values interactively when it starts.
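
As a minimal sketch, a `.env` that preconfigures only Groq could contain just the two Groq values from the reference section below (placeholders shown):

GROQ_API_KEY=your_groq_api_key_here
GROQ_MODEL=openai/gpt-oss-20b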

3. Quick Start

Start Claw Dev from the repository root:

npm run claw-dev

Alternatively, launch directly from the bundled client directory:

cd E:\Claw-Dev\claude-code # Adjust path if different
.\claw-dev.cmd

When Claw Dev starts, it presents a provider selector:

  1. Anthropic
  2. Gemini
  3. Groq
  4. Ollama

If a required API key is missing, Claw Dev will prompt for it.
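
If the launcher honors variables already set in your shell (typical for dotenv-based setups, though an assumption here), you can also supply a key per session instead of editing `.env`. In PowerShell:

$env:GEMINI_API_KEY = "your_gemini_api_key_here"
npm run claw-dev

In cmd, run set GEMINI_API_KEY=your_gemini_api_key_here before launching.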

🤖 Architecture Overview

Claw Dev operates in two primary modes to maintain a consistent terminal experience while supporting diverse model backends:

  1. Anthropic Mode:
    • The bundled client communicates directly with Anthropic APIs.
  2. Compatibility Mode:
    • The bundled client interacts with a local proxy (`src/anthropicCompatProxy.ts`).
    • This local proxy translates Anthropic-style `/v1/messages` requests into native API calls for Gemini, Groq, or Ollama.
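
As a concrete sketch of what Compatibility Mode accepts, you can post an Anthropic-style request to the proxy yourself. This assumes the proxy serves `/v1/messages` on port 8789 (the port its `/health` endpoint uses, per the verification section below) and needs no auth headers locally; run it from cmd, which accepts the \" escaping:

curl http://127.0.0.1:8789/v1/messages -H "Content-Type: application/json" -d "{\"model\": \"qwen3\", \"max_tokens\": 128, \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}]}"

The body follows the Anthropic Messages API shape (`model`, `max_tokens`, `messages`); the proxy translates it into the selected backend's native call.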

💡 How To Use Ollama With Claw Dev

Ollama offers local inference, ideal for those who prefer not to depend on cloud API providers.

1. Install Ollama

Install Ollama from its official download page: https://ollama.com/download

Ensure the Ollama application or service is running after installation.

2. Pull a Local Model

For a quick start, pull a lightweight model:

ollama pull qwen3

Verify model availability:

ollama list

3. Start the Ollama Server

If not already running in the background:

ollama serve

The default local API base URL is http://127.0.0.1:11434.
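
To confirm the server is reachable at that address, query the standard Ollama model-list endpoint; it returns JSON describing your pulled models:

curl http://127.0.0.1:11434/api/tags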

4. Start Claw Dev and Select Ollama

cd E:\Claw-Dev # Adjust path if different
npm run claw-dev

Then choose option 4. Ollama.

Claw Dev will route requests through its local compatibility proxy to your Ollama server.

5. Optional Environment Configuration for Ollama

Preconfigure Ollama mode in your .env file:

OLLAMA_BASE_URL=http://127.0.0.1:11434
OLLAMA_MODEL=qwen3
OLLAMA_API_KEY= # Not required for local Ollama on localhost
OLLAMA_KEEP_ALIVE=30m # Keeps model loaded, reduces warm-up time
OLLAMA_NUM_CTX=2048 # Controls prompt context size
OLLAMA_NUM_PREDICT=128 # Limits output length, can reduce latency
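
Independent of Claw Dev, you can smoke-test the model with Ollama's own CLI before wiring it up:

ollama run qwen3 "Reply with OK"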

6. Verify Ollama Usage

To check which models are loaded and their processor usage:

ollama ps

To confirm the Claw Dev proxy health:

npm run proxy:compat

Then open http://127.0.0.1:8789/health in your browser. A JSON response with the active provider and model should appear when Ollama mode is configured.
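
The same health check works from a terminal (curl.exe ships with current Windows releases):

curl http://127.0.0.1:8789/health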

7. Ollama Performance Tuning

Consider these points for optimal performance:

  • Hardware: GPU inference is much faster; `ollama ps` showing 100% CPU means CPU-only generation.
  • Model size: smaller models respond faster on modest hardware.
  • Context size (`OLLAMA_NUM_CTX`): larger contexts improve quality but slow generation.
  • Output length (`OLLAMA_NUM_PREDICT`): shorter limits reduce latency.
  • Keep-alive (`OLLAMA_KEEP_ALIVE`): keeping the model loaded avoids warm-up cost.

Recommended starting values for responsiveness:

OLLAMA_KEEP_ALIVE=30m
OLLAMA_NUM_CTX=2048
OLLAMA_NUM_PREDICT=128

Adjust `OLLAMA_NUM_CTX` to trade quality against speed, and reduce `OLLAMA_NUM_PREDICT` for shorter answers and lower latency. If `ollama ps` shows 100% CPU, slow generation is expected: prefer a smaller model, lower `OLLAMA_NUM_CTX` and `OLLAMA_NUM_PREDICT`, and use `OLLAMA_KEEP_ALIVE` to avoid repeated warm-up costs.
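
To measure the effect of a change rather than guess, Ollama's CLI can report timing statistics for a single run; the `--verbose` flag prints token counts and evaluation rates:

ollama run qwen3 --verbose "Explain what a mutex is in one sentence."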

⚙️ Recommended Environment Variables

Configure your .env file with the appropriate API keys and model names:

Anthropic

ANTHROPIC_API_KEY=your_anthropic_api_key_here
ANTHROPIC_MODEL=claude-sonnet-4-20250514

Gemini

GEMINI_API_KEY=your_gemini_api_key_here
GEMINI_MODEL=gemini-2.5-flash

Groq

GROQ_API_KEY=your_groq_api_key_here
GROQ_MODEL=openai/gpt-oss-20b

Ollama

OLLAMA_BASE_URL=http://127.0.0.1:11434
OLLAMA_MODEL=qwen3
OLLAMA_API_KEY=
OLLAMA_KEEP_ALIVE=30m # Keeps model loaded, reduces warm-up time
OLLAMA_NUM_CTX=2048 # Controls prompt context size
OLLAMA_NUM_PREDICT=128 # Limits output length, can reduce latency

CLI Useful Commands

Check Version

cd E:\Claw-Dev\claude-code # Adjust path if different
.\claw-dev.cmd --version

Force Specific Provider

Skip the provider menu:

.\claw-dev.cmd --provider anthropic
.\claw-dev.cmd --provider gemini
.\claw-dev.cmd --provider groq
.\claw-dev.cmd --provider ollama

Legacy aliases (--provider claude, --provider grok) are also supported.

One-Shot Prompt

echo "Summarize this repository" | .\claw-dev.cmd --bare -p

📂 Project Structure

claw-dev/
├── claude-code/                 # Bundled terminal client and Windows launchers
│   └── ...
├── src/
│   └── anthropicCompatProxy.ts  # Local Anthropic-compatible proxy for Gemini, Groq, Ollama
├── .env.example                 # Optional environment template for local setup
├── package.json                 # Root scripts for launching, building, and validating
└── ...                          # Other project files

🔒 Git Privacy Before Publishing

Before making public commits, check which Git identity will be recorded on them.

Recommended settings:

git config user.name "YOURUSERNAME"
git config user.email "YOUREMAIL"

Verify active values:

git config user.name
git config user.email

Important notes: plain `git config` (without `--global`) applies only to the current repository, and commits made before the change keep their original author identity.
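
To apply an identity to every repository on the machine rather than just this one (standard Git behavior, nothing Claw Dev-specific):

git config --global user.name "YOURUSERNAME"
git config --global user.email "YOUREMAIL"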

⁉️ Troubleshooting

Ollama does not answer

Check that the Ollama server is running (`ollama serve`), that the requested model has been pulled (`ollama list`), and that `OLLAMA_BASE_URL` points at the right host and port.

Ollama answers slowly

Common causes include CPU-only inference, excessively large models for your hardware, context windows that are too large, or long requested outputs.

Use `ollama ps` to inspect loaded models. If PROCESSOR shows 100% CPU, slow generation is expected; see the performance tuning section above for smaller models and the `OLLAMA_NUM_CTX`, `OLLAMA_NUM_PREDICT`, and `OLLAMA_KEEP_ALIVE` settings.

Cloud providers work, but Ollama does not

This usually means Claw Dev itself is working, but your local Ollama server is unreachable or does not have the requested model.
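
A quick way to narrow it down, using standard Ollama commands plus the base URL from your `.env`:

ollama list
curl http://127.0.0.1:11434/api/tags

If the model is missing from the list, pull it; if the curl fails, start `ollama serve` or fix `OLLAMA_BASE_URL`.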

👋 Sharing With Another User

For the shortest setup path when sharing this repository:

  1. Install Node.js 22 or newer.
  2. Run npm install.
  3. Run npm run claw-dev.
  4. Choose a provider.
  5. Supply credentials or run Ollama locally.

A separate global installation of the bundled client is not required.

✅ Verification

Run these commands for useful checks:

npm run check
npm run build
npm run claw-dev -- --version

🔗 References

Official documentation and resources for this setup:

  • Node.js: https://nodejs.org
  • Ollama: https://ollama.com
  • Anthropic API: https://docs.anthropic.com
  • Google Gemini API: https://ai.google.dev
  • Groq: https://console.groq.com