OpenWork: The Open-Source AI Desktop Agent Taking on Claude Cowork


    Last month Anthropic quietly shipped Cowork, an agentic desktop feature that lets Claude handle real tasks on your computer. It drew a wave of coverage; every major outlet wrote about it.

    Meanwhile, a small team called different-ai pushed something to GitHub that does roughly the same thing, runs entirely on your machine, costs nothing if you want it to, and has been getting zero mainstream coverage. It’s called OpenWork.

    That gap is worth paying attention to.

    What the different-ai Team Actually Built

    OpenWork is a desktop agent app. You open it, connect a model — could be Claude via your own API key, could be GPT-4o, could be a local Mistral running through Ollama — and you have a working agent environment that can execute tasks, connect to external tools, and run workflows.

    It’s built on top of OpenCode, which handles the heavy lifting for agentic execution. OpenWork wraps that with a proper GUI, adds MCP (Model Context Protocol) server support, and gives you three ways to run it: as a standard desktop app, as a local server, or through messaging integrations like Slack and Telegram.

    The bit that matters most: in Host Mode, the entire thing runs on 127.0.0.1. Nothing phones home.


    Technical Specs — What’s Actually Running Under the Hood

    This is the section most tech blogs skip. Here’s what OpenWork actually is at an architecture level.

    Runtime: Node.js (v18+), packaged via Electron for desktop. The agent core runs as a local HTTP server on a configurable port (default: 3000).

    Architecture type: Host Model — OpenWork spins up a contained workspace per session. The agent process lives inside that workspace, executes tasks, and shuts down cleanly when you close it. No persistent background process, no daemon running while you sleep.

    This is meaningfully different from tools like OpenClaw, which uses a Daemon architecture — an always-on background process that persists between sessions, monitors file changes, and can trigger autonomously. Daemon = more power, more resource use, always listening. Host = contained, session-scoped, lighter on RAM and CPU when idle.

    Neither is objectively better. Daemon architecture makes sense for CI-style automation where you want persistent watchers. Host architecture makes sense for on-demand workflows where you want predictable, controllable execution. OpenWork chose Host intentionally — it fits the “privacy-first, user-controlled” design philosophy.

    Model connections: Any OpenAI-compatible API endpoint. That covers Claude (via Anthropic API), GPT-4o, Gemini via proxy, and any local model served through Ollama or LM Studio.
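    As a concrete illustration: Ollama exposes an OpenAI-compatible endpoint at http://localhost:11434/v1, so a fully local connection needs only a base URL, a placeholder key (Ollama ignores it, but OpenAI-style clients require a non-empty value), and a model name. The field names below are a hypothetical sketch, not OpenWork's actual config schema:

```json
{
  "baseURL": "http://localhost:11434/v1",
  "apiKey": "ollama-placeholder",
  "model": "mistral:7b"
}
```

    Swapping in a cloud provider means changing the base URL and pasting a real API key; nothing else about the setup changes.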

    MCP support: Full MCP client. Connects to any MCP-compatible server via JSON config. Ships with no default servers — you bring your own.
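    MCP clients have broadly converged on the same mcpServers JSON shape for wiring up servers. A sketch assuming OpenWork follows that convention (the exact file location and schema are unverified), connecting the reference filesystem server:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
    }
  }
}
```

    Each entry names a server and tells the client how to launch it as a subprocess; the agent then discovers that server's tools automatically.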

    Storage: Local SQLite for session history. No telemetry. No analytics pings.


    Hardware Compatibility

    Running local models through Ollama changes your minimum requirements significantly versus cloud-API-only mode. Here’s what actually works:

    | Setup | Min RAM | GPU required | Recommended chip | Performance |
    | --- | --- | --- | --- | --- |
    | Cloud API only (Claude/GPT) | 4GB | No | Any modern CPU | Fast (model runs remotely) |
    | Ollama + Mistral 7B | 8GB | No | M1 / Ryzen 5 5000+ | Good for most tasks |
    | Ollama + Llama 3 8B | 16GB | Recommended | M2 / RTX 3060+ | Solid local performance |
    | Ollama + Llama 3 70B | 32GB+ | Required | M2 Ultra / RTX 4090 | Fast only with a strong GPU |
    | Ollama + CodeLlama 34B | 24GB | Recommended | M3 Pro / RTX 3090 | Best for code tasks |

    Apple Silicon note: M1/M2/M3 chips handle local models unusually well because the unified memory architecture lets the GPU and CPU share the same RAM pool. A MacBook Pro M2 with 16GB runs Llama 3 8B comfortably. On Windows/Linux you need dedicated VRAM — system RAM alone is significantly slower.

    The realistic minimum for most people: 8GB RAM, any chip made after 2020, cloud API mode. You don’t need a beast machine to get value from OpenWork. The local model path is optional, not required.
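    The table above can double as a pre-flight check before you pull a model. A minimal shell sketch using the article's minimums (the model labels are shorthand introduced here, not official Ollama tags, and the thresholds are guidelines rather than hard limits — quantized variants need less):

```shell
# Map a model label to the minimum RAM (GB) from the compatibility table.
min_ram_gb() {
  case "$1" in
    cloud-api)      echo 4  ;;
    mistral-7b)     echo 8  ;;
    llama3-8b)      echo 16 ;;
    codellama-34b)  echo 24 ;;
    llama3-70b)     echo 32 ;;
    *)              echo "unknown model: $1" >&2; return 1 ;;
  esac
}

# Example: look up the minimum before deciding whether to run locally.
# (Compare against `free -g` on Linux or `sysctl hw.memsize` on macOS.)
need=$(min_ram_gb llama3-8b)
echo "llama3-8b needs at least ${need}GB RAM"
```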


    Install It Right Now — Here’s How

```shell
git clone https://github.com/different-ai/openwork
cd openwork
npm install
npm run dev
```

    Or grab the pre-built desktop release from the GitHub releases page. First-run setup asks you to connect a model: point it at Ollama for fully local operation, or paste an API key for Claude/OpenAI. You're up and running in under five minutes.

    The Cowork Comparison Nobody Is Writing

    Anthropic’s Cowork works well. That’s worth saying. If you pay for Claude Max and you’re on macOS, it’s a polished experience backed by infrastructure that just works.

    But three hard constraints:

    Your data goes through Anthropic’s servers. Always. No local execution mode. For anyone under an NDA, handling client work, or building something genuinely proprietary, that’s not a minor footnote.

    You’re on one model. Claude. OpenWork lets you swap models per task — GPT-4o, local Llama, whatever fits.

    The price is $100–200 a month. OpenWork with Ollama is zero. With your own API key it runs well under $20 a month for most individual developers.
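    The "well under $20 a month" claim is easy to sanity-check with back-of-envelope arithmetic. The token volumes and per-million-token prices below are illustrative assumptions for a moderately heavy individual user, not quotes from any provider's current price list:

```shell
# Rough monthly API cost: tokens used (in millions) times price per million.
input_mtok=2      # ~2M input tokens/month (assumed)
output_mtok=0.5   # ~0.5M output tokens/month (assumed)
price_in=3        # $/M input tokens (illustrative)
price_out=15      # $/M output tokens (illustrative)

cost=$(awk -v i="$input_mtok" -v o="$output_mtok" \
           -v pin="$price_in" -v pout="$price_out" \
           'BEGIN { printf "%.2f", i * pin + o * pout }')
echo "Estimated monthly cost: \$${cost}"
```

    Under these assumptions the total lands around $13.50 — an order of magnitude below a $100–200 subscription, and with Ollama the marginal cost drops to zero.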

    Where It Falls Short — Honestly

    Documentation is patchy. Some README sections assume MCP familiarity that most non-developers don't have. The active branch is dev, not main; open issues pile up, Windows support is rougher than on macOS, and there's no official support channel beyond GitHub discussions.

    This is a serious open-source project with product ambitions. Not a finished product.

    Why It Matters Beyond the Tool Itself

    Every major AI capability that Anthropic, OpenAI, or Google ships gets open-sourced by someone within months. Cowork launched, and an open-source equivalent appeared almost immediately. That pattern is accelerating.

    For developers, the gap between what you can afford and what frontier labs charge is narrowing fast. OpenWork today is rough. In six months, with community contributions and clear user demand for local-first AI tooling, it gets harder to ignore.

    Rohit

    Rohit Kumar is an experienced tech expert and content creator who simplifies technology. Through his website, he provides insightful articles, practical tips, and expert analysis on mobile specs, PC/laptop news, and how-to guides, empowering users to make informed tech decisions.
