# ChatML v0.1.0: Open Source AI Development with Parallel Sessions
Download ChatML v0.1.0 free: open source macOS app for parallel AI coding sessions. Run multiple Claude agents in isolated git worktrees.
ChatML Team
ChatML is a free, open source, native macOS application for AI-assisted software development. It runs multiple Claude sessions in parallel, each in an isolated git worktree, so you can build features, review code, and fix bugs simultaneously --- without the agents stepping on each other. It is GPL-3.0 licensed, local-first, and built with Go, React, Rust, and Node.js. Version 0.1.0 is available to download now.
## What's in v0.1.0
This is the first public release. Everything listed here is shipping today.
- **Parallel AI sessions with isolated git worktrees.** Each session gets its own branch and its own working directory. Agents cannot interfere with each other. Run as many as your machine can handle.
- **Real-time diff visualization across all sessions.** Watch every file change as it happens. See exactly what each agent is writing, deleting, and modifying --- live, not after the fact.
- **Built-in terminal for each session.** Every session has its own terminal scoped to its worktree. Run tests, install dependencies, inspect output --- all without leaving ChatML.
- **Plan mode for complex, multi-step changes.** For larger tasks, ask the agent to plan first. Review the approach before any code gets written. Approve, revise, or redirect.
- **AI code review with inline comments.** Point the agent at a diff or a PR and get line-by-line review comments. It reads the code, understands the context, and leaves feedback where it matters.
- **Streaming output from Claude in real time.** No waiting for a full response to render. You see the agent's thinking and output as it streams, token by token. We built a custom WebSocket architecture to keep latency under 50ms.
- **Skills system.** Commit, review PRs, run tests, create branches --- common development actions packaged as repeatable skills the agent can execute. More shipping soon.
- **Encrypted credential storage.** Your API keys are stored encrypted on disk using your system keychain. They are never written in plain text, never logged, never sent anywhere except the Claude API.
- **GitHub OAuth integration for PR management.** Authenticate once and manage pull requests directly from ChatML. Create, review, merge --- no browser tab required.
- **Keyboard-first interface with customizable shortcuts.** Navigate sessions, trigger actions, and manage your workflow without reaching for the mouse. Shortcuts are remappable.
- **Native macOS app, approximately 15MB.** Built with Tauri 2 instead of Electron. No bundled Chromium. No 300MB runtime. A real native app that respects your system resources.
- **GPL-3.0 licensed --- the whole thing.** No enterprise tier. No premium features behind a paywall. No "open core" bait-and-switch. The entire application, every line of code, is GPL-3.0 licensed --- and copyleft means it stays that way. We wrote about why we made that choice.
## Getting Started in 3 Steps
Step 1: Download. Grab the latest release from the download page. We ship universal macOS binaries --- Apple Silicon and Intel both supported.
Step 2: Add your Claude API key. On first launch, ChatML asks for your Anthropic API key. Enter it once. It gets stored encrypted in your system keychain. That is the only setup.
Step 3: Open a project, create a session, and go. Point ChatML at any git repository on your machine. Create a new session. Give it a task. The agent starts working.
Here is what it looks like in practice. You open your project and create a new session with the prompt "Add user authentication with JWT tokens." The agent creates a worktree, checks out a branch, and starts implementing. While it is working, you create a second session: "Write API integration tests for the search endpoint." That agent spins up in its own worktree and starts working too. Both run simultaneously. When they finish, you review two clean diffs on two clean branches and merge them independently.
That is the whole point. Parallel, not sequential. We wrote about the problem with sequential AI development on day one of this project. This release is our answer to it.
## The Numbers
A few stats for the skeptics.
- ~15MB app size. The full application, installed and ready to run.
- Sub-50ms streaming latency. From the Claude API to your screen. The bottleneck is the network, not the app.
- 10+ concurrent sessions on a standard MacBook. We have tested with twelve on an M2 MacBook Pro with 16GB of RAM. Each session is a lightweight worktree and a WebSocket connection, not a heavy process.
- 0 bytes of your code sent anywhere except the Claude API. ChatML is local-first. Your code lives on your machine. The only network traffic is between you and Anthropic's API.
- 100% open source, GPL-3.0 licensed. Verify it yourself. The repository is public.
## How ChatML Compares
| Feature | ChatML | Cursor | Claude Code | GitHub Copilot |
|---|---|---|---|---|
| Parallel AI sessions | Yes (unlimited) | No | No | No |
| Git worktree isolation | Yes (automatic) | No | No | No |
| Open source | Yes (GPL-3.0) | No | No | No |
| Real-time diff streaming | Yes | Partial | No | No |
| Built-in terminal per session | Yes | No | Yes (CLI) | No |
| AI code review | Yes (3 depths) | No | No | No |
| App size | ~15 MB | ~400 MB | CLI | Extension |
| Price | Free | $20/mo+ | Pay per token | $10/mo+ |
## How We Got Here
We started building ChatML on January 17th because we were frustrated. Every AI coding tool we used --- Cursor, Claude Code, Copilot --- forced us into the same sequential workflow: give the agent a task, wait for it to finish, review, repeat. One task at a time. One agent at a time. We knew there had to be a better way.
We wrote about the problem first, before writing any code. Then we designed an architecture. It became clear quickly that no single language could handle everything we needed, so we built a polyglot stack --- Go for the backend, React for the UI, Rust via Tauri for the native shell, and Node.js for the agent runner. We went deep on git worktrees, the most underused feature in git and the foundation of our isolation model. We built a real-time streaming pipeline to get agent output to the UI with minimal latency. We chose Tauri 2 over Electron to keep the app small and fast. And we open-sourced everything under GPL-3.0 because we believe the best developer tools are built in the open.
This release is the result of six weeks of building. It is a v0.1.0, not a v1.0. There are rough edges. There are features missing. But the core workflow --- parallel AI sessions in isolated worktrees, with real-time visibility into what each agent is doing --- that works, and it works well.
## What's Next
Here is what we are working toward. These are plans, not promises. We ship based on what the community tells us matters most.
- Windows and Linux support. macOS is first because Tauri 2 on macOS is the most mature. Windows and Linux builds are coming.
- Additional AI providers. OpenAI, Gemini, and local models via Ollama. Claude is excellent for agentic coding, but you should be able to choose.
- Team collaboration features. Shared sessions, shared worktree configurations, team-level settings. We are designing this carefully to avoid adding complexity for solo developers.
- More built-in skills. The skills system is extensible by design. Expect more out-of-the-box actions for common workflows.
- Plugin and extension system. A public API for building your own skills, integrations, and UI extensions. We want ChatML to be a platform, not just an app.
The changelog has the full list of what shipped in v0.1.0 and will track everything going forward.
## How to Contribute
This is an open source project and we want your involvement. Here is how.
- Download ChatML and try it on your own projects. The best feedback comes from real usage.
- Star the repository on GitHub. It helps more people find the project.
- Join the community. File bugs, request features, ask questions. Every issue helps us prioritize.
- Pick up a "good first issue." We label beginner-friendly issues in the repository. If you want to contribute code, that is the place to start.
- Read the contributing guide in the repository for setup instructions, coding standards, and PR guidelines.
- Check out our workflow guide to see what is possible with parallel AI sessions and how to get the most out of ChatML.
We built this because we needed it. We open-sourced it because we think you might need it too. Version 0.1.0 is just the beginning.
Download ChatML and let us know what you think.