
We Built a Desktop AI App with Tauri 2 Instead of Electron

Tauri 2 gave us a 15MB app instead of 200MB, native terminals, and encrypted storage. But the trade-offs are real.

tauri, electron-alternative, desktop-app, rust, native-app, developer-tools

ChatML Team

Last month, we started building ChatML -- a native macOS app that orchestrates parallel AI coding sessions using git worktrees. The full story of how we built it -- 750+ pull requests, all written by AI -- is worth reading for context. The first architectural decision we had to make was the one every desktop app team faces in 2026: Electron or something else.

We chose Tauri 2. It was the right call for our specific situation, but it wasn't an obvious one, and the trade-offs have been real. This is the honest field report for anyone evaluating Tauri 2 vs Electron for a production desktop app in 2026: what worked, what bit us, and what we'd tell another engineering team facing the same decision.

Why Tauri 2 (and what we gave up)

The pitch for Tauri is straightforward. Instead of bundling an entire Chromium browser and Node.js runtime with your app (which is what Electron does), Tauri uses the operating system's native webview -- WebKit on macOS, WebView2 on Windows -- and a Rust backend process. The result: dramatically smaller app bundles and lower resource usage.

The numbers for ChatML tell the story. Our Tauri desktop app ships at roughly 15MB. An equivalent Electron app would start at 150-200MB before you add any application code, because Chromium alone is about 120MB. On memory, ChatML's base footprint sits around 80-120MB of RAM. Electron apps routinely consume 300-500MB just idling, because you're running a full browser engine.

For a developer tool that runs alongside VS Code, a browser with thirty tabs, Docker, and whatever else engineers keep open, that resource difference matters. We're not building a standalone consumer app that has the machine to itself. We're building something that has to coexist quietly with a developer's entire workflow.

                      Tauri 2 (ChatML)                    Electron
Bundle size           ~15 MB                              150-200 MB
RAM at idle           80-120 MB                           300-500 MB
Backend language      Rust + Go sidecar                   Node.js
Rendering engine      System WebView (WebKit/WebView2)    Bundled Chromium
Native OS APIs        Direct via Rust                     Via Node.js addons
Encrypted storage     Stronghold (built-in)               Requires addon
Ecosystem size        Growing (100s of plugins)           Mature (1000s of packages)

What we gave up is not trivial. We lost access to Node.js APIs in the main process, which means no fs, no child_process, no net -- none of the server-side Node.js ecosystem that Electron apps get for free. We lost the massive Electron plugin ecosystem. We lost the ability to use Chrome DevTools in production builds for debugging. And we lost the guarantee that our webview behaves identically across platforms, since WebKit and Chromium have subtle rendering and API differences.

We accepted those trade-offs because of what we gained: a Rust backend with native OS integration, dramatically lower resource usage, and a smaller attack surface. For the full picture of why we ended up with four different languages in one app, see our architecture writeup.

The sidecar pattern

Here's the thing about Tauri that isn't immediately obvious: the native process is Rust. That's great for system-level work, but our application backend is written in Go. The session orchestration, worktree lifecycle, git operations, AI agent management -- all Go. We weren't going to rewrite that in Rust, and we didn't need to.

Tauri has a first-class concept called sidecars: external binaries that get bundled with the app and managed as child processes. On launch, Tauri spawns our Go binary as a sidecar. The Go process starts its HTTP and WebSocket servers on localhost. The React frontend communicates with the Go backend over those local connections, and the Rust layer handles everything that needs native OS access.

[React UI] <--WebSocket/HTTP--> [Go Backend (sidecar)] <---> [Claude Agent SDK]
     |
     +--- [Rust/Tauri] --- native OS integration, PTY, file watching

The sidecar binary is compiled for the target architecture -- arm64 for Apple Silicon, x64 for Intel Macs. Tauri's build system handles bundling the correct binary for each target. On startup, the Rust process spawns the Go sidecar, waits for it to signal readiness via a health check endpoint, and then loads the frontend.
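The startup handshake -- spawn the sidecar, poll a health check, then load the frontend -- can be sketched as a small retry helper. This is an illustrative sketch, not ChatML's actual code; the probe function, timeout, and interval values are assumptions.

```typescript
// Poll a readiness probe until it succeeds or a deadline passes.
// `check` is any async probe, e.g. a fetch against a hypothetical
// http://localhost:<port>/healthz endpoint exposed by the sidecar.
async function waitForReady(
  check: () => Promise<boolean>,
  { timeoutMs = 10_000, intervalMs = 100 } = {}
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      if (await check()) return; // backend answered: safe to load the UI
    } catch {
      // connection refused while the sidecar is still booting: keep polling
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  throw new Error("sidecar did not become ready in time");
}
```

Keeping the probe injectable also makes the handshake easy to test without a real server.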

This pattern works well. The Go backend doesn't know or care that it's running inside a Tauri app. It's just a server binary. We can develop and test it independently, run it in a terminal during development, and deploy it as a standalone CLI if we want to. The coupling between the Rust shell and the Go backend is intentional and minimal: process lifecycle and a localhost network boundary.

If you're curious about the streaming architecture between these layers, we covered the WebSocket design in detail in our streaming architecture post.

PTY terminal integration

ChatML provides a built-in terminal for each session. When the AI agent runs shell commands -- installing dependencies, running tests, executing migrations -- you see the output in real time, rendered in the app. This isn't a fake terminal emulation. It's a real PTY.

PTY stands for pseudo-terminal. It's the Unix abstraction that gives a process a terminal interface -- the same thing that iTerm2 or Terminal.app creates when you open a new tab. We allocate PTYs on the Rust side using native macOS APIs. Each session gets its own PTY, which means each AI agent has its own isolated terminal environment with its own shell, its own environment variables, its own working directory.

The PTY output stream -- raw bytes including ANSI escape codes for colors, cursor movement, bold text, all of it -- gets captured by the Rust process and forwarded to the React frontend over a WebSocket channel. The frontend uses a terminal emulator component (xterm.js) to render the output exactly as it would appear in your native terminal. Colors work. Progress bars work. Interactive prompts work (though the AI agent rarely needs them).

The challenge was edge cases. Terminal resize events need to propagate from the React UI through the Rust layer to the PTY so that programs like top or vim (if the agent opens them) render correctly. Different macOS versions have slightly different PTY behaviors. Shell initialization files (.zshrc, .bash_profile) can do surprising things when they detect they're running in a non-standard terminal. We spent more time on PTY reliability than we expected, but the result is a terminal experience that developers trust because it behaves like the terminals they already use.
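The resize propagation above can be illustrated with a tiny wire format between the UI and the PTY owner. The message shape and bounds below are hypothetical, not ChatML's actual protocol; the clamping guards against the transient 0x0 layouts that full-screen programs handle badly.

```typescript
// Hypothetical resize message carried from the React UI to the Rust layer.
interface ResizeMsg {
  type: "resize";
  sessionId: string;
  cols: number;
  rows: number;
}

function encodeResize(sessionId: string, cols: number, rows: number): string {
  // Clamp to sane bounds so a transient 0x0 layout never reaches the PTY.
  const clamp = (v: number, lo: number, hi: number) =>
    Math.min(hi, Math.max(lo, Math.floor(v)));
  const msg: ResizeMsg = {
    type: "resize",
    sessionId,
    cols: clamp(cols, 2, 1000),
    rows: clamp(rows, 2, 1000),
  };
  return JSON.stringify(msg);
}

function decodeResize(raw: string): ResizeMsg | null {
  const msg = JSON.parse(raw);
  return msg?.type === "resize" ? (msg as ResizeMsg) : null;
}
```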

Encrypted credential storage

Users store API keys in ChatML -- their Claude API key, GitHub tokens, and other service credentials. Getting this wrong would be a deal-breaker for a developer tool, so we invested heavily in doing it right.

Tauri provides a plugin called Stronghold, which is a purpose-built encrypted storage system. Here's how it works in ChatML:

  1. Key derivation: The encryption key is derived using Argon2id, a memory-hard key derivation function designed to resist brute-force attacks. The input to the KDF comes from the macOS system keychain, which means the encryption key is tied to the user's macOS account.
  2. Storage: Credentials are stored in a binary vault file -- a Stronghold snapshot. This file is encrypted at rest. It's not JSON, not YAML, not a SQLite database. It's a custom binary format designed specifically for secret storage.
  3. Access: When ChatML needs a credential (for example, to initialize a Claude agent), the Rust layer reads it from the Stronghold vault, decrypts it in memory, and passes it to the Go sidecar. The plaintext credential exists only in process memory, never on disk.

What this means in practice: if someone copies your ChatML data directory, they get an encrypted binary file they can't read. If they examine environment variables, there's nothing there. If they grep your filesystem for API keys, they won't find them. The credentials are locked to your macOS user account via the keychain integration.

This is one area where Tauri's Rust foundation pays dividends. Stronghold is written in Rust, with careful memory handling to minimize the window during which decrypted secrets exist in memory. Achieving equivalent security in Electron would require either a native Node.js addon or shelling out to a system keychain tool, both of which add complexity and potential failure modes.

Deep links for OAuth

GitHub integration is core to ChatML -- creating branches, opening PRs, reviewing diffs. The OAuth flow requires redirecting the user's browser to GitHub, having them authorize the app, and then redirecting back to ChatML with an authorization code.

That "redirect back" step is where desktop apps get complicated. There's no URL to redirect to. You're not a website.

Tauri 2's deep link plugin solves this by registering a custom URL scheme. We register chatml:// as a protocol handler on macOS. When the OAuth flow completes, GitHub redirects the browser to chatml://auth/callback?code=..., and macOS routes that URL to our app. The Rust layer intercepts it, extracts the authorization code, and passes it to the Go backend to complete the token exchange.

The implementation required handling a few non-obvious cases. The app might not be focused when the redirect arrives. The user might have multiple instances running (we prevent this, but we need to handle the attempt gracefully). There's a race condition window where the redirect could arrive before the app has finished initializing its OAuth state -- we handle this with a short-lived in-memory queue that buffers deep link events until the app is ready to process them.
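The two pieces of that handling -- extracting the authorization code from the custom-scheme URL, and buffering deep-link events until the app is ready -- can be sketched as follows. This is a simplified illustration; the exact paths and queue policy are assumptions.

```typescript
// Extract the OAuth authorization code from a chatml:// deep link.
// For a custom scheme, "chatml://auth/callback" parses with host "auth"
// and pathname "/callback" under the WHATWG URL rules.
function extractAuthCode(deepLink: string): string | null {
  const url = new URL(deepLink);
  if (url.host !== "auth" || url.pathname !== "/callback") return null;
  return url.searchParams.get("code");
}

// Buffer deep-link events that arrive before OAuth state is initialized.
class DeepLinkQueue {
  private pending: string[] = [];
  private handler: ((link: string) => void) | null = null;

  push(link: string): void {
    if (this.handler) this.handler(link);
    else this.pending.push(link); // app not ready yet: hold the event
  }

  // Called once OAuth state is initialized; drains anything buffered.
  setHandler(fn: (link: string) => void): void {
    this.handler = fn;
    for (const link of this.pending.splice(0)) fn(link);
  }
}
```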

URL scheme registration on macOS happens at install time via the Info.plist. Tauri's build system handles this, but we had to verify that scheme registration survives app updates, which it does as long as the bundle identifier doesn't change.

File watching at scale

Each ChatML session monitors its worktree for file changes. When the AI agent creates a file, modifies a module, or deletes a test fixture, the UI needs to reflect that change immediately. This means running recursive file system watchers on each active worktree directory.

With five concurrent sessions -- a typical workload for a productive morning -- that's five recursive watchers, each monitoring a full project directory tree. On a medium-sized project with 10,000 files, that's 50,000 watched paths.

On macOS, this is manageable because FSEvents (the native file system event API) is efficient at recursive watching. It operates at the directory level, not the file level, and the kernel coalesces events. Tauri's file watcher plugin wraps FSEvents and provides a Rust API for registering watchers and receiving events.

We added aggressive debouncing -- 100ms by default -- to prevent event storms. When the AI agent writes a file, saves it, and the build tool immediately recompiles, we might get three or four events for what is logically one change. Debouncing collapses these into a single UI update.

The harder problem is filtering. We need to ignore node_modules, .git directories, dist folders, build artifacts, and whatever else is in the project's .gitignore. We parse the .gitignore file and apply its rules to incoming events, which sounds simple but gets complicated with nested .gitignore files, negation patterns, and directory-only rules.

We also hit edge cases with git worktree internal files. When git updates its worktree metadata (files inside .git that track worktree state), those changes would trigger our watcher and cause unnecessary UI updates. We added explicit exclusions for git internal paths, which eliminated the noise.

Symlinks were another source of surprises. If a project uses symlinked directories (common in monorepos), FSEvents can report changes using either the symlink path or the real path, depending on how the change was triggered. We normalize all paths to their real (resolved) paths before processing events.
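The normalization step looks roughly like the sketch below. Our production version lives on the Rust side; this Node rendering only illustrates the idea, and the delete-event fallback is an assumption about how missing leaves might be handled.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Normalize a file-system event path to its fully resolved form so that
// events reported via a symlink and via the real path collapse to one key.
function normalizeEventPath(p: string): string {
  try {
    return fs.realpathSync(path.resolve(p));
  } catch {
    // The file may already be gone (delete events); resolve the parent
    // directory instead and re-attach the leaf name.
    const dir = path.dirname(p);
    try {
      return path.join(fs.realpathSync(path.resolve(dir)), path.basename(p));
    } catch {
      return path.resolve(p); // best effort
    }
  }
}
```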

Content Security Policy

Tauri enforces a strict Content Security Policy by default. This is a good security practice -- it prevents XSS attacks, restricts what the webview can load, and limits script execution to trusted sources. But it caused friction during development that we didn't anticipate.

Three specific issues:

WebSocket connections to localhost. Our React frontend connects to the Go backend over WebSocket on ws://localhost:{port}. The default CSP blocks WebSocket connections. We had to explicitly allow connect-src ws://localhost:* http://localhost:* in the Tauri configuration.

Dynamic style injection. Our syntax highlighting library injects CSS dynamically to apply code themes. The default CSP blocks inline styles. We had to either add style-src 'unsafe-inline', which is more permissive than we'd like, or switch to a build-time CSS extraction approach. We opted for 'unsafe-inline' with a plan to revisit.

Local asset loading. Loading images, fonts, and other assets from the app bundle requires configuring the CSP to allow the Tauri asset protocol (asset://localhost). This wasn't well-documented when we started, and we spent time debugging blank images before finding the right incantation.
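Taken together, the three allowances end up in the app's CSP string. The fragment below is an illustrative sketch under Tauri 2's app.security.csp layout, not our exact production config; the asset protocol sources in particular can vary by platform.

```json
{
  "app": {
    "security": {
      "csp": "default-src 'self'; connect-src 'self' ws://localhost:* http://localhost:*; style-src 'self' 'unsafe-inline'; img-src 'self' asset: asset://localhost; font-src 'self' asset: asset://localhost"
    }
  }
}
```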

The net result is that our CSP is more carefully considered than it would have been in Electron, where the defaults are more permissive. Tauri's strict defaults forced us to think about every external connection, every dynamic resource, every script source. The app is more secure for it. But the developer experience of figuring out why something silently fails because of CSP -- with no error in the console because the console is the webview's inspector, which behaves differently from Chrome DevTools -- was genuinely frustrating.

What Tauri still needs

We shipped ChatML on Tauri 2 and we'd make the same choice again. But we'd be dishonest if we didn't catalog the rough edges.

The ecosystem is smaller. Electron has thousands of community packages. Need screen recording? There's an Electron package. Need system tray with custom menus? Multiple options. Tauri's plugin ecosystem is growing, but it's maybe a tenth of Electron's. For common needs (file dialogs, notifications, clipboard) Tauri's official plugins are solid. For niche requirements, you're writing Rust yourself.

Documentation has gaps. The core documentation is good, but advanced use cases -- particularly around sidecar management, custom protocol handlers, and multi-window setups -- required reading source code and GitHub issues. The Tauri Discord is active and helpful, but "ask on Discord" shouldn't be a documentation strategy.

Some plugins are still catching up to v2. Tauri 2 was a significant rewrite from Tauri 1, and not all community plugins have been updated. We encountered abandoned Tauri 1 plugins that looked perfect for our needs but didn't work with Tauri 2. In each case, we either found an alternative or wrote the functionality ourselves in Rust.

Auto-update requires careful signing. macOS requires code signing and notarization for auto-update to work reliably. Tauri's updater plugin handles the mechanics, but the signing setup -- developer certificates, provisioning profiles, notarization workflows -- is manual and poorly documented. We spent two full days getting auto-update working reliably across both Intel and Apple Silicon Macs.

Stack Overflow coverage is thin. When you hit an obscure error in Electron, there's usually a Stack Overflow answer from 2019 that's still relevant. With Tauri, you're often the first person to encounter a specific issue, or at least the first to document it publicly. This is getting better as adoption grows, but it's a real productivity cost today.

If you need Chrome-specific APIs, use Electron. WebKit (Safari's engine) doesn't support every Web API that Chrome does. We haven't hit a blocker, but we've worked around WebKit limitations in areas like ResizeObserver timing and Clipboard API behavior. If your app depends on Chrome-specific Web APIs or extensions, Electron is the pragmatic choice.

The honest recommendation

If you're building a developer tool, a utility, or any desktop app where resource efficiency matters and you're comfortable with Rust (or willing to learn it), Tauri 2 is a serious contender. The 15MB bundle, the low memory footprint, the Rust backend for native integration -- these aren't just nice numbers. They translate to an app that developers don't resent having open.

If you need the full Node.js ecosystem in your main process, if your team has deep Electron experience, or if you're targeting platforms where WebView2 or WebKit coverage is uncertain, Electron remains the safer bet. It's battle-tested, extensively documented, and has a solution for almost every problem you'll encounter.

For ChatML, the decision came down to this: we're building a tool that runs all day on a developer's machine, alongside their IDE, their browser, their Docker containers, and whatever else they need. Every megabyte of RAM we consume is a megabyte stolen from their actual work. Tauri 2 let us build a capable, native-feeling desktop app that's a good citizen on the developer's machine. The trade-offs were real, but they were the right trade-offs for our specific situation.

ChatML is GPL-3.0 licensed and open source. If you want to see what a production Tauri 2 app looks like in practice -- the sidecar pattern, PTY integration, encrypted storage, all of it -- the code is there.

Download ChatML and see for yourself.
