Your local AI stack, ready in minutes
Download, set up, and start using Opta CLI and Opta LMX on your machine. No API keys. No cloud. Just fast, private inference.
Download your stack
Two tools. One local AI stack. Both free, both open.
Opta CLI
AI coding agent
The command-line interface for AI-powered coding. Ask it to build features, fix bugs, explain codebases, or write tests — all running locally on your machine.
Opta LMX
Local inference engine
Runs large language models on Apple Silicon via MLX. Serves a local OpenAI-compatible API endpoint so any app can use your private models — no cloud, no API key.
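Because the endpoint is OpenAI-compatible, existing client code only needs a base-URL change to talk to LMX. A minimal sketch of such a request, assuming the standard `/v1/chat/completions` path and the `kimi-k2-5` model name used later in this guide (both are assumptions, not confirmed LMX specifics):

```python
import json
import urllib.request

# Standard OpenAI-style chat payload; the model name is an assumption.
payload = {
    "model": "kimi-k2-5",
    "messages": [{"role": "user", "content": "Explain this function in one sentence."}],
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",  # local LMX server, no API key
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once Opta LMX is running locally:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI SDK pointed at `http://localhost:1234/v1` should work the same way.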
Be first to download
Drop your email and we’ll notify you the moment installers ship.
Set up in minutes
Step-by-step instructions for each tool. Pick your starting point.
Install Opta CLI
Download and run the installer for your platform, or install via Homebrew.
brew install optaops/tap/opta-cli
Configure your endpoint
Point Opta CLI at your local LMX server running on port 1234.
opta config set endpoint http://localhost:1234
opta config set model kimi-k2-5
Run your first command
Ask Opta CLI to fix a bug, write a test, or explain a function.
opta "fix the authentication bug in src/auth.ts"
Get the most out of your stack
Tips and workflows from power users running the full Opta stack.
Use Kimi K2.5 for coding
22 tok/s on M3 Ultra
Kimi K2.5 delivers the best results for code generation, debugging, and refactoring tasks. At 22 tokens per second on M3 Ultra, it feels instant — no more waiting for API responses.
Let the router decide
Smart model routing
Opta LMX includes an intelligent router that picks the right model for each request — fast models for simple queries, capable models for complex reasoning. Enable it with `opta-lmx router on`.
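The routing idea, a fast model for simple requests and a stronger model for complex ones, can be sketched with a simple heuristic. This is an illustrative stand-in, not LMX's actual routing logic; the small-model name, keyword list, and length threshold are all assumptions:

```python
# Illustrative router sketch -- not Opta LMX's real implementation.
FAST_MODEL = "qwen2.5-7b"    # hypothetical small model
STRONG_MODEL = "kimi-k2-5"   # stronger model named in this guide

COMPLEX_HINTS = ("refactor", "debug", "architecture", "test suite")

def pick_model(prompt: str) -> str:
    """Route long or complexity-hinting prompts to the strong model."""
    lowered = prompt.lower()
    if len(prompt.split()) > 40 or any(hint in lowered for hint in COMPLEX_HINTS):
        return STRONG_MODEL
    return FAST_MODEL

print(pick_model("what does `ls -la` do?"))                 # routes to the fast model
print(pick_model("refactor the auth module into layers"))   # routes to the strong model
```

A real router can weigh latency, context length, and model load, but the request-level dispatch decision has this basic shape.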
Batch overnight
Queue tasks while you sleep
Got a big refactor or test suite to write? Queue it before you sleep. Opta CLI supports job queuing so your models can work through long tasks while you're away — no API rate limits, no cost per token.
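With no rate limits or per-token cost, a long batch is just a matter of sequencing work. A minimal sketch of an overnight queue, assuming you drive Opta CLI one task at a time (the task list is hypothetical, and the `opta` invocation is left commented so the sketch runs standalone):

```python
import subprocess  # used by the commented-out invocation below

# Hypothetical task list; each entry would become one `opta` run.
tasks = [
    "fix the authentication bug in src/auth.ts",
    "write unit tests for src/router.ts",
    "refactor src/db.ts to use the repository pattern",
]

completed = []
for task in tasks:
    # subprocess.run(["opta", task], check=True)  # enable with LMX running
    completed.append(task)

print(f"queued {len(tasks)} tasks, completed {len(completed)}")
```

Tasks run strictly in order, so a failure stops the queue early rather than burning compute on work that depended on it.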
More tips in the setup guide and the Opta CLI docs.
Ready to manage your models?
Open the Opta Local dashboard to download models, monitor inference performance, configure routing, and manage your full local AI stack — all from a clean UI.
Open Dashboard
Requires Opta LMX running locally