CLI Reference

The Opta CLI is your primary interface to the Opta Local stack. It provides interactive AI chat, autonomous task execution, model management, session control, and daemon lifecycle commands -- all from your terminal.

Overview

Opta CLI connects to the local daemon which orchestrates sessions, manages permissions, and proxies requests to LMX for inference on Apple Silicon. You can use it for quick questions, deep coding sessions, or fully autonomous multi-step tasks.

```
$ opta --help
Usage: opta [command] [options]

Commands:
  chat          Start interactive AI conversation
  do            Run autonomous agent task
  daemon        Manage the Opta daemon
  config        View and edit configuration
  models        Manage LMX models
  sessions      List and manage sessions
  status        Show stack health
  doctor        Diagnose and fix issues
```

Core Commands

| Command | Description |
| --- | --- |
| `opta chat` | Start an interactive AI conversation with streaming output |
| `opta do "task"` | Run an autonomous agent loop that completes a task |
| `opta daemon` | Start, stop, restart, or install the background daemon |
| `opta config` | View and modify CLI configuration settings |
| `opta models` | Load, swap, browse, and inspect LMX models |
| `opta sessions` | List, view, export, and delete conversation sessions |
| `opta status` | Display health of daemon, LMX, and connected services |
| `opta doctor` | Diagnose common issues and optionally apply fixes |

Two Modes

The CLI operates in two fundamental modes that serve different workflows. Understanding when to use each is key to getting the most from Opta.

Interactive Chat

opta chat opens a persistent, conversational session. You type messages, the model streams back responses in real time, and you can ask follow-up questions within the same context. Tool calls (file reads, writes, commands) require your explicit approval before executing.

```
# Start an interactive chat session
opta chat
```

Autonomous Do

opta do takes a natural-language task description and runs an agentic loop to completion. It auto-approves safe tool calls (file reads, searches) while still prompting for destructive operations (file writes, command execution). This mode is ideal for tasks like refactoring a module, writing tests, or generating documentation.

```
# Run an autonomous task
opta do "Add unit tests for the auth module"
```

Choosing a mode

Use chat when you want to explore, iterate, and steer the conversation. Use do when you have a well-defined task and want the AI to execute it with minimal interruption.

Platform Support


| Feature | Status |
| --- | --- |
| macOS (Apple Silicon) | Complete |
| macOS (Intel) | Complete |
| Linux (x86_64) | Complete |
| Windows (WSL2) | Partial |
| Windows (native) | Planned |
LMX requires Apple Silicon

The LMX inference server uses MLX and only runs on Apple Silicon Macs. The CLI and daemon work on all supported platforms and can connect to a remote LMX instance over LAN.
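For example, a CLI on Linux or WSL2 can target a daemon running on the Mac that hosts LMX via the global `--host` flag. The address below is hypothetical; substitute the actual hostname or IP of your Mac, and note that the exact host/port convention is an assumption, not confirmed by this reference.

```shell
# Hypothetical LAN address of the Mac running the daemon and LMX;
# substitute your own hostname or IP.
LMX_HOST="mac-studio.local"

# Check stack health against the remote daemon. The check is guarded so
# the snippet is a harmless no-op on machines without opta installed:
if command -v opta >/dev/null 2>&1; then
  opta --host "$LMX_HOST" status
fi
```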

Global Flags

| Flag | Description |
| --- | --- |
| `--verbose`, `-v` | Enable verbose output and debug logging |
| `--json` | Output responses as JSON (useful for scripting) |
| `--host <addr>` | Override the daemon host address |
| `--version` | Print CLI version and exit |
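The `--json` flag makes command output machine-readable for scripting. This reference does not document the exact payload shape, so the field names below (`daemon`, `lmx`, `models_loaded`) are hypothetical placeholders; inspect the real output on your machine before relying on them. A minimal sketch of extracting one field without a `jq` dependency:

```shell
# Hypothetical payload shape for `opta status --json`; real field names
# may differ. In a live script you would capture it directly:
#   status_json=$(opta status --json)
status_json='{"daemon":"running","lmx":"connected","models_loaded":1}'

# Pull a single field out with python3:
daemon_state=$(printf '%s' "$status_json" \
  | python3 -c 'import json,sys; print(json.load(sys.stdin)["daemon"])')
echo "daemon: $daemon_state"
```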