Platform View

Documentation is adapted for macOS.

Shell commands, launchd paths, and Apple runtime guidance

CLI Version Context

Installed CLI: 1.1.0 (latest compatibility profile)

Support Bundle

Capture a complete diagnostics artifact for support triage.

macos
OUT=~/Desktop/opta-support-$(date +%Y%m%d-%H%M%S).txt
{
  echo "# Opta Support Bundle"
  echo "# Generated: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo
  opta doctor
  echo
  opta daemon status
  echo
  opta daemon logs --lines 200
  echo
  opta config list
} > "$OUT" && echo "Saved $OUT"

LAN Setup

Bind your CLI to the LMX inference endpoint on your local network. This guide covers host routing, credential handling, failover policy, and end-to-end connectivity validation.

Overview

The Opta CLI communicates with LMX over your local network using HTTP. LMX exposes an OpenAI-compatible API on a configurable port (default 1234). You need to tell the CLI where to find this server.
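Because the API surface is OpenAI-compatible, you can probe it with plain curl before involving the CLI at all. A minimal sketch, assuming the default host and port used throughout this guide:

```shell
# Build the base URL from the same values the CLI uses (defaults assumed
# from this guide; override via environment if yours differ).
LMX_HOST="${LMX_HOST:-lmx-host.local}"
LMX_PORT="${LMX_PORT:-1234}"
BASE_URL="http://${LMX_HOST}:${LMX_PORT}/v1"
echo "LMX base URL: ${BASE_URL}"

# /v1/models is part of the OpenAI-compatible surface; a JSON model list
# confirms the server is reachable. Uncomment on a machine on the same LAN:
# curl -sf "${BASE_URL}/models"
```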

Network requirement
Your workstation and LMX host must be on the same LAN. Opta Local uses direct LAN connections for speed and privacy -- never route through Tailscale or other overlay networks for local inference.

Configure LMX Host

Point the CLI at your LMX server by setting the host IP and port:

1

Set the LMX host address

Replace the value with your LMX server's local address: an mDNS hostname such as lmx-host.local, or your dedicated Apple Silicon host's static IP.

Set the LMX inference server IP address
opta config set connection.host lmx-host.local
2

Set the port (if non-default)

The default port is 1234. Only change this if your LMX instance is configured differently.

Set the LMX port (default: 1234)
opta config set connection.port 1234
3

Confirm the configuration

opta config get connection
connection.host    lmx-host.local
connection.port    1234

API Key Setup

LMX supports optional API key authentication. If your LMX instance requires a key, generate one and configure the CLI to use it.

1

Generate an API key

Creates a new API key and stores it securely in your system keychain.

opta key create
API key created and stored in keychain.
Key ID: opta_key_a1b2c3d4
Host:   lmx-host.local:1234
2

Verify the key is stored

opta key list
ID                  Host                    Created
opta_key_a1b2c3d4   lmx-host.local:1234     2026-03-01
Keychain storage
API keys are stored in your system credential store (macOS Keychain or Windows Credential Manager), not in plain text config files. This keeps your credentials secure even if your config directory is synced or backed up.
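On macOS you can confirm the key landed in the login keychain with the built-in security tool. This is a sketch: the service name "opta" is an assumption, so check opta key list for the real entry identifier before running it.

```shell
# Hypothetical service name -- the CLI's actual keychain entry may differ.
SERVICE="opta"
LOOKUP="security find-generic-password -s ${SERVICE} -w"
echo "to print the stored secret: ${LOOKUP}"
# Note: -w writes the secret to stdout; avoid running it in shared terminals.
```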

Failover Hosts

If you run multiple LMX instances (for example, a primary Apple Silicon host plus a high-memory backup), you can configure failover hosts. The CLI tries each host in order until one responds.

~/.config/opta/config.json
{
  "connection": {
    "host": "lmx-host.local",
    "port": 1234,
    "failover": [
      { "host": "lmx-backup-a.local", "port": 1234 },
      { "host": "lmx-backup-b.local", "port": 1234 }
    ],
    "timeout": 5000
  }
}

The CLI performs a lightweight health check against each host. If the primary host does not respond within the configured timeout (default 5 seconds), it automatically falls through to the next host in the list.
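The fall-through order can be pictured as a simple loop over the configured hosts. This is an illustrative sketch of the behavior, not the CLI's actual implementation; the SIMULATED_UP variable stands in for a real health check.

```shell
# Try each host:port in order; stop at the first one that responds.
# A real probe would be something like: curl -sf -m 5 "http://${hostport}/v1/models"
try_hosts() {
  for hostport in "$@"; do
    if [ "$hostport" = "${SIMULATED_UP:-}" ]; then
      echo "selected ${hostport}"
      return 0
    fi
    echo "skip ${hostport} (no response within timeout)"
  done
  echo "no host responded"
  return 1
}

# Primary down, first failover up -- mirrors the config above.
SIMULATED_UP="lmx-backup-a.local:1234"
try_hosts "lmx-host.local:1234" "lmx-backup-a.local:1234" "lmx-backup-b.local:1234"
# Prints:
#   skip lmx-host.local:1234 (no response within timeout)
#   selected lmx-backup-a.local:1234
```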

SSH Configuration

For remote operations like model management on the LMX host, configure SSH access. This allows the CLI to run administrative commands on the inference server directly.

~/.ssh/config
Host lmx-studio
    HostName lmx-host.local
    User matt
    IdentityFile ~/.ssh/id_ed25519
    ForwardAgent yes

Then configure the CLI to use this SSH alias:

Set the SSH host alias for remote model management
opta config set connection.sshAlias lmx-studio
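Before pointing the CLI at the alias, it is worth confirming that key-based SSH works non-interactively. A sketch, using the alias name from the config above:

```shell
# BatchMode prevents hanging on a password prompt if key auth is broken;
# ConnectTimeout bounds the wait if the host is unreachable.
SSH_CHECK="ssh -o BatchMode=yes -o ConnectTimeout=5 lmx-studio true"
echo "run: ${SSH_CHECK}"
# Exit status 0 means the alias resolves and key authentication succeeds.
```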

Verify Connection

With everything configured, verify the full connection path:

1

Check LMX connectivity

The status command pings the LMX health endpoint and reports the result.

opta status
CLI:    v1.1.0
Daemon: stopped
LMX:    connected (lmx-host.local:1234)
  Model: Qwen3-30B-A3B (loaded)
  VRAM:  42.1 / 192.0 GB
2

Run a quick health check

The doctor command now validates the LMX connection alongside other checks.

opta doctor
Opta Doctor
-----------
  Node.js     v22.12.0           ok
  npm         10.9.0             ok
  Config dir  ~/.config/opta     ok
  Daemon      not running        (start with: opta daemon start)
  LMX host    lmx-host.local     ok
  LMX health  200 OK             ok
  LMX model   Qwen3-30B-A3B      loaded
3

Test inference directly

Send a quick test prompt to confirm end-to-end inference works.

opta chat --once "Say hello in one sentence."
Hello! I'm your local AI assistant, running entirely on your hardware.
Daemon not required for this step
You do not need to start the daemon to test LMX connectivity. The opta status and opta chat --once commands can connect to LMX directly. The daemon adds session persistence, permissions, and tool orchestration on top.

Troubleshooting

Connection refused

If opta status shows "connection refused", verify that:

  • LMX is running on the target host (launchctl list | grep opta.lmx on a macOS host, or systemctl status opta-lmx on Linux)
  • The IP address and port are correct (opta config get connection)
  • No firewall is blocking port 1234 on the LMX host
  • Both machines are on the same LAN subnet
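To separate a firewall problem from an LMX problem, test raw TCP reachability first. A sketch using nc, which ships with macOS; host and port are the values assumed throughout this guide:

```shell
HOST="${HOST:-lmx-host.local}"
PORT="${PORT:-1234}"
echo "testing tcp ${HOST}:${PORT}"
# -z: scan without sending data, -w 3: three-second timeout. "port open"
# means the TCP layer is fine and any remaining failure is at the HTTP level.
# nc -z -w 3 "$HOST" "$PORT" && echo "port open" || echo "port closed or filtered"
```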

Timeout errors

If connections succeed but are slow or intermittent, increase the timeout:

Increase connection timeout to 10 seconds
opta config set connection.timeout 10000

DNS resolution issues

Prefer mDNS hostnames (for example lmx-host.local) when available, and keep a fixed-IP fallback configured for networks where mDNS is blocked or unreliable. Opta components support both patterns.
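For example, the failover list can pair an mDNS primary with a fixed-IP fallback. The address below is a placeholder; substitute your LMX host's actual static IP.

```json
{
  "connection": {
    "host": "lmx-host.local",
    "port": 1234,
    "failover": [
      { "host": "192.168.1.50", "port": 1234 }
    ],
    "timeout": 5000
  }
}
```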