LAN Setup

Connect your CLI to the LMX inference server running on your local network. This page walks you through configuring the host address, setting up API keys, and verifying the connection.

Overview

The Opta CLI communicates with LMX over your local network using HTTP. LMX exposes an OpenAI-compatible API on a configurable port (default 1234). You need to tell the CLI where to find this server.
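Because LMX speaks the OpenAI-compatible API, you can sanity-check the server with plain curl before touching any CLI config. A minimal sketch; the IP, port, and /v1/models route follow the defaults on this page and the OpenAI convention, so adjust for your setup:

```shell
# Build the base URL for the LMX OpenAI-compatible API.
# Host and port are examples: substitute your own server's values.
LMX_HOST=192.168.188.11
LMX_PORT=1234
LMX_URL="http://${LMX_HOST}:${LMX_PORT}/v1"
echo "$LMX_URL"
# List loaded models via the standard OpenAI-compatible route:
#   curl -s "$LMX_URL/models"
```

If that curl call returns a JSON model list, the CLI configuration below should succeed with the same host and port.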

Network requirement
Your workstation and LMX host must be on the same LAN. Opta Local uses direct LAN connections for speed and privacy -- never route through Tailscale or other overlay networks for local inference.

Configure LMX Host

Point the CLI at your LMX server by setting the host IP and port:

1. Set the LMX host address

Replace the IP with your LMX server's local address. This is typically your Mac Studio's static IP.

Set the LMX inference server IP address
opta config set connection.host 192.168.188.11

2. Set the port (if non-default)

The default port is 1234. Only change this if your LMX instance is configured differently.

Set the LMX port (default: 1234)
opta config set connection.port 1234

3. Confirm the configuration

opta config get connection
connection.host    192.168.188.11
connection.port    1234

API Key Setup

LMX supports optional API key authentication. If your LMX instance requires a key, generate one and configure the CLI to use it.

1. Generate an API key

Creates a new API key and stores it securely in your system keychain.

opta key create
API key created and stored in keychain.
Key ID: opta_key_a1b2c3d4
Host:   192.168.188.11:1234

2. Verify the key is stored

opta key list
ID                  Host                    Created
opta_key_a1b2c3d4   192.168.188.11:1234     2026-03-01
Keychain storage
API keys are stored in your system keychain (macOS Keychain or libsecret on Linux), not in plain text config files. This keeps your credentials secure even if your config directory is synced or backed up.

Failover Hosts

If you have multiple LMX instances (for example, a Mac Studio and a Mac Pro), you can configure failover hosts. The CLI will try each host in order until one responds.

~/.config/opta/config.json
{
  "connection": {
    "host": "192.168.188.11",
    "port": 1234,
    "failover": [
      { "host": "192.168.188.12", "port": 1234 },
      { "host": "192.168.188.13", "port": 1234 }
    ],
    "timeout": 5000
  }
}

The CLI performs a lightweight health check against each host. If the primary host does not respond within the configured timeout (default 5 seconds, set via connection.timeout in milliseconds), it automatically fails over to the next host in the list.
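The failover order can be sketched as a simple loop: probe each host in turn with a short-timeout request and keep the first one that answers. This is an illustration of the documented behavior, not the CLI's actual implementation, and probing /v1/models as a health route is an assumption based on the OpenAI-compatible API:

```shell
# Return 0 if the host answers the health probe within the timeout.
# Probing /v1/models is an assumed health route (OpenAI-compatible API).
probe() {
  curl -s --max-time 5 -o /dev/null "http://$1/v1/models"
}

# Echo the first host:port that responds; exit non-zero if none do.
first_healthy() {
  for hostport in "$@"; do
    if probe "$hostport"; then
      echo "$hostport"
      return 0
    fi
  done
  return 1
}

# Example: primary first, then the failover list from the config above.
# first_healthy 192.168.188.11:1234 192.168.188.12:1234 192.168.188.13:1234
```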

SSH Configuration

For remote operations like model management on the LMX host, configure SSH access. This allows the CLI to run administrative commands on the inference server directly.

~/.ssh/config
Host lmx-studio
    HostName 192.168.188.11
    User matt
    IdentityFile ~/.ssh/id_ed25519
    ForwardAgent yes

Then configure the CLI to use this SSH alias:

Set the SSH host alias for remote model management
opta config set connection.sshAlias lmx-studio
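To confirm the alias resolves as intended, ssh -G prints the effective configuration for a host without opening a connection:

```shell
# Print the options ssh would actually use for the alias; confirms the
# Host block in ~/.ssh/config is being picked up (no connection is made).
ssh -G lmx-studio | grep -iE '^(hostname|user|identityfile) '
```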

Verify Connection

With everything configured, verify the full connection path:

1. Check LMX connectivity

The status command pings the LMX health endpoint and reports the result.

opta status
CLI:    v1.0.0
Daemon: stopped
LMX:    connected (192.168.188.11:1234)
  Model: Qwen3-30B-A3B (loaded)
  VRAM:  42.1 / 192.0 GB

2. Run a quick health check

Once a host is configured, the doctor command validates the LMX connection alongside its other checks.

opta doctor
Opta Doctor
-----------
  Node.js     v22.12.0           ok
  npm         10.9.0             ok
  Config dir  ~/.config/opta     ok
  Daemon      not running        (start with: opta daemon start)
  LMX host    192.168.188.11     ok
  LMX health  200 OK             ok
  LMX model   Qwen3-30B-A3B      loaded

3. Test inference directly

Send a quick test prompt to confirm end-to-end inference works.

opta chat --once "Say hello in one sentence."
Hello! I'm your local AI assistant, running entirely on your hardware.
Daemon not required for this step
You do not need to start the daemon to test LMX connectivity. The opta status and opta chat --once commands can connect to LMX directly. The daemon adds session persistence, permissions, and tool orchestration on top.
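To rule out the CLI itself, you can also send a raw request straight to the chat endpoint with curl. The /v1/chat/completions route and request shape follow the OpenAI convention; the IP and model name are the examples from this page:

```shell
# Raw end-to-end inference test, bypassing the Opta CLI entirely.
# Route and payload follow the OpenAI chat-completions convention.
BODY='{"model":"Qwen3-30B-A3B","messages":[{"role":"user","content":"Say hello in one sentence."}]}'
curl -s --max-time 10 http://192.168.188.11:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d "$BODY" || echo "no response: check host, port, and that a model is loaded"
```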

Troubleshooting

Connection refused

If opta status shows "connection refused", verify that:

  • LMX is running on the target host (systemctl status opta-lmx or check the process)
  • The IP address and port are correct (opta config get connection)
  • No firewall is blocking port 1234 on the LMX host
  • Both machines are on the same LAN subnet
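curl's exit code tells the two main failure modes apart: exit 7 means the TCP connection was actively refused (a host answered, but nothing is listening on that port), while exit 28 means the request timed out (the host never answered at all). A quick triage sketch, using the example address from above:

```shell
# Triage a failing LMX connection by curl exit code.
status=0
curl -s --max-time 2 -o /dev/null http://192.168.188.11:1234/v1/models || status=$?
case $status in
  0)  echo "reachable" ;;
  7)  echo "refused: host is up but LMX is not listening on that port" ;;
  28) echo "timeout: wrong IP, different subnet, or a firewall dropping packets" ;;
  *)  echo "curl failed with exit code $status" ;;
esac
```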

Timeout errors

If connections succeed but are slow or intermittent, increase the timeout:

Increase connection timeout to 10 seconds
opta config set connection.timeout 10000

DNS resolution issues

Always use IP addresses rather than hostnames for LMX connections. mDNS hostname resolution can be unreliable across different macOS versions and network configurations.