LAN Setup
Connect your CLI to the LMX inference server running on your local network. This page walks you through configuring the host address, setting up API keys, and verifying the connection.
Overview
The Opta CLI communicates with LMX over your local network using HTTP. LMX exposes an OpenAI-compatible API on a configurable port (default 1234). You need to tell the CLI where to find this server.
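For scripting against this API, a client first composes a base URL from the configured host and port. A minimal sketch; the /v1 path prefix is an assumption based on the OpenAI API convention, not something this page confirms for LMX:

```python
def base_url(host: str, port: int = 1234) -> str:
    """Compose the LMX base URL from host and port.

    The /v1 prefix follows the OpenAI API convention; the exact
    path LMX serves is an assumption here.
    """
    return f"http://{host}:{port}/v1"

# Example with the default port from this page:
print(base_url("192.168.188.11"))  # http://192.168.188.11:1234/v1
```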
Configure LMX Host
Point the CLI at your LMX server by setting the host IP and port:
Set the LMX host address
Replace the IP with your LMX server's local address. This is typically your Mac Studio's static IP.
```
opta config set connection.host 192.168.188.11
```

Set the port (if non-default)
The default port is 1234. Only change this if your LMX instance is configured differently.
```
opta config set connection.port 1234
```

Confirm the configuration
```
opta config get connection
```

```
connection.host  192.168.188.11
connection.port  1234
```
API Key Setup
LMX supports optional API key authentication. If your LMX instance requires a key, generate one and configure the CLI to use it.
Generate an API key
Creates a new API key and stores it securely in your system keychain.
```
opta key create
```

```
API key created and stored in keychain.
Key ID: opta_key_a1b2c3d4
Host:   192.168.188.11:1234
```
Verify the key is stored
```
opta key list
```

```
ID                 Host                 Created
opta_key_a1b2c3d4  192.168.188.11:1234  2026-03-01
```
Failover Hosts
If you have multiple LMX instances (for example, a Mac Studio and a Mac Pro), you can configure failover hosts. The CLI will try each host in order until one responds.
```json
{
  "connection": {
    "host": "192.168.188.11",
    "port": 1234,
    "failover": [
      { "host": "192.168.188.12", "port": 1234 },
      { "host": "192.168.188.13", "port": 1234 }
    ],
    "timeout": 5000
  }
}
```

The CLI performs a lightweight health check against each host. If the primary host does not respond within the configured timeout (default 5 seconds), it automatically falls through to the next host in the list.
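The fallthrough behavior can be sketched as a sequential probe with a per-host timeout. This is illustrative only: it uses a raw TCP connect where the CLI performs an HTTP health check, and pick_host is a made-up name, not the CLI's internals:

```python
import socket

def pick_host(hosts: list[tuple[str, int]], timeout_ms: int = 5000):
    """Return the first (host, port) that accepts a connection.

    Hosts are tried in order, mirroring the documented failover
    behavior; returns None if every host fails within the timeout.
    """
    for host, port in hosts:
        try:
            with socket.create_connection((host, port), timeout=timeout_ms / 1000):
                return host, port
        except OSError:
            continue  # refused or timed out; fall through to the next host
    return None
```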
SSH Configuration
For remote operations like model management on the LMX host, configure SSH access. This allows the CLI to run administrative commands on the inference server directly.
Add an entry to your ~/.ssh/config:

```
Host lmx-studio
  HostName 192.168.188.11
  User matt
  IdentityFile ~/.ssh/id_ed25519
  ForwardAgent yes
```

Then configure the CLI to use this SSH alias:

```
opta config set connection.sshAlias lmx-studio
```

Verify Connection
With everything configured, verify the full connection path:
Check LMX connectivity
The status command pings the LMX health endpoint and reports the result.
```
opta status
```

```
CLI:    v1.0.0
Daemon: stopped
LMX:    connected (192.168.188.11:1234)
Model:  Qwen3-30B-A3B (loaded)
VRAM:   42.1 / 192.0 GB
```
Run a quick health check
The doctor command validates the LMX connection alongside its other checks.
```
opta doctor
```

```
Opta Doctor
-----------
Node.js     v22.12.0          ok
npm         10.9.0            ok
Config dir  ~/.config/opta    ok
Daemon      not running (start with: opta daemon start)
LMX host    192.168.188.11    ok
LMX health  200 OK            ok
LMX model   Qwen3-30B-A3B     loaded
```
Test inference directly
Send a quick test prompt to confirm end-to-end inference works.
```
opta chat --once "Say hello in one sentence."
```

```
Hello! I'm your local AI assistant, running entirely on your hardware.
```
The opta status and opta chat --once commands connect to LMX directly and work even while the daemon is stopped. The daemon adds session persistence, permissions, and tool orchestration on top.

Troubleshooting
Connection refused
If opta status shows "connection refused", verify that:
- LMX is running on the target host (systemctl status opta-lmx, or check the process)
- The IP address and port are correct (opta config get connection)
- No firewall is blocking port 1234 on the LMX host
- Both machines are on the same LAN subnet
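The subnet check can be scripted with Python's standard ipaddress module; the /24 prefix below is an assumption about your LAN's netmask, so adjust it to match your network:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int = 24) -> bool:
    """True if both IPv4 addresses fall in the same subnet.

    The /24 default is an assumption about a typical home or
    office LAN; pass your actual prefix length if it differs.
    """
    net = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net

print(same_subnet("192.168.188.11", "192.168.188.42"))  # True
```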
Timeout errors
If connections succeed but are slow or intermittent, increase the timeout:
```
opta config set connection.timeout 10000
```

DNS resolution issues
Always use IP addresses rather than hostnames for LMX connections. mDNS hostname resolution can be unreliable across different macOS versions and network configurations.
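If you want your own scripts to enforce this rule, you can reject anything that is not an IP literal. A small sketch using the standard library:

```python
import ipaddress

def is_ip_literal(host: str) -> bool:
    """True for literal IPv4/IPv6 addresses, False for hostnames."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

print(is_ip_literal("192.168.188.11"))    # True
print(is_ip_literal("lmx-studio.local"))  # False
```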