First Session

Now that the CLI is installed and connected to LMX, it is time to run your first AI session. This page covers interactive chat, autonomous task execution, permission handling, and session management.

Start a Chat

The simplest way to interact with your local AI is the opta chat command. This opens an interactive chat session in your terminal.

1. Launch interactive chat

This starts the daemon (if not already running) and opens a new chat session.

opta chat

2. Type your first prompt

Once the session is active, type a message and press Enter.

Chat session
> Explain the difference between let and const in TypeScript.

In TypeScript (and JavaScript), `let` and `const` are both block-scoped
variable declarations, but they differ in mutability:

- **const** declares a variable that cannot be reassigned after initialization.
  The binding is immutable, though object properties can still be modified.

- **let** declares a variable that can be reassigned. Use it when the value
  needs to change during execution.

Best practice: default to `const` and only use `let` when reassignment
is genuinely needed.

3. Continue the conversation

The session maintains full context. Follow-up messages reference the entire conversation history.

Follow-up
> Give me an example of when let is necessary.

A common case is loop counters:

for (let i = 0; i < items.length; i++) {
  // 'i' must be reassigned each iteration
  process(items[i]);
}

You cannot use `const` here because `i` is reassigned on each iteration.

4. Exit the session

Press Ctrl+C or type /exit to end the chat session.

/exit

One-shot mode

For quick questions without an interactive session, use the --once flag:

opta chat --once "What is the capital of Australia?"

This prints the response and exits immediately.

Understanding Streaming

Responses stream token-by-token as the model generates them. You will see text appearing incrementally in your terminal rather than waiting for the complete response.

During streaming, the CLI shows:

  • Token output -- the response text, rendered as it arrives
  • Thinking indicators -- for reasoning models, a spinner or thinking block shows the model's internal reasoning before the final response
  • Turn statistics -- after each response, the CLI displays token count, generation speed (tokens/sec), and elapsed time

Turn statistics
--- Turn complete ---
Tokens: 147 (prompt: 52, completion: 95)
Speed:  41.2 tok/s
Time:   2.3s
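
The speed figure is simply completion tokens divided by elapsed wall-clock time. As a rough sketch (the function below is illustrative, not part of the CLI; the CLI rounds the displayed elapsed time, so recomputing from the printed values can differ slightly from the printed speed):

```typescript
// Illustrative only: how a tok/s figure like the one above is derived.
function tokensPerSecond(completionTokens: number, elapsedSeconds: number): number {
  // Round to one decimal place, matching the turn-statistics display.
  return Math.round((completionTokens / elapsedSeconds) * 10) / 10;
}

// 95 completion tokens over 2.3 s is roughly 41 tok/s
tokensPerSecond(95, 2.3);
```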

Do Mode

While opta chat is conversational, opta do is action-oriented. It tells the AI to complete a specific task using available tools -- file operations, shell commands, code analysis, and more.

Run an autonomous task
opta do "Create a TypeScript function that validates email addresses using a regex, with unit tests"

In do mode, the AI will:

  1. Analyze the task and plan the approach
  2. Use tools to read existing files, create new files, and run commands
  3. Ask for permission before potentially destructive operations
  4. Report the results when the task is complete
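
For a task like the example above, the generated code might look something like the following sketch. The regex and function name here are illustrative, not the CLI's actual output:

```typescript
// Illustrative sketch of what the email-validation task might produce.
// This simple regex rejects whitespace and requires exactly one "@" part
// and a dot in the domain; real-world validation often needs more nuance.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function isValidEmail(input: string): boolean {
  return EMAIL_RE.test(input.trim());
}
```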

Chat vs Do

Chat is for conversation, questions, and exploration: the model responds with text but does not take actions. Do is for task execution: the model actively uses tools to modify files, run commands, and complete objectives. Use chat when you want advice; use do when you want results.

Permission Prompts

When the AI wants to perform an action in do mode, the CLI prompts you for approval. This is the permission system -- it ensures no tool runs without your explicit consent.

Permission prompt
Tool: write_file
Path: src/utils/validate-email.ts
Content: [142 lines]

Allow this action? [y]es / [n]o / [a]lways for this tool: 

Your options at each permission prompt:

  • y (yes) -- approve this single invocation
  • n (no) -- deny this invocation; the AI will try an alternative approach
  • a (always) -- approve all future invocations of this tool for the current session

Safe tools auto-approve

Some read-only tools (like read_file and list_directory) are classified as safe and auto-approve without prompting. Write operations, shell commands, and destructive actions always require explicit approval.
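
The prompt flow above can be sketched as a small decision function. This is an illustration of the described behavior, not the CLI's actual implementation; the tool names mirror the examples on this page:

```typescript
// Sketch of the approval logic described above (not the real implementation).
type Decision = "yes" | "no" | "always";

const SAFE_TOOLS = new Set(["read_file", "list_directory"]); // auto-approved, never prompted
const sessionAllowed = new Set<string>();                    // tools granted "always" this session

function isApproved(tool: string, answer: Decision): boolean {
  if (SAFE_TOOLS.has(tool)) return true;     // safe tools skip the prompt entirely
  if (sessionAllowed.has(tool)) return true; // previously granted "always"
  if (answer === "always") {
    sessionAllowed.add(tool);                // remember for the rest of the session
    return true;
  }
  return answer === "yes";                   // "yes" approves once; "no" denies
}
```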

Managing Sessions

Every chat and do interaction creates a session. Sessions store the full conversation history, tool invocations, and metadata. You can list, resume, and manage sessions after they end.

1. List recent sessions

opta sessions list
ID          Mode   Created              Turns  Title
a1b2c3d4    chat   2026-03-01 10:15:00  8      TypeScript let vs const
e5f6g7h8    do     2026-03-01 10:22:00  3      Email validation function
i9j0k1l2    chat   2026-02-28 16:40:00  12     React hook patterns

2. Resume a previous session

Continue a conversation from where you left off. The full context is restored.

opta chat --resume a1b2c3d4

3. View session details

Inspect metadata, token usage, and tool call history for a session.

opta sessions show a1b2c3d4
Session: a1b2c3d4
Mode:    chat
Created: 2026-03-01 10:15:00
Turns:   8
Tokens:  1,247 (prompt) + 892 (completion)
Tools:   0 invocations
Title:   TypeScript let vs const

Exporting Sessions

Sessions can be exported in multiple formats for sharing, archiving, or processing:

Export a session to Markdown
opta sessions export a1b2c3d4 --format markdown --output session.md

Supported export formats:

  • markdown -- human-readable Markdown document
  • json -- full session data including metadata and tool calls
  • text -- plain text transcript

Export full session data as JSON
opta sessions export a1b2c3d4 --format json --output session.json
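
Once exported, the JSON can be post-processed with ordinary tooling. The sketch below assumes a plausible schema (a `mode` field and a `turns` array of `role`/`content` entries); the real field names may differ, so check an actual export first:

```typescript
// Assumed schema for an exported session -- verify against a real export.
interface SessionExport {
  mode: string;
  turns: { role: "user" | "assistant"; content: string }[];
}

// Produce a one-line summary, e.g. for indexing archived sessions.
function summarize(session: SessionExport): string {
  const userTurns = session.turns.filter((t) => t.role === "user").length;
  return `${session.mode} session: ${session.turns.length} turns, ${userTurns} from user`;
}
```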

Tips

Slash commands in chat
During an interactive chat session, you can use slash commands for quick actions without leaving the session:
  • /model -- switch the active model
  • /session -- view current session info
  • /debug -- toggle debug output
  • /help -- list all available slash commands

Model selection
By default, the CLI uses whatever model is currently loaded on LMX. To request a specific model:
opta chat --model qwen3-30b-a3b
If the requested model is not loaded, LMX will attempt to load it (unloading the current model if necessary to free VRAM).

You are now ready to use the full Opta Local stack. The next section covers the CLI reference in detail, including all available commands, configuration options, and slash commands.