LLM Command Line Interface

Interact with our language models directly from your terminal with simple, powerful commands.

Installation

macOS/Linux:

curl https://llm.org/install | sh

Installs via shell script (may require sudo)

Homebrew:

brew install llm

Popular macOS package manager

Windows:

winget install llm-cli

Available through Windows Package Manager
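Whichever installer you use, it is worth confirming the binary actually landed on your PATH before moving on; a minimal sketch:

```shell
# Confirm `llm` is reachable; print its location, or a hint if it is not
if command -v llm >/dev/null 2>&1; then
  echo "llm installed at: $(command -v llm)"
else
  echo "llm not found; open a new shell or check your PATH"
fi
```

If the binary is missing right after an install, the usual cause is a shell session that predates the PATH change.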

Getting Started

1. Set API Key

Either set the LLM_API_KEY environment variable or pass --key with each command.

export LLM_API_KEY=your_api_key

2. Basic Command

Generate text from a prompt.

llm generate "The future of AI will"

3. View Output

Formatted output with token stats.

The future of AI will be shaped by responsible development and widespread adoption.

PROMPT TOKENS: 7 • COMPLETION: 23 • TOTAL: 30
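If you need the counts programmatically, the stats line can be parsed with standard tools; a sketch against the sample line above:

```shell
# Pull the total token count out of the stats line with sed
stats='PROMPT TOKENS: 7 • COMPLETION: 23 • TOTAL: 30'
total=$(printf '%s\n' "$stats" | sed 's/.*TOTAL: //')
echo "Total tokens: $total"   # Total tokens: 30
```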

Core Commands

Text Generation

llm generate "Once upon a time"                        # default model
llm generate --model gpt4 "Explain quantum physics"    # use a specific model

Embeddings

llm embed "My query"                                   # get a vector representation
llm search --query "AI ethics" --database mydocs       # similarity search
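To embed a batch of documents, loop over a file with one input per line. The sketch below is a dry run (`echo` in front of the command prints each call instead of executing it); drop the `echo` to run the real thing:

```shell
# Build a small input file, then emit one `llm embed` call per line
printf '%s\n' "AI ethics" "model safety" > queries.txt
while IFS= read -r q; do
  echo llm embed "$q"
done < queries.txt
```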

Configuration

llm config --key your_key --default-model gpt4

Usage Stats

llm stats

History

llm history

Configuration Options

Model Selection:       --model gpt3|gpt4|llama3
Temperature:           --temperature 0.5-2.0
Max Tokens:            --max-tokens 100-4096
Top Probability:       --top-p 0.1-1.0
Repetition Penalty:    --repetition-penalty 1.0-2.0
Output Format:         --format text|json
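These options combine freely on a single invocation. A dry-run sketch that prints the assembled command (remove the leading `echo` to execute it; the prompt text here is just a stand-in):

```shell
# Assemble a fully specified call from the options above (dry run)
echo llm generate "Write a haiku about the sea" \
  --model llama3 --temperature 0.8 --max-tokens 60 --top-p 0.9 --format json
```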

Best Practice Examples

Summarize Text

llm generate "Summarize: [long article text]" --max-tokens 200
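For real articles, splicing the text in via command substitution keeps the command readable. Another dry-run sketch, with `article.txt` as a stand-in file:

```shell
# Read the article from a file and splice it into the prompt (dry run)
printf '%s\n' "AI systems are increasingly common." > article.txt
echo llm generate "Summarize: $(cat article.txt)" --max-tokens 200
```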

Translate Text

llm generate "Translate: Hello world" --language es

Streaming Response

llm generate "The future of AI is" --stream | while read -r line; do echo "$line"; done
Use streaming for real-time responses or large outputs
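The same pipeline pattern works with any line-producing command; here `printf` stands in for `llm generate --stream` so the loop can be seen in isolation:

```shell
# printf stands in for the streaming CLI; each chunk arrives as a line
printf '%s\n' "The future" "of AI" "is streaming" | while read -r line; do
  echo "chunk: $line"
done
```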

Frequently Asked Questions

How do I get my API key?

What if I reach my quota?

What models are available?