config.toml
The main configuration file lives at `~/.fellama/config.toml`. All fields are optional;
FeLLAMA uses sensible defaults and can auto-discover models from the endpoint.
```toml
# LLM endpoint (OpenAI-compatible API)
endpoint = "http://localhost:8000/v1"
# Model identifier
model = "gpt-oss:120b"
# Agent temperature (0.0 - 1.0)
agent_temperature = 0.6
# Browserless endpoint for web automation
browserless_endpoint = "ws://localhost:3000"
# Enable trace logging for debugging
enable_trace_log = false
# Enable smartweb agent trace logging
enable_smartweb_tracelog = false
# Embedding model endpoint (for Vector DB)
embedding_endpoint = "http://localhost:8000/v1"
# Embedding model name
embedding_model = "text-embedding-model"
# Number of results for vector search
vector_top_k = 5
# Directories to search for skill packages
skills_paths = ["~/.fellama/skills/"]
```
Field Reference
`endpoint`
Base URL for the LLM API. Must be OpenAI-compatible (`/v1/chat/completions`).
Works with vLLM, llama.cpp, Ollama, LiteLLM, OpenAI, and other compatible servers.
Default: `"http://localhost:8000/v1"`

`model`
Model identifier as known by the endpoint. If omitted, FeLLAMA auto-discovers it from
`/v1/models`.

`agent_temperature`
Controls LLM creativity. Lower values produce more deterministic output; higher values increase variation.
Default: `0.6`

`browserless_endpoint`
WebSocket URL for the Browserless/Chrome DevTools Protocol service. Required only for the SmartWeb agent.
Default: `"ws://localhost:3000"`

`enable_trace_log`
When `true`, logs every message crossing a process or network boundary,
including LLM request/response bodies, WebSocket payloads, and subprocess I/O.
Default: `false`

`enable_smartweb_tracelog`
Enables additional trace logging specific to the SmartWeb agent, including browser actions and page interactions.
Default: `false`

`embedding_endpoint`
Base URL for the embedding model API. Must serve `/v1/embeddings`.
Only needed for the Vector DB.

`embedding_model`
Model name for generating embeddings. Used by the Vector DB for document chunking and search.

`vector_top_k`
Number of results to return from vector similarity searches.
Default: `5`

`skills_paths`
Directories where FeLLAMA looks for installed skill packages.
Default: `["~/.fellama/skills/"]`
URL Rules
The file `~/.fellama/url_rules.toml` controls how the SmartWeb agent classifies URLs
during web research. Sources are ranked by tier for prioritization and trust.
```toml
# Tier 1: Authoritative sources — prioritized and trusted
[tier1]
patterns = [
    "docs\\.python\\.org",
    "developer\\.mozilla\\.org",
    "\\.gov$",
    "arxiv\\.org"
]

# Tier 2: Community sources — useful but lower trust
[tier2]
patterns = [
    "stackoverflow\\.com",
    "medium\\.com",
    "dev\\.to"
]

# Blacklisted: Never visit these domains
[blacklisted]
patterns = [
    "malware-site\\.com"
]
```
| Tier | Behavior |
|---|---|
| `tier1` | Authoritative sources. Prioritized in research; content given higher weight in corroboration. |
| `tier2` | Community sources. Used for supplementary information and cross-referencing. |
| `tier3` | Everything else (implicit). Visited but treated with standard weight. |
| `blacklisted` | Never visited. The agent skips these URLs entirely. |
Patterns are Rust regular expressions matched against the full URL domain.
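As an illustration of how the tiers are applied, here is a minimal sketch in Python's `re` module (the real implementation uses Rust regular expressions; the `classify_url` helper, the blacklist-first ordering, and the example rules are assumptions for this sketch):

```python
import re
from urllib.parse import urlparse

# Example rules mirroring the url_rules.toml above.
URL_RULES = {
    "blacklisted": [r"malware-site\.com"],
    "tier1": [r"docs\.python\.org", r"developer\.mozilla\.org", r"\.gov$", r"arxiv\.org"],
    "tier2": [r"stackoverflow\.com", r"medium\.com", r"dev\.to"],
}

def classify_url(url: str) -> str:
    """Return the tier for a URL; the blacklist wins, unmatched URLs fall into tier3."""
    domain = urlparse(url).netloc
    for tier in ("blacklisted", "tier1", "tier2"):  # check the blacklist first
        if any(re.search(p, domain) for p in URL_RULES[tier]):
            return tier
    return "tier3"  # everything else (implicit)

print(classify_url("https://docs.python.org/3/library/re.html"))  # tier1
print(classify_url("https://example.com/page"))                   # tier3
```

Note the `\\.gov$` pattern is anchored at the end of the domain, so it matches `www.usa.gov` but not a path component.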
CLI Arguments
All FeLLAMA binaries follow a standard argument contract: the same name always means the same thing across every agent and tool.
| Argument | Type | Description |
|---|---|---|
| `--request <TEXT>` | string | Inline text request (primary input) |
| `--file <PATH>` | repeatable | Local file path input |
| `--url <URL>` | repeatable | Remote URL input |
| `--json_arg <JSON>` | string | JSON object map of all arguments (machine-to-machine) |
| `--human` | flag | Pretty-print JSON output for terminal reading |
| `--model <MODEL>` | string | LLM model override (agents only) |
| `--endpoint <URL>` | string | LLM endpoint override (agents only) |
| `--task-id <UUID>` | string | Caller-assigned task UUID, echoed in response |
| `--parent-id <UUID>` | string | Parent task UUID for tracing nested calls |
| `--output-dir <DIR>` | string | Directory for multi-file output |
| `--output-format <FMT>` | enum | Output format: `csv`, `markdown`, `json`, `plaintext` |
| `--max-turns <N>` | integer | Maximum LLM turns in an agentic loop |
| `--dry-run` | flag | Validate and plan without side effects |
| `--session-root <DIR>` | string | Base directory for session dirs (default: `~/.fellama`) |
Note: `--endpoint` and `--model` are available only on agent binaries, not on
extractors like `fellama-pdf-extractor`. Passing them to a tool binary causes
exit code 2.
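For machine-to-machine use, a caller can pack every argument into a single JSON object and pass it via `--json_arg`. A minimal Python sketch of constructing such an invocation (the binary name `fellama-agent` and the argument keys shown are assumptions for illustration):

```python
import json
import shlex

def build_invocation(binary: str, args: dict) -> str:
    """Serialize an argument map into a --json_arg command line."""
    payload = json.dumps(args)
    return f"{binary} --json_arg {shlex.quote(payload)}"

cmd = build_invocation(
    "fellama-agent",  # hypothetical binary name, for illustration only
    {"request": "summarize this", "max-turns": 10, "output-format": "json"},
)
print(cmd)
```

`shlex.quote` keeps the JSON payload intact as a single shell argument, which is the point of the machine-to-machine form.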
Environment Variables
| Variable | Purpose | Precedence |
|---|---|---|
| `AGENT_MODEL` | Default model name for all agents | Below CLI args and `config.toml` |
Environment variables have lower priority than both CLI arguments and `config.toml` settings; they serve as a fallback when neither is specified.
LLM Context Resolution
FeLLAMA resolves the LLM endpoint and model through a priority chain. The first source that provides a value wins.
1. `--endpoint` / `--model` CLI flags
2. `~/.fellama/config.toml`
3. `AGENT_MODEL` environment variable
4. Auto-discovery from the `/v1/models` endpoint
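The chain above is a first-non-empty lookup. A minimal Python illustration (the function and parameter names are assumptions, not FeLLAMA's actual code):

```python
import os

def resolve_model(cli_model=None, config_model=None, discovered=None):
    """Return the first model source that provides a value, in priority order."""
    for candidate in (
        cli_model,                      # 1. --model CLI flag
        config_model,                   # 2. ~/.fellama/config.toml
        os.environ.get("AGENT_MODEL"),  # 3. environment variable fallback
        discovered,                     # 4. auto-discovered via /v1/models
    ):
        if candidate:
            return candidate
    raise RuntimeError("no model available from any source")

# config.toml wins over auto-discovery when no CLI flag is given
print(resolve_model(config_model="gpt-oss:120b", discovered="fallback-model"))
```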
Session Directories
FeLLAMA persists all session state to disk. The root directory is `~/.fellama/`
(customizable via `--session-root`).
SmartWeb Agent Sessions
```
~/.fellama/<session-id>/
├── session.log       # NDJSON turn-by-turn log
├── trace.log         # Full LLM I/O trace (if enabled)
├── MEMORY.md         # Human-readable research memory
├── MEMORY.1.md       # Archived memory (after compaction)
├── meta.json         # Token counts, status
├── progress.json     # Live progress snapshot
├── summary.json      # Final summary
├── output.txt        # Final output
├── links.txt         # Saved URLs
├── page_cache/       # Cached distillations
│   ├── {hash}.json
│   └── {hash}.raw.txt
└── screenshots/      # Page captures
    └── turn_N.png
```
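Since `session.log` is NDJSON (one JSON object per line), it can be inspected with a few lines of Python. The entry fields used below (`turn`, `status`) are assumptions for illustration; check a real log for the actual schema:

```python
import json
from pathlib import Path

def read_session_log(path: Path) -> list[dict]:
    """Parse an NDJSON log: one JSON object per non-empty line."""
    entries = []
    for line in path.read_text().splitlines():
        if line.strip():
            entries.append(json.loads(line))
    return entries

# Demo with synthetic data (real entries will differ):
log = Path("session.log.example")
log.write_text('{"turn": 1, "status": "ok"}\n{"turn": 2, "status": "ok"}\n')
for entry in read_session_log(log):
    print(entry["turn"], entry["status"])
```

The same approach applies to the skill worker's `session.log` lifecycle entries.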
Skill Worker Sessions
```
~/.fellama/skill-worker/<session-id>/
├── session.log         # NDJSON lifecycle log
├── trace.log           # Full trace (if enabled)
├── MEMORY.md           # Session narrative
├── progress.json       # Phase/turn snapshot
├── meta.json           # Token counts & status
├── result.json         # Canonical result payload
├── summary.json        # Final summary
├── output.txt          # Rendered output
└── evaluated_risk.txt  # Safety evaluation result
```
Resume & Replay:
- Use `--resume <session-dir>` to continue interrupted work.
- Use `--replay <session-dir> --turn N` to re-run a specific LLM call for debugging.
Skills
Agent Skills are self-contained packages that extend FeLLAMA's capabilities.
They are installed to `~/.fellama/skills/` and tracked in `skills-lock.json`.
Installing Skills
```sh
cargo run --release --bin fellama-install-skill -- \
  --request "install from github:user/repo"
```
Skills Lock File
The `skills-lock.json` file tracks installed skills with integrity checksums:

```json
{
  "version": 1,
  "skills": {
    "Skill Name": {
      "source": "user/repo",
      "sourceType": "github",
      "computedHash": "8d262f08..."
    }
  }
}
```
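A consumer of the lock file could verify a skill's integrity by recomputing the checksum and comparing it to `computedHash`. A hedged Python sketch (the exact algorithm and hash input, SHA-256 over the package bytes here, are assumptions; check FeLLAMA's source for the real scheme):

```python
import hashlib

def compute_hash(package_bytes: bytes) -> str:
    """SHA-256 hex digest of the package contents (assumed scheme)."""
    return hashlib.sha256(package_bytes).hexdigest()

def verify_skill(lock: dict, name: str, package_bytes: bytes) -> bool:
    """Compare a freshly computed hash against the lock file entry."""
    expected = lock["skills"][name]["computedHash"]
    return compute_hash(package_bytes) == expected

data = b"example skill package contents"
lock = {"version": 1, "skills": {"Example": {"computedHash": compute_hash(data)}}}
print(verify_skill(lock, "Example", data))  # True
```

A mismatch would indicate the installed package no longer matches what was recorded at install time.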