Documentation
Build, deploy, and run sovereign AI agents
Overview
Scout is a platform for building autonomous AI agents that monitor data sources, research and qualify signals, and take action — all running on your infrastructure with your API keys.
The platform has three components:
- Architect: AI-powered agent designer at myagentos.ai
- Runtime: Python engine with monitors, qualification, and actions
- Container: Rocky Linux 9 deployment with optional bundled LLM
Every agent you build is fully self-contained — you download the source code, run it on your machine or server, and own everything. No phone-home, no telemetry, no vendor lock-in.
Getting Started
Build Your First Agent
- Go to myagentos.ai/create
- Enter your LLM API key (Anthropic, OpenAI, or Gemini)
- Describe what you want to automate — the Architect will design your agent through conversation
- Confirm the configuration when it looks right
- Choose your deployment mode (Local Python, Container, or Sovereign)
- Download the agent package (.zip)
API Keys
Scout uses a Bring Your Own Key (BYOK) model. The API key you provide in the Architect powers the design conversation. Your agent also needs keys at runtime — these go in the .env file.
| Provider | Key Format | Free Tier |
|---|---|---|
| Anthropic | sk-ant-api03-... | Pay per use (~$0.01/signal) |
| OpenAI | sk-... | Pay per use (~$0.01/signal) |
| Gemini | AI... | Free tier available |
| Grok (xAI) | xai-... | Pay per use |
| Custom (Ollama, etc) | Any | Free (local) |
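As an example, a minimal .env for an Anthropic-backed agent with web research might look like the following (placeholder values; include whichever keys your provider, monitors, and actions actually need):

```
ANTHROPIC_API_KEY=sk-ant-api03-...
BRAVE_API_KEY=...
```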
Agent Design
Every Scout agent has three stages: Monitor (detect signals), Qualify (score and research), and Act (take action on qualified signals).
Monitors
Monitors are data sources your agent watches. Each monitor polls on a schedule and produces signals.
| Type | Source | Auth Required |
|---|---|---|
| rss | Any RSS/Atom feed | No |
| imap | Email inbox (IMAP) | Yes (app password) |
| slack | Slack channel | Yes (bot token) |
| reddit | Subreddit or search | No (public) |
| github | Repo events, issues | Optional (rate limits) |
| http_poll | Any HTTP endpoint | Varies |
| webhook | Incoming webhooks | No |
| price_feed | Stock/crypto prices | No (via yfinance) |
| technical_indicator | RSI, MACD, Bollinger, etc. | No |
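As an illustration of how a monitor's keyword filtering might work, here is a minimal sketch; the function name and signal shape are assumptions for illustration, not Scout's actual API:

```python
def matches_keywords(signal: dict, keywords: list[str]) -> bool:
    """Return True if any keyword appears in the signal's title or summary."""
    text = f"{signal.get('title', '')} {signal.get('summary', '')}".lower()
    return any(keyword.lower() in text for keyword in keywords)
```

A monitor configured with `keywords: ["AI", "GPU"]` would then emit only the feed entries for which this check passes.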
Qualification
When a signal arrives, the agent researches it using web search, then sends it through AI-powered qualification. The qualifier scores each signal 1-10 based on your criteria and decides whether to act.
You set the minimum score threshold (e.g., 7/10) and the criteria in natural language. The Architect generates a detailed qualification prompt tuned to your use case.
Actions
Actions fire on qualified signals. Multiple actions can fire per signal.
| Type | What It Does |
|---|---|
| email_draft | Generates personalized email draft for review |
| slack_post | Posts to Slack channel via webhook |
| webhook_post | POSTs JSON to any URL |
| console | Prints to stdout (testing) |
| file_output | Appends to a file |
| alpaca_trade | Executes stock trades via Alpaca |
| binance_trade | Executes crypto trades via Binance |
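For instance, a webhook_post action ultimately POSTs a JSON body to the configured URL. A minimal sketch of building that payload (the field names here are assumptions, not Scout's actual wire format):

```python
import json

def build_webhook_payload(signal: dict, score: int) -> bytes:
    """Serialize a qualified signal as the JSON body of an HTTP POST."""
    body = {"signal": signal, "score": score, "agent": "scout"}
    return json.dumps(body, sort_keys=True).encode("utf-8")
```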
Trading Agents
Scout supports autonomous trading with built-in safety rails. Trading agents use price feed and technical indicator monitors that bypass qualification (signals fire directly to trade actions).
Agents default to paper trading; pass --live to enable real trading (requires explicit confirmation). Past signals don't guarantee future performance.
# Paper trading (safe default)
python3 main.py --daemon
# Live trading (asks for confirmation)
python3 main.py --daemon --live
# Force paper mode
python3 main.py --daemon --paper
Deployment
Three ways to run your agent. All start from the same downloaded zip file.
- Local Python: run directly with Python 3.10+ (simplest option)
- Container: Rocky Linux 9, ~180MB, Podman or Docker
- Sovereign: air-gapped, bundled LLM, no internet needed
Local Python
unzip my-agent.zip && cd my-agent
pip install -r requirements.txt
cp .env.example .env
# Add your API keys to .env
# Live dashboard
python3 main.py --dashboard
# Headless
python3 main.py --daemon
# Test one cycle
python3 main.py --once
Rocky Linux Container
Requires Podman (recommended) or Docker.
unzip my-agent.zip && cd my-agent
cp .env.example .env
# Add your API keys to .env
# Build the Rocky Linux container (~180MB)
./container/build.sh
# Run with live dashboard
./container/run.sh --config ./config.yaml --env ./.env
# Run headless in background
./container/run.sh --config ./config.yaml --env ./.env --detach
# Include trading dependencies
./container/build.sh --trading
Fully Sovereign
Bundles Ollama and a local LLM model into the container image. Once built, no internet connection is needed. No API keys for the LLM. No data leaves your machine.
unzip my-agent.zip && cd my-agent
# Build with bundled model (downloads once, ~2-6GB image)
./container/build.sh --sovereign --model llama3.2:3b
# Run — fully offline from this point
./container/run.sh --config ./config.yaml --env ./.env
| Model | Size | RAM | Speed (CPU) | Best For |
|---|---|---|---|---|
| llama3.2:3b | ~2GB | 4GB+ | ~10 tok/s | Most agents, fast, lightweight |
| phi3:mini | ~2.3GB | 4GB+ | ~8 tok/s | Strong reasoning |
| mistral:7b | ~4GB | 8GB+ | ~5 tok/s | Best quality on CPU |
| llama3.1:8b | ~4.7GB | 8GB+ | ~4 tok/s | Newest Llama |
| gemma2:9b | ~5.4GB | 12GB+ | ~3 tok/s | Google's best small model |
Cloud Deployment
Deploy your container to any cloud provider. You don't need your own hardware.
CPU VPS ($5-10/month)
Best for most agents. A 3B sovereign model runs fine on CPU. Works on Hetzner, DigitalOcean, Vultr, Linode.
# Build locally
./container/build.sh --sovereign --model llama3.2:3b
# Save to file and copy to VPS
podman save scout-agent:sovereign -o scout-agent.tar
scp scout-agent.tar config.yaml .env user@your-vps:~/
# On the VPS
ssh user@your-vps
podman load -i scout-agent.tar
podman run -d --name scout-agent --restart=always \
-v ~/config.yaml:/app/config.yaml:ro \
-v ~/.env:/app/.env:ro \
-v ~/data:/app/data \
scout-agent:sovereign
GPU Cloud
For larger models (7B+) or low-latency needs. Lambda Labs ($0.80/hr), RunPod ($0.39/hr), Vast.ai ($0.15/hr).
# Push to container registry
podman push scout-agent:sovereign ghcr.io/yourname/scout-agent:sovereign
# On GPU instance
podman pull ghcr.io/yourname/scout-agent:sovereign
podman run -d --name scout-agent \
--device nvidia.com/gpu=all \
-v ./config.yaml:/app/config.yaml:ro \
scout-agent:sovereign
Air-Gapped Deployment
For classified or disconnected environments. Build on an internet-connected machine, transport via secure media.
# On internet-connected machine:
./container/build.sh --sovereign --model llama3.2:3b
podman save scout-agent:sovereign -o scout-agent.tar
# Copy scout-agent.tar + config.yaml + .env to USB drive
# On air-gapped machine:
podman load -i scout-agent.tar
podman run -d --name scout-agent \
-v ./config.yaml:/app/config.yaml:ro \
-v ./.env:/app/.env:ro \
scout-agent:sovereign
Dashboard
Scout includes a live terminal dashboard built with Rich. It shows agent status, activity log, stats, recent drafts, and monitor/action configuration, all updating in real time.
# Local
python3 main.py --dashboard
# Container (default when not using --detach)
./container/run.sh --config ./config.yaml --env ./.env
The dashboard shows:
- Agent status (running/idle), uptime, and poll countdown
- Signals found, qualified, drafted, and errors per cycle
- Live activity log with timestamps
- Recent drafts with scores and review status
- Configured monitors and actions
Press Ctrl+C to cleanly shut down the agent.
Draft Review
When an agent qualifies a signal and generates an email draft or other action output, it saves the draft for review. Use the CLI to manage drafts.
# List pending drafts
python3 main.py review list
# View a specific draft
python3 main.py review show 1
# Approve a draft
python3 main.py review approve 1
# Reject a draft
python3 main.py review reject 1
# Show stats
python3 main.py review stats
Configuration
config.yaml
The agent's configuration file. Generated by the Architect, but fully editable. Key fields:
name: my-agent
description: Monitors HN for AI news
poll_interval: 300  # seconds between polls
llm_provider: anthropic  # or openai, gemini, grok, custom
llm_model: claude-sonnet-4-20250514
research_provider: brave  # or tavily, perplexity
qualification:
  min_score: 7
  criteria: "Relevant to AI infrastructure..."
monitors:
  - name: hn-feed
    type: rss
    url: https://news.ycombinator.com/rss
    keywords: ["AI", "GPU", "infrastructure"]
actions:
  - name: console-log
    type: console
    notify_on: qualified
Environment Variables
| Variable | Standard | Sovereign | Description |
|---|---|---|---|
| ANTHROPIC_API_KEY | Required | Not needed | Claude API key |
| OPENAI_API_KEY | Required | Not needed | OpenAI API key |
| LLM_BASE_URL | Optional | Pre-configured | Custom LLM endpoint |
| LLM_API_KEY | Optional | Pre-configured | Custom LLM key |
| BRAVE_API_KEY | Optional | Optional | Web search for research |
| SLACK_BOT_TOKEN | If using Slack | If using Slack | Slack bot token |
| ALPACA_API_KEY | If trading | If trading | Alpaca trading key |
| ALPACA_API_SECRET | If trading | If trading | Alpaca trading secret |
Sovereign Models
When building in sovereign mode, you choose which model to bundle. The model runs locally via Ollama inside the container.
# List available models
./container/build.sh --help
# Common choices
./container/build.sh --sovereign --model llama3.2:3b # Fast, lightweight
./container/build.sh --sovereign --model phi3:mini # Strong reasoning
./container/build.sh --sovereign --model mistral:7b # Best quality
./container/build.sh --sovereign --model llama3.1:8b  # Newest Llama
Any model available in the Ollama library can be passed to --model.
Architecture
At runtime, Scout agents follow a four-stage pipeline:
- Monitor: fetch signals
- Research: web search
- Qualify: AI scoring
- Act: execute
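The pipeline above can be sketched in a few lines of Python; the names and call signatures here are illustrative, not Scout's actual internals:

```python
def run_cycle(monitors, research, qualify, actions, min_score=7):
    """One poll cycle: fetch signals, research each, score it, act if qualified."""
    for monitor in monitors:
        for signal in monitor.poll():
            context = research(signal)        # web search
            score = qualify(signal, context)  # AI scoring, 1-10
            if score >= min_score:            # threshold from config.yaml
                for act in actions:
                    act(signal, score)
```

Each configured monitor contributes signals, and every action fires on every qualified signal, matching the "multiple actions can fire per signal" behavior described above.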
Signal deduplication: Every processed signal is tracked in SQLite. Signals are never processed twice.
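A minimal sketch of how such SQLite-backed deduplication can work; the table name and hashing scheme are assumptions, not Scout's actual schema:

```python
import hashlib
import sqlite3

def is_new_signal(conn: sqlite3.Connection, signal_id: str) -> bool:
    """Record the signal's hash; return True only the first time it is seen."""
    conn.execute("CREATE TABLE IF NOT EXISTS seen (hash TEXT PRIMARY KEY)")
    digest = hashlib.sha256(signal_id.encode("utf-8")).hexdigest()
    # INSERT OR IGNORE inserts 0 rows when the hash already exists
    cur = conn.execute("INSERT OR IGNORE INTO seen (hash) VALUES (?)", (digest,))
    conn.commit()
    return cur.rowcount == 1
```

Checking `rowcount` after `INSERT OR IGNORE` makes the seen-check and the insert a single atomic step, so a signal can never slip through twice.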
Trading shortcut: Price feed and technical indicator signals bypass research and qualification, firing directly to trade actions.
Container layers:
| Layer | Standard | Sovereign |
|---|---|---|
| Base OS | Rocky Linux 9 minimal | Rocky Linux 9 minimal |
| Runtime | Python 3.11 | Python 3.11 + Ollama |
| Dependencies | pip packages in /app/vendor | Same + LLM model weights |
| App | Scout engine + config | Same |
| User | Non-root (scout:scout) | Same |
| Image size | ~180MB | ~2-6GB |
Troubleshooting
Container exits immediately
podman logs scout-agent
Usually means missing config.yaml, invalid API keys, or a Python import error. Check the logs for the specific error.
Sovereign: Ollama won't start
The model needs to fit in RAM. A 3B model needs 4GB+, a 7B model needs 8GB+. Try a smaller model:
./container/build.sh --sovereign --model llama3.2:3b
Permission denied on config files
chmod 644 config.yaml .env
Python module not found
If using trading monitors/actions, rebuild with the trading flag:
./container/build.sh --trading
GPU not detected in container
# Verify NVIDIA Container Toolkit is installed
podman run --device nvidia.com/gpu=all --rm nvidia/cuda:12.0-base nvidia-smi
Validate config without running
python3 main.py --validate