Documentation

Build, deploy, and run sovereign AI agents

Overview

Scout is a platform for building autonomous AI agents that monitor data sources, research and qualify signals, and take action — all running on your infrastructure with your API keys.

The platform has three components:

  • Architect: AI-powered agent designer at myagentos.ai
  • Runtime: Python engine with monitors, qualification, and actions
  • Container: Rocky Linux 9 deployment with optional bundled LLM

Every agent you build is fully self-contained — you download the source code, run it on your machine or server, and own everything. No phone-home, no telemetry, no vendor lock-in.

Getting Started

Build Your First Agent

  1. Go to myagentos.ai/create
  2. Enter your LLM API key (Anthropic, OpenAI, or Gemini)
  3. Describe what you want to automate — the Architect will design your agent through conversation
  4. Confirm the configuration when it looks right
  5. Choose your deployment mode (Local Python, Container, or Sovereign)
  6. Download the agent package (.zip)

Quick test: Try "Monitor Hacker News for AI infrastructure posts and log them to console" — it creates a simple RSS agent you can test in under a minute.

API Keys

Scout uses a Bring Your Own Key (BYOK) model. The API key you provide in the Architect powers the design conversation. Your agent also needs keys at runtime — these go in the .env file.

Provider                Key Format         Free Tier
Anthropic               sk-ant-api03-...   Pay per use (~$0.01/signal)
OpenAI                  sk-...             Pay per use (~$0.01/signal)
Gemini                  AI...              Free tier available
Grok (xAI)              xai-...            Pay per use
Custom (Ollama, etc.)   Any                Free (local)

In sovereign mode, you don't need an LLM API key at all — the model runs locally inside the container.

Agent Design

Every Scout agent has three stages: Monitor (detect signals), Qualify (score and research), and Act (take action on qualified signals).

Monitors

Monitors are data sources your agent watches. Each monitor polls on a schedule and produces signals.
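
As an illustration of the monitor stage, here is a minimal sketch of what an RSS monitor could do: fetch the feed, parse it, and emit a signal for each item matching the configured keywords. The function name and signal shape are hypothetical, not Scout's actual internals; only the standard library is used.

```python
import xml.etree.ElementTree as ET

def rss_signals(feed_xml: str, keywords: list[str]) -> list[dict]:
    """Parse RSS XML and return a signal dict for each keyword-matching item."""
    root = ET.fromstring(feed_xml)
    signals = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        # Case-insensitive keyword match against the item title
        if any(k.lower() in title.lower() for k in keywords):
            signals.append({"title": title, "url": link})
    return signals
```

In the real runtime, a scheduler would call something like this every poll_interval seconds and hand the resulting signals to the qualification stage.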

Type                  Source                       Auth Required
rss                   Any RSS/Atom feed            No
imap                  Email inbox (IMAP)           Yes (app password)
slack                 Slack channel                Yes (bot token)
reddit                Subreddit or search          No (public)
github                Repo events, issues          Optional (rate limits)
http_poll             Any HTTP endpoint            Varies
webhook               Incoming webhooks            No
price_feed            Stock/crypto prices          No (via yfinance)
technical_indicator   RSI, MACD, Bollinger, etc.   No
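
To make the technical_indicator type concrete, here is a self-contained sketch of the standard 14-period RSI calculation (Wilder's smoothing) that such a monitor could run over a window of closing prices. This shows the textbook formula, not necessarily Scout's exact implementation.

```python
def rsi(closes: list[float], period: int = 14) -> float:
    """Relative Strength Index over closing prices, using Wilder smoothing."""
    if len(closes) < period + 1:
        raise ValueError(f"need at least {period + 1} closes")
    deltas = [closes[i + 1] - closes[i] for i in range(len(closes) - 1)]
    # Seed the averages from the first `period` deltas
    avg_gain = sum(max(d, 0) for d in deltas[:period]) / period
    avg_loss = sum(max(-d, 0) for d in deltas[:period]) / period
    # Wilder smoothing over the remaining deltas
    for d in deltas[period:]:
        avg_gain = (avg_gain * (period - 1) + max(d, 0)) / period
        avg_loss = (avg_loss * (period - 1) + max(-d, 0)) / period
    if avg_loss == 0:
        return 100.0  # no losses in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

A monitor built on this would emit a signal when RSI crosses a configured threshold (e.g., below 30 for oversold, above 70 for overbought).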

Qualification

When a signal arrives, the agent researches it using web search, then sends it through AI-powered qualification. The qualifier scores each signal 1-10 based on your criteria and decides whether to act.

You set the minimum score threshold (e.g., 7/10) and the criteria in natural language. The Architect generates a detailed qualification prompt tuned to your use case.
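
The threshold check itself is simple. As an illustration, assuming the qualifier model is prompted to reply with JSON such as {"score": 8, "reason": "..."} (an illustrative schema, not Scout's documented wire format), the decision step could look like:

```python
import json

def qualify(llm_response: str, min_score: int = 7) -> tuple[bool, dict]:
    """Parse the qualifier's JSON reply and apply the min_score threshold."""
    result = json.loads(llm_response)
    score = int(result.get("score", 0))  # missing score fails closed
    return score >= min_score, result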

Actions

Actions fire on qualified signals. Multiple actions can fire per signal.

Type            What It Does
email_draft     Generates personalized email draft for review
slack_post      Posts to Slack channel via webhook
webhook_post    POSTs JSON to any URL
console         Prints to stdout (testing)
file_output     Appends to a file
alpaca_trade    Executes stock trades via Alpaca
binance_trade   Executes crypto trades via Binance
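
For example, a webhook_post action boils down to POSTing the qualified signal as JSON. A minimal stdlib-only sketch (function names are illustrative):

```python
import json
import urllib.request

def build_webhook_request(url: str, payload: dict) -> urllib.request.Request:
    """Build a POST request carrying the signal as a JSON body."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def post_signal(url: str, payload: dict) -> int:
    """Send the signal and return the HTTP status code."""
    with urllib.request.urlopen(build_webhook_request(url, payload), timeout=10) as resp:
        return resp.status
```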

Trading Agents

Scout supports autonomous trading with built-in safety rails. Trading agents use price feed and technical indicator monitors that bypass qualification (signals fire directly to trade actions).

Trading agents default to paper/testnet mode. Use --live to enable real trading (requires explicit confirmation). Past signals don't guarantee future performance.

# Paper trading (safe default)
python3 main.py --daemon

# Live trading (asks for confirmation)
python3 main.py --daemon --live

# Force paper mode
python3 main.py --daemon --paper
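
The fail-safe flag handling above can be sketched with argparse: --live and --paper are mutually exclusive, and anything short of an explicit --live resolves to paper mode. This is an illustration of the safe-default pattern, not Scout's actual CLI code.

```python
import argparse

def resolve_trading_mode(argv: list[str]) -> str:
    """Return 'live' only on an explicit --live flag; paper is the safe default."""
    parser = argparse.ArgumentParser(prog="main.py")
    parser.add_argument("--daemon", action="store_true")
    group = parser.add_mutually_exclusive_group()
    group.add_argument("--live", action="store_true")
    group.add_argument("--paper", action="store_true")
    args = parser.parse_args(argv)
    return "live" if args.live else "paper"
```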

Deployment

Three ways to run your agent. All start from the same downloaded zip file.

  • Local Python: run directly with Python 3.10+. Simplest option.
  • Container: Rocky Linux 9, ~180MB. Podman or Docker.
  • Sovereign: air-gapped, bundled LLM. No internet needed.

Local Python

unzip my-agent.zip && cd my-agent
pip install -r requirements.txt
cp .env.example .env
# Add your API keys to .env

# Live dashboard
python3 main.py --dashboard

# Headless
python3 main.py --daemon

# Test one cycle
python3 main.py --once

Rocky Linux Container

Requires Podman (recommended) or Docker.

unzip my-agent.zip && cd my-agent
cp .env.example .env
# Add your API keys to .env

# Build the Rocky Linux container (~180MB)
./container/build.sh

# Run with live dashboard
./container/run.sh --config ./config.yaml --env ./.env

# Run headless in background
./container/run.sh --config ./config.yaml --env ./.env --detach

# Include trading dependencies
./container/build.sh --trading

Fully Sovereign

Bundles Ollama and a local LLM model into the container image. Once built, no internet connection is needed. No API keys for the LLM. No data leaves your machine.

unzip my-agent.zip && cd my-agent

# Build with bundled model (downloads once, ~2-6GB image)
./container/build.sh --sovereign --model llama3.2:3b

# Run — fully offline from this point
./container/run.sh --config ./config.yaml --env ./.env

Model         Size     RAM     Speed (CPU)   Best For
llama3.2:3b   ~2GB     4GB+    ~10 tok/s     Most agents, fast, lightweight
phi3:mini     ~2.3GB   4GB+    ~8 tok/s      Strong reasoning
mistral:7b    ~4GB     8GB+    ~5 tok/s      Best quality on CPU
llama3.1:8b   ~4.7GB   8GB+    ~4 tok/s      Newest Llama
gemma2:9b     ~5.4GB   12GB+   ~3 tok/s      Google's best small model

Cloud Deployment

Deploy your container to any cloud provider. You don't need your own hardware.

CPU VPS ($5-10/month)

Best for most agents. A 3B sovereign model runs fine on CPU. Works on Hetzner, DigitalOcean, Vultr, Linode.

# Build locally
./container/build.sh --sovereign --model llama3.2:3b

# Save to file and copy to VPS
podman save scout-agent:sovereign -o scout-agent.tar
scp scout-agent.tar config.yaml .env user@your-vps:~/

# On the VPS
ssh user@your-vps
podman load -i scout-agent.tar
podman run -d --name scout-agent --restart=always \
    -v ~/config.yaml:/app/config.yaml:ro \
    -v ~/.env:/app/.env:ro \
    -v ~/data:/app/data \
    scout-agent:sovereign

GPU Cloud

For larger models (7B+) or low-latency needs. Lambda Labs ($0.80/hr), RunPod ($0.39/hr), Vast.ai ($0.15/hr).

# Push to container registry
podman push scout-agent:sovereign ghcr.io/yourname/scout-agent:sovereign

# On GPU instance
podman pull ghcr.io/yourname/scout-agent:sovereign
podman run -d --name scout-agent \
    --device nvidia.com/gpu=all \
    -v ./config.yaml:/app/config.yaml:ro \
    scout-agent:sovereign

Air-Gapped Deployment

For classified or disconnected environments. Build on an internet-connected machine, transport via secure media.

# On internet-connected machine:
./container/build.sh --sovereign --model llama3.2:3b
podman save scout-agent:sovereign -o scout-agent.tar
# Copy scout-agent.tar + config.yaml + .env to USB drive

# On air-gapped machine:
podman load -i scout-agent.tar
podman run -d --name scout-agent \
    -v ./config.yaml:/app/config.yaml:ro \
    -v ./.env:/app/.env:ro \
    scout-agent:sovereign

No internet at any point on the target machine. The LLM, Python runtime, and all dependencies are baked into the image.

Dashboard

Scout includes a live terminal dashboard built with Rich. It shows agent status, activity log, stats, recent drafts, and monitor/action configuration — all updating in real-time.

# Local
python3 main.py --dashboard

# Container (default when not using --detach)
./container/run.sh --config ./config.yaml --env ./.env

The dashboard shows:

  • Agent status (running/idle), uptime, and poll countdown
  • Signals found, qualified, drafted, and errors per cycle
  • Live activity log with timestamps
  • Recent drafts with scores and review status
  • Configured monitors and actions

Press Ctrl+C to cleanly shut down the agent.

Draft Review

When an agent qualifies a signal and generates an email draft or other action output, it saves the draft for review. Use the CLI to manage drafts.

# List pending drafts
python3 main.py review list

# View a specific draft
python3 main.py review show 1

# Approve a draft
python3 main.py review approve 1

# Reject a draft
python3 main.py review reject 1

# Show stats
python3 main.py review stats

Configuration

config.yaml

The agent's configuration file. Generated by the Architect, but fully editable. Key fields:

name: my-agent
description: Monitors HN for AI news
poll_interval: 300  # seconds between polls
llm_provider: anthropic  # or openai, gemini, grok, custom
llm_model: claude-sonnet-4-20250514
research_provider: brave  # or tavily, perplexity

qualification:
  min_score: 7
  criteria: "Relevant to AI infrastructure..."

monitors:
  - name: hn-feed
    type: rss
    url: https://news.ycombinator.com/rss
    keywords: ["AI", "GPU", "infrastructure"]

actions:
  - name: console-log
    type: console
    notify_on: qualified
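
Since the file is fully editable, it is worth sanity-checking before a long daemon run. A minimal sketch of the checks a validator might perform on the parsed config dict (the required-field list and the 10-second poll floor are illustrative assumptions, not documented limits):

```python
def validate_config(cfg: dict) -> list[str]:
    """Return a list of human-readable problems; empty means the config looks sane."""
    errors = []
    for key in ("name", "llm_provider", "monitors", "actions"):
        if key not in cfg:
            errors.append(f"missing required field: {key}")
    if cfg.get("poll_interval", 300) < 10:
        errors.append("poll_interval must be at least 10 seconds")
    for mon in cfg.get("monitors", []):
        if "type" not in mon:
            errors.append(f"monitor {mon.get('name', '?')} has no type")
    return errors
```

The bundled equivalent is `python3 main.py --validate` (see Troubleshooting).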

Environment Variables

Variable            Standard         Sovereign        Description
ANTHROPIC_API_KEY   Required         Not needed       Claude API key
OPENAI_API_KEY      Required         Not needed       OpenAI API key
LLM_BASE_URL        Optional         Pre-configured   Custom LLM endpoint
LLM_API_KEY         Optional         Pre-configured   Custom LLM key
BRAVE_API_KEY       Optional         Optional         Web search for research
SLACK_BOT_TOKEN     If using Slack   If using Slack   Slack bot token
ALPACA_API_KEY      If trading       If trading       Alpaca trading key
ALPACA_API_SECRET   If trading       If trading       Alpaca trading secret
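
The .env format itself is just KEY=VALUE lines with # comments. A stdlib-only sketch of how such a file could be parsed (illustrative; the runtime may use a dotenv library instead):

```python
def parse_env(text: str) -> dict[str, str]:
    """Parse KEY=VALUE lines, skipping comments, blanks, and malformed lines."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Strip optional surrounding quotes from the value
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env
```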

Sovereign Models

When building in sovereign mode, you choose which model to bundle. The model runs locally via Ollama inside the container.

# List available models
./container/build.sh --help

# Common choices
./container/build.sh --sovereign --model llama3.2:3b      # Fast, lightweight
./container/build.sh --sovereign --model phi3:mini         # Strong reasoning
./container/build.sh --sovereign --model mistral:7b        # Best quality
./container/build.sh --sovereign --model llama3.1:8b       # Newest

Any model available on ollama.com/library works. Just pass the model name to --model.

Architecture

Scout agents follow a four-stage pipeline:

  1. Monitor: fetch signals
  2. Research: web search
  3. Qualify: AI scoring
  4. Act: execute

Signal deduplication: Every processed signal is tracked in SQLite. Signals are never processed twice.
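
A minimal sketch of this dedup pattern, assuming signals are keyed by a hash of their URL (the table name and key choice are illustrative, not Scout's actual schema): the primary-key constraint makes the seen-check and the record atomic.

```python
import hashlib
import sqlite3

def seen_before(db: sqlite3.Connection, signal_url: str) -> bool:
    """Return True if this signal was already processed; otherwise record it."""
    db.execute("CREATE TABLE IF NOT EXISTS seen (id TEXT PRIMARY KEY)")
    sig_id = hashlib.sha256(signal_url.encode()).hexdigest()
    try:
        db.execute("INSERT INTO seen (id) VALUES (?)", (sig_id,))
        db.commit()
        return False  # first time we see this signal
    except sqlite3.IntegrityError:
        return True   # primary key collision: already processed
```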

Trading shortcut: Price feed and technical indicator signals bypass research and qualification, firing directly to trade actions.

Container layers:

Layer          Standard                      Sovereign
Base OS        Rocky Linux 9 minimal         Rocky Linux 9 minimal
Runtime        Python 3.11                   Python 3.11 + Ollama
Dependencies   pip packages in /app/vendor   Same + LLM model weights
App            Scout engine + config         Same
User           Non-root (scout:scout)        Same
Image size     ~180MB                        ~2-6GB

Troubleshooting

Container exits immediately

podman logs scout-agent

Usually means missing config.yaml, invalid API keys, or a Python import error. Check the logs for the specific error.

Sovereign: Ollama won't start

The model needs to fit in RAM. A 3B model needs 4GB+, a 7B model needs 8GB+. Try a smaller model:

./container/build.sh --sovereign --model llama3.2:3b

Permission denied on config files

chmod 644 config.yaml .env

Python module not found

If using trading monitors/actions, rebuild with the trading flag:

./container/build.sh --trading

GPU not detected in container

# Verify NVIDIA Container Toolkit is installed
podman run --device nvidia.com/gpu=all --rm nvidia/cuda:12.0-base nvidia-smi

Validate config without running

python3 main.py --validate

Sovereign AI agents. Built by you. Owned by you.