Firefly Forage
Isolated, ephemeral sandboxes for AI coding agents on NixOS.
Firefly Forage is a NixOS module that creates lightweight, isolated environments for running AI coding assistants like Claude Code. Each sandbox is a systemd-nspawn container with:
- Shared nix store - Read-only bind mount, no duplication
- Ephemeral root - Fresh state on every reset
- Persistent workspace - Your project files survive restarts
- Auth obfuscation - API keys injected at runtime, not visible in environment
Why Forage?
AI coding agents are powerful but unpredictable. They can:
- Install packages you didn’t ask for
- Modify system configuration
- Accumulate cruft over long sessions
- Potentially exfiltrate sensitive data
Forage addresses these concerns by running agents in disposable containers. When things go wrong, just reset the sandbox and start fresh.
Key Features
Multi-Agent Support
Run multiple sandboxes simultaneously, each with its own:
- SSH port for direct access
- Tmux session for persistence
- Workspace bind mount
JJ Workspace Integration
Create multiple sandboxes working on the same repository using Jujutsu workspaces. Each agent gets an isolated working copy while sharing the repository’s history.
# Two agents working on the same repo in parallel
forage-ctl up agent-a --template claude --repo ~/projects/myrepo
forage-ctl up agent-b --template claude --repo ~/projects/myrepo
Composable Workspace Mounts
Assemble a sandbox’s filesystem from multiple sources — mount multiple repos, overlay branches, and mix VCS-backed and literal bind mounts:
# Template mounts: main workspace + beads overlay + named data repo
forage-ctl up dev -t claude-beads --repo ~/projects/myrepo --repo data=~/datasets
Nix Store Efficiency
Sandboxes share the host’s /nix/store read-only. When an agent runs nix shell nixpkgs#ripgrep, the build happens on the host via the nix daemon socket—no duplication, instant availability.
Template System
Define sandbox configurations declaratively in your NixOS config:
templates.claude = {
description = "Claude Code sandbox";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
extraPackages = with pkgs; [ ripgrep fd jq ];
network = "full";
};
Quick Example
# Create a sandbox for your project
forage-ctl up myproject -t claude -w ~/projects/myproject
# Connect and start working
forage-ctl ssh myproject
# Inside the sandbox, claude is ready to use
claude
# When done, clean up
forage-ctl down myproject
Requirements
- NixOS (tested on 24.11+)
- systemd-nspawn (included in NixOS)
- extra-container (managed by the module)
Status
Firefly Forage has completed all planned implementation phases:
- Phases 1-3: Basic sandboxing, JJ workspaces, UX improvements
- Phase 4: Go rewrite of forage-ctl
- Phase 5: Gateway & interactive picker
- Phase 6: Network isolation modes
- Phase 7: API proxy for auth injection
- Phase 8: Git worktree backend
- Phase 9: Multi-runtime support (nspawn, Docker, Podman, Apple Container)
See the DESIGN.md for architecture details.
Installation
Firefly Forage is distributed as a Nix flake. Add it to your NixOS configuration to get started.
Add the Flake Input
In your flake.nix:
{
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
firefly-forage = {
url = "github:firefly-engineering/firefly-forage";
inputs.nixpkgs.follows = "nixpkgs";
};
};
outputs = { self, nixpkgs, firefly-forage, ... }: {
nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
system = "x86_64-linux";
modules = [
./configuration.nix
firefly-forage.nixosModules.default
];
};
};
}
Import the Module
If you added firefly-forage.nixosModules.default to your modules list (as above), the module is already imported. Alternatively, import it from within a module:
{ inputs, ... }:
{
imports = [ inputs.firefly-forage.nixosModules.default ];
}
Enable the Service
Add basic configuration to enable Forage:
{ config, pkgs, ... }:
{
services.firefly-forage = {
enable = true;
user = "myuser"; # Your username
authorizedKeys = config.users.users.myuser.openssh.authorizedKeys.keys;
};
}
Rebuild
Apply the configuration:
sudo nixos-rebuild switch --flake .#myhost
After rebuilding, the forage-ctl command will be available system-wide.
Verify Installation
# Should show help
forage-ctl --help
# Should show no templates yet
forage-ctl templates
Next Steps
Now configure your first template to define what agents and packages your sandboxes will include.
Configuration
Forage is configured through your NixOS configuration. This page covers all available options.
Minimal Configuration
services.firefly-forage = {
enable = true;
user = "myuser";
authorizedKeys = [ "ssh-ed25519 AAAA..." ];
secrets = {
anthropic = "/run/secrets/anthropic-api-key";
};
templates.claude = {
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
};
};
Full Configuration Reference
Top-Level Options
enable
Whether to enable Firefly Forage.
services.firefly-forage.enable = true;
user
The host user whose UID/GID will be used inside sandboxes. This ensures files created in the workspace have correct ownership.
services.firefly-forage.user = "myuser";
authorizedKeys
SSH public keys that can access sandboxes. Typically you’ll use the same keys as your user account:
services.firefly-forage.authorizedKeys =
config.users.users.myuser.openssh.authorizedKeys.keys;
portRange
Port range for sandbox SSH servers. Each sandbox gets one port from this range.
services.firefly-forage.portRange = {
from = 2200; # default
to = 2299; # default
};
stateDir
Directory for Forage state (sandbox metadata, JJ workspaces).
services.firefly-forage.stateDir = "/var/lib/firefly-forage"; # default
Secrets
Map secret names to file paths containing API keys:
services.firefly-forage.secrets = {
anthropic = "/run/secrets/anthropic-api-key";
openai = "/run/secrets/openai-api-key";
};
With sops-nix:
services.firefly-forage.secrets = {
anthropic = config.sops.secrets.anthropic-api-key.path;
};
With agenix:
services.firefly-forage.secrets = {
anthropic = config.age.secrets.anthropic-api-key.path;
};
Templates
Templates define sandbox configurations that can be instantiated multiple times.
Basic Template
services.firefly-forage.templates.claude = {
description = "Claude Code sandbox";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
};
Template with Extra Packages
services.firefly-forage.templates.claude = {
description = "Claude Code with dev tools";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
extraPackages = with pkgs; [
ripgrep
fd
jq
tree
htop
];
};
Multi-Agent Template
services.firefly-forage.templates.multi = {
description = "Multiple AI agents";
agents = {
claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
aider = {
package = pkgs.aider;
secretName = "openai";
authEnvVar = "OPENAI_API_KEY";
};
};
extraPackages = with pkgs; [ ripgrep fd ];
};
Host Config Directory Mounting
Mount host configuration directories into sandboxes for persistent authentication. This is useful for agents like Claude Code that store credentials in ~/.claude/:
services.firefly-forage.templates.claude = {
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
hostConfigDir = "~/.claude"; # mounts to /home/agent/.claude
};
};
Options:
- hostConfigDir: Host directory to mount (supports ~ expansion)
- containerConfigDir: Override the container mount point (default: /home/agent/.<dirname>)
- hostConfigDirReadOnly: Mount as read-only (default: false, to allow token refresh)
Example with all options:
services.firefly-forage.templates.claude = {
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
hostConfigDir = "~/.claude";
containerConfigDir = "/home/agent/.claude"; # explicit path
hostConfigDirReadOnly = false; # allow writing (default)
};
};
Agent Permissions
Control what agents can do without prompting. Permissions are written to a settings file and bind-mounted read-only into the container.
Full autonomy — skip all permission prompts:
services.firefly-forage.templates.claude-auto = {
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
permissions.skipAll = true;
};
};
Granular allowlist — approve specific tools/patterns:
services.firefly-forage.templates.claude-restricted = {
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
permissions = {
allow = [ "Read" "Glob" "Grep" "Edit(src/**)" "Bash(npm run *)" ];
deny = [ "Bash(rm -rf *)" ];
};
};
};
Options:
- permissions.skipAll: Bypass all permission checks (cannot be combined with allow/deny)
- permissions.allow: Rules to auto-approve (agent-specific format)
- permissions.deny: Rules to always block
For Claude, this generates /etc/claude-code/managed-settings.json in the container (managed scope — highest precedence). Permissions and hostConfigDir can coexist — they target different paths.
Workspace Mounts
Templates can define composable workspace mounts — multiple mount points from different sources:
services.firefly-forage.templates.multi-mount = {
description = "Multi-mount workspace";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
workspace.mounts = {
main = {
containerPath = "/workspace";
mode = "jj";
};
docs = {
containerPath = "/workspace/docs";
hostPath = "~/shared-docs";
readOnly = true;
};
};
};
When workspace.mounts is set, the --repo flag becomes optional. See Workspace Mounts for the full guide.
The workspace.useBeads shorthand overlays a beads workspace:
workspace.useBeads = {
enable = true;
package = pkgs.beads;
# branch = "beads-sync"; # default
# containerPath = "/workspace/.beads"; # default
};
Network Modes
Control network access for sandboxes:
services.firefly-forage.templates = {
# Full internet access (default)
claude = {
network = "full";
# ...
};
# No network access (air-gapped)
isolated = {
network = "none";
# ...
};
# Restricted to specific hosts
restricted = {
network = "restricted";
allowedHosts = [ "api.anthropic.com" "api.openai.com" ];
# ...
};
};
You can also change network modes at runtime using forage-ctl network.
Complete Example
{ config, pkgs, ... }:
{
services.firefly-forage = {
enable = true;
user = "developer";
authorizedKeys = config.users.users.developer.openssh.authorizedKeys.keys;
portRange = {
from = 2200;
to = 2250;
};
secrets = {
anthropic = config.sops.secrets.anthropic-api-key.path;
openai = config.sops.secrets.openai-api-key.path;
};
templates = {
claude = {
description = "Claude Code for general development";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
extraPackages = with pkgs; [ ripgrep fd jq yq tree ];
network = "full";
};
claude-auto = {
description = "Claude Code with full autonomy";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
permissions.skipAll = true;
};
};
claude-isolated = {
description = "Claude Code without network";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
network = "none";
};
};
};
}
Next Steps
With configuration in place, create your first sandbox.
First Sandbox
This guide walks you through creating and using your first Forage sandbox.
Prerequisites
- Forage is installed and configured
- You have at least one template defined
- Your API key secrets are in place
List Available Templates
First, see what templates are available:
forage-ctl templates
Output:
TEMPLATE AGENTS NETWORK DESCRIPTION
claude claude full Claude Code for general development
claude-isolated claude none Claude Code without network
Create a Sandbox
Create a sandbox bound to a project directory:
forage-ctl up myproject --template claude --repo ~/projects/myproject --direct
The --direct flag mounts the directory directly without VCS isolation. If your project is a JJ or Git repository and you omit --direct, Forage will automatically create an isolated workspace.
You’ll see output like:
ℹ Creating sandbox 'myproject' from template 'claude'
ℹ Mode: direct
ℹ Workspace: /home/user/projects/myproject → /workspace
ℹ SSH port: 2200
ℹ Network slot: 1 (IP: 192.168.100.11)
ℹ Creating container with extra-container...
ℹ Waiting for SSH to become available on port 2200...
✓ Sandbox 'myproject' created successfully
ℹ Connect with: forage-ctl ssh myproject
Connect to the Sandbox
SSH into the sandbox:
forage-ctl ssh myproject
This attaches to a tmux session inside the container. You’ll land in /workspace, which is your project directory.
Use the Agent
Inside the sandbox, the configured agent is ready to use:
# Start Claude Code
claude
# Or run a one-off command
claude "explain this codebase"
The agent has access to:
- Your project files in /workspace
- Tools specified in extraPackages
- Any nix package via nix run nixpkgs#<package>
Tmux Basics
The sandbox uses tmux for session persistence:
- Detach: Ctrl-b d (leaves agent running)
- Reattach: forage-ctl ssh myproject
- New window: Ctrl-b c
- Switch windows: Ctrl-b n / Ctrl-b p
- Scrollback: Ctrl-b [ then arrow keys, q to exit
Check Sandbox Status
List running sandboxes:
forage-ctl ps
Output:
NAME TEMPLATE PORT MODE WORKSPACE STATUS
myproject claude 2200 direct /home/user/projects/myproject ✓ healthy
Reset if Needed
If the sandbox gets into a bad state, reset it:
forage-ctl reset myproject
This destroys and recreates the container while preserving:
- Your workspace files
- The sandbox configuration
Clean Up
When done, remove the sandbox:
forage-ctl down myproject
This:
- Stops the container
- Removes secrets
- Cleans up metadata
- Removes injected skill files from workspace
Next Steps
- Learn about workspace mounts for composable multi-mount sandboxes
- Learn about JJ workspaces for parallel agent work
- See the full CLI reference
- Understand skill injection
CLI Reference
Complete reference for the forage-ctl command-line tool.
Synopsis
forage-ctl <command> [options]
Commands
templates
List available sandbox templates.
forage-ctl templates
Output:
TEMPLATE AGENTS NETWORK DESCRIPTION
claude claude full Claude Code sandbox
multi claude,aider full Multi-agent sandbox
up
Create and start a sandbox.
forage-ctl up <name> --template <template> [--repo <path>] [options]
Arguments:
| Argument | Description |
|---|---|
| <name> | Unique name for the sandbox |
Options:
| Option | Description |
|---|---|
| --template, -t <name> | Template to use (required) |
| --repo, -r <path> | Repository or directory path (repeatable, see below) |
| --direct | Mount directory directly, skipping VCS isolation |
| --ssh-key <key> | SSH public key for sandbox access (can be repeated) |
| --ssh-key-path <path> | Path to SSH private key for agent push access |
| --git-user <name> | Git user.name for agent commits |
| --git-email <email> | Git user.email for agent commits |
| --no-mux-config | Don’t mount host multiplexer config into sandbox |
--repo Flag:
The --repo flag is repeatable and supports named parameters:
--repo <path> # default (unnamed) repo
--repo <name>=<path> # named repo
When the template defines workspace.mounts, mounts reference repos by name. --repo is not required if every mount specifies hostPath or an absolute repo path. See Workspace Mounts for details.
Workspace Modes:
The workspace mode is determined automatically based on the --repo path and flags:
| Mode | Condition | Behavior |
|---|---|---|
| Direct | --direct flag used | Mounts directory directly at /workspace |
| JJ workspace | Path contains .jj/ directory | Creates isolated JJ workspace |
| Git worktree | Path contains .git/ directory | Creates git worktree with branch forage-<name> |
Examples:
# Direct mount (no VCS isolation)
forage-ctl up myproject -t claude --repo ~/projects/myproject --direct
# JJ workspace (auto-detected, creates isolated working copy)
forage-ctl up agent-a -t claude --repo ~/projects/jj-repo
# Git worktree (auto-detected, creates isolated worktree)
forage-ctl up agent-b -t claude --repo ~/projects/git-repo
# With SSH key for push access
forage-ctl up myproject -t claude --repo ~/projects/myrepo --ssh-key-path ~/.ssh/id_ed25519
# With git identity for commits
forage-ctl up myproject -t claude --repo ~/projects/myrepo --git-user "Agent" --git-email "agent@example.com"
# Named repos for multi-mount templates
forage-ctl up dev -t monorepo --repo ~/main-project --repo data=~/datasets
# No --repo when template specifies all paths
forage-ctl up dev -t self-contained
down
Stop and remove a sandbox.
forage-ctl down <name>
Arguments:
| Argument | Description |
|---|---|
| <name> | Name of the sandbox to remove |
Example:
forage-ctl down myproject
Cleanup performed:
- Stops and destroys the container
- Removes secrets from /var/lib/forage/secrets/<name>/
- For each VCS-backed mount: removes the workspace/worktree via the appropriate VCS command
- For literal bind mounts (hostPath): no cleanup (host directory untouched)
- Removes managed workspace subdirectories
- Removes skills file and container configuration
- Deletes sandbox metadata
ps
List sandboxes with health status.
forage-ctl ps
Output:
NAME TEMPLATE PORT MODE WORKSPACE STATUS
myproject claude 2200 direct /home/user/projects/myproj ✓ healthy
agent-a claude 2201 jj ...forage/workspaces/agent-a ✓ healthy
agent-b claude 2202 git-worktree ...forage/workspaces/agent-b ● stopped
Columns:
| Column | Description |
|---|---|
| NAME | Sandbox name |
| TEMPLATE | Template used |
| PORT | SSH port |
| MODE | direct (direct mount), jj (JJ workspace), or git-worktree (git worktree) |
| WORKSPACE | Path mounted at /workspace |
| STATUS | Health status (see below) |
Status values:
| Status | Description |
|---|---|
| ✓ healthy | Container running, SSH reachable, tmux session active |
| ⚠ unhealthy | Container running but SSH not reachable |
| ○ no-tmux | Container running, SSH works, but no tmux session |
| ● stopped | Container not running |
status
Show detailed sandbox status and health information.
forage-ctl status <name>
Arguments:
| Argument | Description |
|---|---|
| <name> | Name of the sandbox |
Example output:
Sandbox: myproject
========================================
Configuration:
Template: claude
Workspace: /home/user/projects/myproject
Mode: direct
SSH Port: 2200
Container IP: 192.168.100.11
Created: 2024-01-15T10:30:00+00:00
Container Status:
Running: yes
Uptime: 2h 30m
Health Checks:
SSH: reachable
Tmux Session: active
Tmux Windows:
- 0:bash
- 1:claude
Connect:
forage-ctl ssh myproject
ssh -p 2200 agent@localhost
Use this command for debugging connectivity issues or checking sandbox health.
ssh
Connect to a sandbox via SSH, attaching to the tmux session.
forage-ctl ssh <name>
Arguments:
| Argument | Description |
|---|---|
| <name> | Name of the sandbox |
This runs:
ssh -p <port> -t agent@localhost 'tmux attach -t forage || tmux new -s forage'
Tmux controls:
- Detach: Ctrl-b d
- New window: Ctrl-b c
- Next/prev window: Ctrl-b n / Ctrl-b p
exec
Execute a command inside a sandbox.
forage-ctl exec <name> -- <command>
Arguments:
| Argument | Description |
|---|---|
| <name> | Name of the sandbox |
| <command> | Command to execute |
Examples:
# Check agent version
forage-ctl exec myproject -- claude --version
# Run a script
forage-ctl exec myproject -- bash -c 'cd /workspace && ./build.sh'
# List files
forage-ctl exec myproject -- ls -la /workspace
start
Start an agent in the sandbox’s tmux session.
forage-ctl start <name> [agent]
Arguments:
| Argument | Description |
|---|---|
| <name> | Name of the sandbox |
| [agent] | Agent to start (optional, defaults to first agent in template) |
Examples:
# Start the default agent
forage-ctl start myproject
# Start a specific agent (in multi-agent templates)
forage-ctl start myproject claude
forage-ctl start myproject aider
This sends the agent command to the existing tmux session. Use forage-ctl ssh to attach and interact with the agent.
shell
Open a shell in a new tmux window.
forage-ctl shell <name>
Arguments:
| Argument | Description |
|---|---|
| <name> | Name of the sandbox |
This creates a new tmux window in the sandbox’s session and attaches to it. Useful for running commands alongside a running agent.
Tmux window navigation:
- Switch windows: Ctrl-b n (next) / Ctrl-b p (previous)
- List windows: Ctrl-b w
- Close window: exit or Ctrl-d
logs
Show container logs.
forage-ctl logs <name> [-f] [-n <lines>]
Arguments:
| Argument | Description |
|---|---|
| <name> | Name of the sandbox |
Options:
| Option | Description |
|---|---|
| -f, --follow | Follow log output (like tail -f) |
| -n, --lines <n> | Number of lines to show (default: 100) |
Examples:
# Show last 100 lines
forage-ctl logs myproject
# Follow logs in real-time
forage-ctl logs myproject -f
# Show last 500 lines
forage-ctl logs myproject -n 500
This uses journalctl to show logs from the container’s systemd services (sshd, tmux, etc.).
reset
Reset a sandbox to fresh state.
forage-ctl reset <name>
Arguments:
| Argument | Description |
|---|---|
| <name> | Name of the sandbox |
This destroys and recreates the container while preserving:
- Workspace files
- Sandbox configuration (template, port, network slot)
- JJ workspace association (if applicable)
Use this when:
- The container is in a bad state
- You want a fresh environment
- The agent has polluted the container filesystem
network
Change sandbox network isolation mode.
forage-ctl network <name> <mode> [--allow <host>...] [--no-restart]
Arguments:
| Argument | Description |
|---|---|
| <name> | Name of the sandbox |
| <mode> | Network mode: full, restricted, or none |
Options:
| Option | Description |
|---|---|
| --allow <host> | Additional hosts to allow (restricted mode only) |
| --no-restart | Don’t restart sandbox (changes won’t take effect until reset) |
Modes:
| Mode | Description |
|---|---|
| full | Unrestricted internet access (default) |
| restricted | Only allowed hosts can be accessed |
| none | No network access except SSH for management |
Examples:
# Switch to no network
forage-ctl network myproject none
# Switch to restricted with allowed hosts
forage-ctl network myproject restricted --allow api.anthropic.com
gateway
Interactive sandbox selector (gateway mode).
forage-ctl gateway [sandbox-name]
If a sandbox name is provided, connects directly. Otherwise, presents an interactive picker.
This command is designed to be used as a login shell for SSH access, providing a single entry point to all sandboxes.
pick
Interactive sandbox picker.
forage-ctl pick
Opens a TUI for selecting and connecting to sandboxes.
Controls:
- Arrow keys or j/k to navigate
- / to filter
- Enter to connect
- n to show new sandbox instructions
- d to show remove instructions
- q or Esc to quit
proxy
Start the API proxy server.
forage-ctl proxy [--port <port>] [--host <host>]
Starts an HTTP proxy that injects API keys into requests. Used for sandboxes that need auth injection without storing secrets in the container.
runtime
Show container runtime information.
forage-ctl runtime
Displays the active container runtime and lists available runtimes on the system.
Supported runtimes:
- nspawn: NixOS (systemd-nspawn via extra-container)
- apple: macOS 13+ (Apple Virtualization.framework)
- podman: Linux, macOS (rootless preferred)
- docker: Linux, macOS, Windows
gc
Garbage collect orphaned sandbox resources.
forage-ctl gc [--force]
Options:
| Option | Description |
|---|---|
| --force | Actually remove orphaned resources (default is dry run) |
This command reconciles disk state with runtime state and removes orphaned resources. Without --force, it performs a dry run showing what would be cleaned.
Detects:
| Type | Description |
|---|---|
| Orphaned files | Sandbox files on disk with no matching container |
| Orphaned containers | Containers in runtime with no matching metadata on disk |
| Stale metadata | Metadata files for sandboxes whose container no longer exists |
Examples:
# Dry run - show what would be cleaned
forage-ctl gc
# Actually clean up orphaned resources
forage-ctl gc --force
Use cases:
- After a system crash that left containers in an inconsistent state
- When manual cleanup left orphaned files
- Periodic maintenance to reclaim disk space
help
Show help message.
forage-ctl help
forage-ctl --help
forage-ctl -h
Exit Codes
| Code | Meaning |
|---|---|
| 0 | Success |
| 1 | General error |
| 2 | Sandbox not found |
| 3 | Template not found |
| 4 | Port/slot allocation failed |
| 5 | Container operation failed |
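Scripts wrapping forage-ctl can branch on these codes. The explain_exit helper below is a hypothetical sketch built from the table above, not part of forage-ctl:

```shell
# Hypothetical helper: map a forage-ctl exit code to a human-readable message.
explain_exit() {
  case "$1" in
    0) echo "success" ;;
    1) echo "general error" ;;
    2) echo "sandbox not found" ;;
    3) echo "template not found" ;;
    4) echo "port/slot allocation failed" ;;
    5) echo "container operation failed" ;;
    *) echo "unknown exit code $1" ;;
  esac
}
explain_exit 3   # → template not found
```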
Environment Variables
| Variable | Default | Description |
|---|---|---|
| FORAGE_CONFIG_DIR | /etc/firefly-forage | Configuration directory |
| FORAGE_STATE_DIR | /var/lib/firefly-forage | State directory |
Workspace Mounts
Forage supports composable workspace mounts, allowing you to assemble a sandbox’s filesystem from multiple sources. Instead of a single --repo mapped to /workspace, you can mount multiple repositories, overlay branches, and mix VCS-backed and literal bind mounts.
Overview
The traditional single-workspace model mounts one directory at /workspace:
forage-ctl up myproject -t claude --repo ~/projects/myrepo
With composable mounts, a template can declare multiple mount points:
/workspace ← jj workspace from ~/projects/myrepo
/workspace/.beads ← jj workspace (branch beads-sync) from same repo
/workspace/data ← direct bind mount from ~/datasets
No mount is special-cased as “root” — you could have /workspace/proj1 and /workspace/proj2 with nothing at /workspace itself.
Configuring Mounts in Templates
Mounts are declared in your NixOS configuration under workspace.mounts. Each mount is keyed by a stable name:
services.firefly-forage.templates.my-template = {
agents.claude = { ... };
workspace.mounts = {
main = {
containerPath = "/workspace";
mode = "jj";
# repo = null → uses default --repo from CLI
};
data = {
containerPath = "/workspace/data";
repo = "data"; # references --repo data=<path>
mode = "git-worktree";
};
config = {
containerPath = "/workspace/.config";
hostPath = "~/shared-config"; # literal bind mount
readOnly = true;
};
};
};
Mount Options
| Option | Type | Default | Description |
|---|---|---|---|
| containerPath | string | (required) | Mount point inside the container |
| hostPath | string or null | null | Literal host path for bind mount. Mutually exclusive with repo. |
| repo | string or null | null | Repo reference (see Repo Resolution) |
| mode | "jj", "git-worktree", "direct", or null | null (auto-detect) | VCS mode for repo-backed mounts |
| branch | string or null | null | Branch/ref to check out (VCS mounts only) |
| readOnly | bool | false | Mount as read-only |
Repo Resolution
The repo field controls where a mount’s source comes from:
| Value | Behavior |
|---|---|
| null or "" | Uses the default (unnamed) --repo value from CLI |
| "<name>" | Looks up the named repo from --repo <name>=<path> |
| "/absolute/path" | Literal path, no CLI lookup needed |
When a mount specifies hostPath instead of repo, it becomes a direct bind mount — no VCS workspace is created.
Named Repo Parameters
The --repo flag supports both unnamed (default) and named parameters:
# Default repo (used by mounts with repo = null)
forage-ctl up mysandbox -t my-template --repo ~/projects/myrepo
# Default repo + named repo
forage-ctl up mysandbox -t my-template \
--repo ~/projects/myrepo \
--repo data=~/datasets/my-data
# Multiple named repos (no default)
forage-ctl up mysandbox -t my-template \
--repo main=~/projects/myrepo \
--repo data=~/datasets/my-data
The --repo flag is repeatable. Values containing = are parsed as name=path; values without = set the default repo.
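The parsing rule can be sketched in shell; parse_repo is an illustration of the rule, not a forage-ctl internal:

```shell
# Sketch of the name=path parsing rule for --repo values.
parse_repo() {
  case "$1" in
    *=*) printf '%s %s\n' "${1%%=*}" "${1#*=}" ;;  # named: name=path
    *)   printf 'default %s\n' "$1" ;;             # unnamed: sets the default repo
  esac
}
parse_repo 'data=~/datasets/my-data'   # → data ~/datasets/my-data
parse_repo '~/projects/myrepo'         # → default ~/projects/myrepo
```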
When --repo Is Optional
If every mount in the template specifies either hostPath or an absolute repo path, the --repo flag is not required:
workspace.mounts = {
project = {
containerPath = "/workspace";
repo = "/home/user/projects/myrepo"; # absolute path
};
config = {
containerPath = "/workspace/.config";
hostPath = "/etc/shared-config"; # literal bind mount
};
};
# No --repo needed
forage-ctl up mysandbox -t self-contained
Backward Compatibility
Templates without workspace.mounts behave exactly as before — --repo creates a single auto-detected mount at the configured workspace path. All existing workflows continue to work unchanged.
# This still works identically to before
forage-ctl up myproject -t claude --repo ~/projects/myrepo
forage-ctl up myproject -t claude --repo ~/projects/myrepo --direct
VCS Mode Behavior
Each repo-backed mount gets its own VCS workspace:
| Mode | What Happens |
|---|---|
| jj | Creates a JJ workspace at the managed path. If branch is set, checks out that branch. |
| git-worktree | Creates a git worktree with branch forage-<sandbox>-<mount>. |
| direct | Bind mounts the repo path directly (no workspace isolation). |
| null (auto-detect) | Detects .jj/ → jj, .git/ → git-worktree, otherwise → direct. |
Managed workspace directories are created under /var/lib/firefly-forage/workspaces/<sandbox>/<mount-name>/, one subdirectory per VCS-backed mount.
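Assuming detection works as the table describes (.jj/ checked before .git/), the auto-detect rule can be sketched as:

```shell
# Sketch of the auto-detect rule: .jj/ wins, then .git/, else direct.
detect_mode() {
  if   [ -d "$1/.jj" ];  then echo jj
  elif [ -d "$1/.git" ]; then echo git-worktree
  else echo direct
  fi
}
repo=$(mktemp -d) && mkdir "$repo/.jj"
detect_mode "$repo"          # → jj
detect_mode "$(mktemp -d)"   # → direct
```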
useBeads Convenience Option
The workspace.useBeads option provides a shorthand for a common pattern — overlaying a beads workspace:
services.firefly-forage.templates.with-beads = {
agents.claude = { ... };
workspace.mounts.main = {
containerPath = "/workspace";
mode = "jj";
};
workspace.useBeads = {
enable = true;
branch = "beads-sync"; # default
containerPath = "/workspace/.beads"; # default
package = pkgs.beads; # added to extraPackages
# repo = null; # null → inherits default --repo
};
};
When useBeads.enable = true, the Nix module automatically:
- Injects a mount named beads into workspace.mounts (jj mode, specified branch, at containerPath)
- Adds the package to extraPackages (if set)
useBeads Options
| Option | Type | Default | Description |
|---|---|---|---|
| enable | bool | false | Enable the beads workspace overlay |
| branch | string | "beads-sync" | Branch to check out in the beads workspace |
| package | package or null | null | Beads package to install in the sandbox |
| containerPath | string | "/workspace/.beads" | Mount point inside the container |
| repo | string or null | null | Repo reference (null → inherit default --repo) |
Examples
Single Repo with Beads Overlay
The most common multi-mount pattern — a primary workspace with a beads branch overlaid:
templates.claude-beads = {
description = "Claude with beads";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
workspace.mounts.main = {
containerPath = "/workspace";
mode = "jj";
};
workspace.useBeads = {
enable = true;
package = pkgs.beads;
};
extraPackages = with pkgs; [ ripgrep fd jq ];
};
forage-ctl up agent-a -t claude-beads --repo ~/projects/myrepo
Inside the sandbox:
/workspace/ ← jj workspace (main working copy)
/workspace/.beads/ ← jj workspace (beads-sync branch)
Monorepo with Multiple Services
Mount different parts of a monorepo at different paths:
templates.monorepo = {
description = "Multi-service development";
agents.claude = { ... };
workspace.mounts = {
frontend = {
containerPath = "/workspace/frontend";
repo = "frontend";
mode = "jj";
};
backend = {
containerPath = "/workspace/backend";
repo = "backend";
mode = "jj";
};
shared = {
containerPath = "/workspace/shared";
hostPath = "~/projects/shared-libs";
readOnly = true;
};
};
};
forage-ctl up dev -t monorepo \
--repo frontend=~/projects/frontend \
--repo backend=~/projects/backend
Read-Only Reference Mount
Mount documentation or reference data alongside the workspace:
templates.with-docs = {
agents.claude = { ... };
workspace.mounts = {
main = {
containerPath = "/workspace";
mode = "jj";
};
docs = {
containerPath = "/workspace/reference";
hostPath = "~/docs/api-reference";
readOnly = true;
};
};
};
Mount Validation
Before creating any VCS workspaces, Forage validates the mount configuration:
- Duplicate container paths: Two mounts claiming the same path is an error
- Repo resolution: A mount referencing a named repo not provided via --repo is an error
- Source existence: A hostPath that doesn’t exist or a repo path that isn’t a valid directory is an error
- Rollback on failure: If creating a VCS workspace fails partway through, all previously-created workspaces for that sandbox are rolled back
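The duplicate-path check can be illustrated with a minimal sketch (hypothetical — the real validation is internal to forage-ctl):

```shell
# Hypothetical sketch: fail when two mounts declare the same container path
paths="/workspace
/workspace/.beads
/workspace"
# uniq -d prints only the paths that appear more than once
dup=$(printf '%s\n' "$paths" | sort | uniq -d)
[ -n "$dup" ] && echo "error: duplicate container path: $dup"
```

With the input above, the check reports `/workspace` as duplicated.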
Cleanup
When you remove a sandbox with forage-ctl down, each mount is cleaned up individually:
- VCS-backed mounts (jj, git-worktree): The workspace/worktree is removed via the appropriate VCS command
- Literal bind mounts (hostPath): No cleanup needed — the host directory is left untouched
- Managed directories: The subdirectory under /var/lib/firefly-forage/workspaces/<sandbox>/ is removed
Skill Injection with Multiple Mounts
When a sandbox has multiple mounts, the injected skill file describes the composite layout:
## Workspace Layout
Your workspace contains multiple mount sources:
- /workspace: jj workspace from ~/projects/myrepo
- /workspace/.beads: jj workspace (branch beads-sync) from ~/projects/myrepo
- /workspace/data: direct mount from ~/datasets (read-only)
This gives the agent full context about what’s mounted where and how each mount is managed.
Metadata
Multi-mount sandboxes store mount information in their metadata:
{
"name": "myproject",
"template": "claude-beads",
"workspaceMounts": [
{
"name": "main",
"containerPath": "/workspace",
"hostPath": "/var/lib/firefly-forage/workspaces/myproject/main",
"sourceRepo": "/home/user/projects/myrepo",
"mode": "jj"
},
{
"name": "beads",
"containerPath": "/workspace/.beads",
"hostPath": "/var/lib/firefly-forage/workspaces/myproject/beads",
"sourceRepo": "/home/user/projects/myrepo",
"mode": "jj",
"branch": "beads-sync"
}
]
}
Legacy single-workspace fields (workspace, workspaceMode, sourceRepo) are still populated for backward compatibility with older tooling.
JJ Workspaces
Forage integrates with Jujutsu (jj) to enable multiple agents working on the same repository simultaneously, each with an isolated working copy.
Overview
When you use --repo with a JJ repository (without the --direct flag), Forage:
- Creates a JJ workspace at /var/lib/forage/workspaces/<name>
- Bind mounts this workspace to /workspace in the container
- Bind mounts the source repo’s .jj directory so the workspace symlink resolves
Each sandbox gets its own working copy of the files, but they all share the repository’s operation log and history.
┌─────────────────────────────────────────────────────────────────────┐
│ Host │
│ │
│ ~/projects/myrepo/ │
│ ├── .jj/ ◄─────────────────────────┐ │
│ ├── src/ │ shared │
│ └── ... │ │
│ │ │
│ /var/lib/forage/workspaces/ │ │
│ ├── agent-a/ ◄── jj workspace ───────────┤ │
│ │ ├── src/ (separate working copy) │ │
│ │ └── ... │ │
│ └── agent-b/ ◄── jj workspace ───────────┘ │
│ ├── src/ (separate working copy) │
│ └── ... │
│ │
└─────────────────────────────────────────────────────────────────────┘
Creating JJ Sandboxes
Prerequisites
Your project must be a JJ repository:
cd ~/projects/myrepo
jj git init --colocate # or jj init
Create Multiple Sandboxes
# First agent
forage-ctl up agent-a --template claude --repo ~/projects/myrepo
# Second agent on the same repo
forage-ctl up agent-b --template claude --repo ~/projects/myrepo
# Third agent with a different template
forage-ctl up agent-c --template multi --repo ~/projects/myrepo
Each sandbox appears as a JJ workspace:
jj workspace list -R ~/projects/myrepo
Output:
default: abc123 (no description set)
agent-a: def456 (empty) (no description set)
agent-b: ghi789 (empty) (no description set)
agent-c: jkl012 (empty) (no description set)
Working with JJ Inside Sandboxes
When you connect to a JJ sandbox, the skill injection includes JJ-specific instructions:
forage-ctl ssh agent-a
Inside the sandbox, use JJ commands:
# Show status
jj status
# Show changes
jj diff
# Create a new change
jj new
# Describe your change
jj describe -m "Add feature X"
# See all changes
jj log
Isolation Benefits
Parallel Work
Each agent works on a separate JJ change:
agent-a: Working on feature-auth
agent-b: Working on feature-api
agent-c: Reviewing and testing
Changes don’t interfere—each workspace has its own working copy.
Easy Coordination
From the host, you can see all work:
# See all changes from all workspaces
jj log -R ~/projects/myrepo
# Squash agent work into main
jj squash --from agent-a -R ~/projects/myrepo
Safe Experimentation
If an agent makes a mess:
# Reset just that sandbox
forage-ctl reset agent-a
# Or abandon the change in JJ
jj abandon agent-a -R ~/projects/myrepo
Cleanup
When you remove a JJ sandbox, Forage:
- Runs jj workspace forget <name>
- Removes the workspace directory
- Cleans up container and metadata
forage-ctl down agent-a
The changes made in that workspace remain in the repository history—only the workspace is removed.
Workspace Modes
Forage automatically detects the workspace mode based on the repository type:
| Mode | Condition | Behavior |
|---|---|---|
| Direct | --direct flag used | Mounts directory directly at /workspace |
| JJ workspace | Path contains .jj/ | Creates isolated JJ workspace |
| Git worktree | Path contains .git/ | Creates git worktree with branch forage-<name> |
Comparison
| Aspect | Direct (--direct) | JJ workspace | Git worktree |
|---|---|---|---|
| Working directory | Direct bind mount | JJ workspace | Git worktree |
| Multiple sandboxes | Need separate directories | Share same repo | Share same repo |
| Isolation | File-level (same files) | Change-level (JJ) | Branch-level (git) |
| VCS | Any (git, jj, etc.) | JJ only | Git only |
| Cleanup | Removes skill files | Forgets JJ workspace | Removes git worktree |
Use --direct when:
- Simple single-agent workflow
- Project doesn’t use JJ or git
- You want direct file access without VCS isolation
Use JJ repos (auto-detected) when:
- Multiple agents on same codebase
- You want change isolation
- Project uses JJ for version control
Use Git repos (auto-detected) when:
- Multiple agents on same git repository
- Each agent works on a separate branch (auto-created as forage-<name>)
Composable JJ Mounts
With workspace mounts, you can create multiple JJ workspaces within a single sandbox. A common pattern is overlaying a beads branch alongside the main workspace:
templates.claude-beads = {
agents.claude = { ... };
workspace.mounts.main = {
containerPath = "/workspace";
mode = "jj";
};
workspace.useBeads = {
enable = true;
package = pkgs.beads;
};
};
forage-ctl up agent-a -t claude-beads --repo ~/projects/myrepo
This creates two JJ workspaces from the same repository:
- /workspace — the main working copy
- /workspace/.beads — checking out the beads-sync branch
Each mount gets its own managed workspace directory under /var/lib/firefly-forage/workspaces/<sandbox>/<mount-name>/.
You can also mount JJ workspaces from different repositories using named repos:
forage-ctl up dev -t multi-repo \
--repo ~/projects/frontend \
--repo backend=~/projects/backend
See Workspace Mounts for the full guide.
Troubleshooting
“Not a jj repository”
The path doesn’t contain a .jj/repo directory:
# Initialize JJ
cd ~/projects/myrepo
jj git init --colocate
“JJ workspace already exists”
A workspace with that name already exists in the repo:
# Check existing workspaces
jj workspace list -R ~/projects/myrepo
# Use a different sandbox name, or remove the existing workspace
jj workspace forget existingname -R ~/projects/myrepo
JJ commands fail inside sandbox
Ensure the source repo’s .jj directory is accessible. The sandbox needs the bind mount to resolve the workspace symlink. This should be automatic—if it’s not working, check:
# Inside sandbox
ls -la /workspace/.jj/
# Should show a symlink to the repo's .jj directory
Skill Injection
Forage automatically injects “skills”—configuration files that teach AI agents about the sandbox environment and available tools.
How It Works
When a sandbox is created, Forage generates .claude/forage-skills.md in the workspace directory. This file is automatically loaded by Claude Code alongside any existing project instructions.
workspace/
├── .claude/
│ ├── forage-skills.md ◄── Injected by Forage
│ └── settings.json ◄── Your project settings (untouched)
├── CLAUDE.md ◄── Your project instructions (untouched)
└── src/
Injected Content
The generated skill file includes:
Environment Information
# Forage Sandbox Skills
You are running inside a Firefly Forage sandbox named `myproject`.
## Environment
- **Workspace**: `/workspace` (your working directory)
- **Network**: Full internet access
- **Session**: tmux session `forage` (persistent across reconnections)
Available Agents
Lists the agents configured in the template:
## Available Agents
claude
JJ Instructions (if applicable)
For sandboxes created with --repo:
## Version Control: JJ (Jujutsu)
This workspace uses `jj` for version control:
- `jj status` - Show working copy status
- `jj diff` - Show changes
- `jj new` - Create new change
- `jj describe -m ""` - Set commit message
- `jj bookmark set` - Update bookmark
This is an isolated jj workspace - changes don't affect other workspaces.
Sandbox Constraints
## Sandbox Constraints
- The root filesystem is ephemeral (tmpfs) - changes outside /workspace are lost on restart
- `/nix/store` is read-only (shared from host)
- `/workspace` is your persistent working directory
- Secrets are mounted read-only at `/run/secrets/`
Nix Usage
## Installing Additional Tools
Any tool not pre-installed can be used via Nix:
- `nix run nixpkgs#ripgrep -- --help` - Run a tool once
- `nix shell nixpkgs#jq nixpkgs#yq` - Enter a shell with multiple tools
- `nix run github:owner/repo` - Build and run a flake
This works because `/nix/store` is shared (read-only) and the Nix daemon
handles all builds on the host.
Tips and Sub-Agent Information
## Tips
- Use `tmux` for long-running processes
- All project work should be done in `/workspace`
- The sandbox can be reset with `forage-ctl reset myproject` from the host
## Sub-Agent Spawning
When spawning sub-agents (e.g., with Claude Code's Task tool):
- Sub-agents share this same sandbox environment
- Use tmux windows/panes for parallel agent work
- Each sub-agent has access to the same workspace and tools
Skill Priority
Claude Code loads instructions in this order:
- Project CLAUDE.md - Your existing project instructions (highest priority)
- Forage skills - Injected .claude/forage-skills.md
- User settings - From .claude/settings.json
The Forage skills supplement rather than override your project documentation.
Cleanup
When a sandbox is removed with forage-ctl down:
- Direct mode (--workspace): The skill file is removed from the workspace
- JJ mode (--repo): The entire workspace directory is removed, including skills
- Git worktree mode (--git-worktree): The worktree is removed, including skills
Composite Workspace Layout
For sandboxes with composable workspace mounts, the skill file includes a description of the full mount layout:
## Workspace Layout
Your workspace contains multiple mount sources:
- /workspace: jj workspace from ~/projects/myrepo
- /workspace/.beads: jj workspace (branch beads-sync) from ~/projects/myrepo
- /workspace/data: direct mount from ~/datasets (read-only)
This gives the agent context about what’s mounted where and how each path is managed.
Dynamic Skill Generation
Skills are dynamically generated based on project analysis. The skills analyzer (internal/skills/analyzer.go) detects:
- Project type: Go, Rust, Python, Node/TypeScript, Nix, and more
- Build system: detected build commands (e.g., go build, cargo build, npm run build)
- Test commands: detected test runners (e.g., go test ./..., cargo test, pytest)
- Frameworks: detected web frameworks and libraries
- VCS: Git or JJ repository detection
Based on detection results, the injected skill content includes project-specific guidance for the agent (build/test commands, VCS workflow tips, etc.).
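The detection idea can be sketched in a few lines (hypothetical — the real analyzer lives in internal/skills/analyzer.go, and the marker files here are illustrative):

```shell
# Hypothetical sketch of project-type detection by marker file
detect() {
  dir=$1
  [ -f "$dir/go.mod" ]       && { echo go; return; }
  [ -f "$dir/Cargo.toml" ]   && { echo rust; return; }
  [ -f "$dir/package.json" ] && { echo node; return; }
  [ -f "$dir/flake.nix" ]    && { echo nix; return; }
  echo unknown
}
d=$(mktemp -d)
touch "$d/Cargo.toml"
detect "$d"   # → rust
```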
Architecture
Forage uses NixOS containers (systemd-nspawn) to create isolated environments for AI agents.
System Overview
┌─────────────────────────────────────────────────────────────────┐
│ Host Machine │
│ │
│ nix-daemon ◄──────────────────────────────┐ │
│ │ │ │
│ ▼ │ │
│ /nix/store ◄──────────────────────────────┼───────────┐ │
│ (writable by daemon) │ │ │
│ │ │ │
│ ┌─────────────────────────────┐ ┌────────┴───────────┴──┐ │
│ │ sandbox-project-a │ │ sandbox-project-b │ │
│ │ │ │ │ │
│ │ /nix/store (ro bind) │ │ /nix/store (ro bind) │ │
│ │ /nix/var/nix/daemon-socket │ │ /nix/var/nix/daemon.. │ │
│ │ /workspace ──► ~/proj-a │ │ /workspace ──► ~/pr.. │ │
│ │ /run/secrets (ro bind) │ │ /run/secrets (ro ..) │ │
│ │ │ │ │ │
│ │ agent: claude │ │ agents: claude, aider │ │
│ │ sshd :22 ──► host:2200 │ │ sshd :22 ──► host:22. │ │
│ └─────────────────────────────┘ └───────────────────────┘ │
│ │
│ forage-ctl (CLI) │
│ │
└─────────────────────────────────────────────────────────────────┘
Components
Host Module
The NixOS module (services.firefly-forage) configures:
- Template definitions
- Secret paths
- Port ranges
- User identity mapping
- System directories via tmpfiles
forage-ctl
The CLI tool that:
- Creates/destroys containers using extra-container
- Manages SSH port allocation
- Handles JJ workspace lifecycle
- Injects skill files
extra-container
extra-container manages the systemd-nspawn containers. It allows creating NixOS containers without modifying the host’s /etc/nixos configuration.
Containers
Each sandbox is a systemd-nspawn container with:
- Ephemeral root: tmpfs filesystem, lost on restart
- Private network: Virtual ethernet with NAT to host
- Bind mounts: Nix store, workspace, secrets
- SSH server: For external access
- Tmux session: For session persistence
Data Flow
Container Creation
forage-ctl up myproject -t claude -w ~/project
│
├─► Find available port (2200-2299)
├─► Find available network slot (192.168.100.x)
├─► Copy secrets to /run/forage-secrets/myproject/
├─► Inject skills to ~/project/.claude/forage-skills.md
├─► Generate container Nix configuration
├─► Call extra-container create --start
└─► Wait for SSH to become available
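The port-allocation step above can be sketched as a first-fit scan (hypothetical — the example `used` list stands in for ports recorded in existing sandbox metadata):

```shell
# Hypothetical sketch: take the first free port in the 2200-2299 range
used="2200 2201 2203"     # ports claimed by existing sandboxes (example)
port=""
for p in $(seq 2200 2299); do
  case " $used " in
    *" $p "*) continue ;;      # already taken
    *) port=$p; break ;;
  esac
done
echo "allocated port: $port"   # → allocated port: 2202
```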
Container Configuration
The generated Nix configuration includes:
containers."forage-myproject" = {
ephemeral = true;
privateNetwork = true;
hostAddress = "192.168.100.1";
localAddress = "192.168.100.11";
forwardPorts = [{
containerPort = 22;
hostPort = 2200;
protocol = "tcp";
}];
bindMounts = {
"/nix/store" = { hostPath = "/nix/store"; isReadOnly = true; };
"/workspace" = { hostPath = "/home/user/project"; isReadOnly = false; };
"/run/secrets" = { hostPath = "/run/forage-secrets/myproject"; isReadOnly = true; };
};
config = { ... }: {
# Container NixOS configuration
services.openssh.enable = true;
users.users.agent = { ... };
environment.systemPackages = [ ... ];
};
};
Network Architecture
┌─────────────────────────────────────────────────┐
│ Host │
│ │
│ ┌─────────────┐ │
│ │ NAT Gateway │ 192.168.100.1 │
│ └──────┬──────┘ │
│ │ │
│ ┌────┴────┬────────────┐ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ .11 .12 .13 │
│ sandbox-a sandbox-b sandbox-c │
│ :2200 :2201 :2202 │
│ │
└─────────────────────────────────────────────────┘
Each sandbox:
- Gets a unique IP in the 192.168.100.0/24 range
- Has SSH port forwarded from host
- Uses host’s DNS resolution
State Management
Metadata Files
Sandbox metadata is stored in JSON files:
/var/lib/firefly-forage/sandboxes/myproject.json
{
"name": "myproject",
"template": "claude",
"port": 2200,
"workspace": "/home/user/project",
"networkSlot": 1,
"createdAt": "2024-01-15T10:30:00+00:00",
"workspaceMode": "direct"
}
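For quick inspection, a field can be pulled out of such a metadata file; a minimal example (real tooling should use a proper JSON parser such as jq):

```shell
# Read the port field from sandbox metadata (illustrative)
meta='{"name":"myproject","template":"claude","port":2200}'
port=$(printf '%s' "$meta" | sed -n 's/.*"port":[[:space:]]*\([0-9]*\).*/\1/p')
echo "$port"   # → 2200
```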
For JJ workspaces, additional fields:
{
"workspaceMode": "jj",
"sourceRepo": "/home/user/repos/myrepo",
"jjWorkspaceName": "myproject"
}
For sandboxes with composable workspace mounts, the workspaceMounts field replaces the single-workspace fields:
{
"workspaceMounts": [
{
"name": "main",
"containerPath": "/workspace",
"hostPath": "/var/lib/firefly-forage/workspaces/myproject/main",
"sourceRepo": "/home/user/repos/myrepo",
"mode": "jj"
},
{
"name": "beads",
"containerPath": "/workspace/.beads",
"hostPath": "/var/lib/firefly-forage/workspaces/myproject/beads",
"sourceRepo": "/home/user/repos/myrepo",
"mode": "jj",
"branch": "beads-sync"
}
]
}
Directories
| Path | Purpose |
|---|---|
| /etc/firefly-forage/ | Configuration and templates |
| /var/lib/firefly-forage/sandboxes/ | Sandbox metadata |
| /var/lib/firefly-forage/workspaces/ | JJ workspace directories |
| /run/forage-secrets/ | Runtime secrets (tmpfs) |
Security Boundaries
┌─────────────────────────────────────────────────────────────────┐
│ Trusted Zone (Host) │
│ │
│ - NixOS configuration │
│ - Nix daemon │
│ - Secret files │
│ - forage-ctl │
│ │
├─────────────────────────────────────────────────────────────────┤
│ Isolation Boundary (systemd-nspawn) │
├─────────────────────────────────────────────────────────────────┤
│ Untrusted Zone (Container) │
│ │
│ - AI agent code │
│ - User workspace (read-write) │
│ - Agent-installed packages │
│ │
│ Limited access to: │
│ - /nix/store (read-only) │
│ - /run/secrets (read-only) │
│ - Network (configurable) │
│ │
└─────────────────────────────────────────────────────────────────┘
Templates
Templates are declarative specifications for sandbox environments. They define which agents are available, what packages are installed, and how the sandbox can access the network.
Template Structure
services.firefly-forage.templates.<name> = {
description = "Human-readable description";
agents = {
<agent-name> = {
package = <derivation>;
secretName = "<secret-key>";
authEnvVar = "<ENV_VAR_NAME>";
};
};
extraPackages = [ ... ];
network = "full" | "restricted" | "none";
allowedHosts = [ ... ]; # for restricted mode
initCommands = [ ... ]; # commands to run after creation
workspace.mounts = { ... }; # composable workspace mounts (optional)
workspace.useBeads = { ... }; # beads overlay shorthand (optional)
};
Components
Description
A human-readable description shown by forage-ctl templates:
description = "Claude Code with development tools";
Agents
Agents are AI coding tools that will be available in the sandbox. Each agent needs:
| Field | Description |
|---|---|
| package | Nix derivation for the agent |
| secretName | Key in services.firefly-forage.secrets |
| authEnvVar | Environment variable for authentication |
| hostConfigDir | Host directory to mount for persistent config (optional) |
| containerConfigDir | Override container mount point (optional) |
| hostConfigDirReadOnly | Mount config dir as read-only (default: false) |
| permissions | Agent permission rules (optional, see below) |
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
Forage creates a wrapper script that:
- Reads the secret from
/run/secrets/<secretName> - Sets the environment variable
- Executes the real agent binary
Permissions
The permissions option controls what actions agents can take without prompting. When set, Forage generates a settings file that is bind-mounted read-only into the container.
| Field | Description |
|---|---|
| skipAll | Bypass all permission checks (grants all tool families) |
| allow | List of permission rules to auto-approve |
| deny | List of permission rules to always block |
skipAll cannot be combined with allow or deny.
Full autonomy (no permission prompts):
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
permissions.skipAll = true;
};
Granular allowlist:
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
permissions = {
allow = [ "Read" "Glob" "Grep" "Edit(src/**)" "Bash(npm run *)" ];
deny = [ "Bash(rm -rf *)" ];
};
};
For Claude, the settings file is written to /etc/claude-code/managed-settings.json (managed scope — highest precedence, cannot be overridden by user or project settings). permissions and hostConfigDir can coexist — they target different paths.
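For the granular allowlist above, the generated settings file would take roughly this shape (illustrative — field names follow Claude Code's permissions settings schema):

```shell
# Illustrative shape of the generated managed settings file
settings='{
  "permissions": {
    "allow": ["Read", "Glob", "Grep", "Edit(src/**)", "Bash(npm run *)"],
    "deny": ["Bash(rm -rf *)"]
  }
}'
printf '%s\n' "$settings"
```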
Extra Packages
Additional packages available in the sandbox:
extraPackages = with pkgs; [
ripgrep
fd
jq
yq
tree
htop
git
];
These are added to environment.systemPackages in the container.
Init Commands
Shell commands to run inside the container after creation. These execute after SSH is ready, as the container user in the workspace directory. Failures are logged as warnings but do not block sandbox creation.
initCommands = [
"npm install"
"pip install pytest"
];
Commands execute in order via sh -c. Each command runs independently — a failing command does not prevent subsequent commands from running.
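These semantics can be sketched as a small runner (hypothetical — not the actual implementation, but the same contract: in-order execution via sh -c, failures logged without aborting):

```shell
# Illustrative runner: each command executes via sh -c; a failure produces
# a warning on stderr and subsequent commands still run
run_init() {
  for cmd in "$@"; do
    if ! sh -c "$cmd"; then
      echo "warning: init command failed: $cmd" >&2
    fi
  done
}
run_init "echo step-1" "false" "echo step-2"
# stdout: step-1 and step-2; stderr: a warning for the failing "false"
```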
Per-Project Init Script
In addition to template-level initCommands, you can place a .forage/init script in your repository. If present, it runs automatically after template init commands complete.
#!/bin/sh
# .forage/init — runs inside the container after creation
jj git fetch
jj new main
Execution order:
- Template initCommands (in declaration order)
- .forage/init script (if present in workspace)
Example: Beads Setup
templates.beads = {
description = "Beads development sandbox";
agents.claude = {
package = pkgs.claude-code;
hostConfigDir = "~/.claude";
permissions.skipAll = true;
};
extraPackages = with pkgs; [ git nodejs ];
initCommands = [
"npm install -g beads"
];
};
Combined with a .forage/init in the repo:
#!/bin/sh
git fetch origin beads-sync
git checkout -b beads-sync origin/beads-sync 2>/dev/null || true
Network Mode
Controls network access:
| Mode | Description |
|---|---|
| full | Unrestricted internet access (default) |
| restricted | Only allowed hosts can be accessed |
| none | No network access |
network = "full";
For restricted mode:
network = "restricted";
allowedHosts = [
"api.anthropic.com"
"api.openai.com"
];
You can also change network modes at runtime using forage-ctl network.
Workspace Mounts
Templates can declare composable workspace mounts — multiple mount points assembled from different sources:
workspace.mounts = {
main = {
containerPath = "/workspace";
mode = "jj";
# repo = null → uses default --repo
};
data = {
containerPath = "/workspace/data";
repo = "data"; # references --repo data=<path>
readOnly = true;
};
};
When workspace.mounts is set, the --repo flag becomes optional (if all mounts specify their sources). See the Workspace Mounts usage guide for full details.
Beads Overlay (useBeads)
A convenience option for overlaying a beads workspace:
workspace.useBeads = {
enable = true;
branch = "beads-sync"; # default
containerPath = "/workspace/.beads"; # default
package = pkgs.beads; # added to extraPackages
};
This automatically injects a jj mount and the beads package. See Workspace Mounts: useBeads.
Example Templates
Minimal Claude Template
templates.claude = {
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
};
Full-Featured Development Template
templates.claude-dev = {
description = "Claude Code with full development tooling";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
extraPackages = with pkgs; [
# Search and navigation
ripgrep
fd
fzf
tree
# Data processing
jq
yq
miller
# Development
git
gh
gnumake
nodejs
# Debugging
htop
strace
lsof
];
network = "full";
};
Multi-Agent Template
templates.multi = {
description = "Multiple AI assistants";
agents = {
claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
aider = {
package = pkgs.aider-chat;
secretName = "openai";
authEnvVar = "OPENAI_API_KEY";
};
};
extraPackages = with pkgs; [ ripgrep fd git ];
};
Autonomous Template
templates.claude-auto = {
description = "Claude Code with full autonomy";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
permissions.skipAll = true;
};
network = "full";
};
Multi-Mount Template with Beads
templates.claude-beads = {
description = "Claude with beads overlay";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
workspace.mounts.main = {
containerPath = "/workspace";
mode = "jj";
};
workspace.useBeads = {
enable = true;
package = pkgs.beads;
};
extraPackages = with pkgs; [ ripgrep fd jq ];
};
Air-Gapped Template
templates.isolated = {
description = "No network access for sensitive work";
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
network = "none";
};
Template Selection
List available templates:
forage-ctl templates
Output:
TEMPLATE AGENTS NETWORK DESCRIPTION
claude claude full Claude Code sandbox
claude-dev claude full Claude Code with full development tooling
multi claude,aider full Multiple AI assistants
isolated claude none No network access for sensitive work
Use a template when creating a sandbox:
forage-ctl up myproject --template claude-dev --workspace ~/projects/myproject
How Templates Are Processed
- At NixOS build time: Templates are converted to JSON files in /etc/firefly-forage/templates/
- At sandbox creation: forage-ctl reads the template JSON and generates a container configuration
- Agent wrappers: For each agent, a wrapper script is generated that injects authentication
The template JSON format:
{
"name": "claude",
"description": "Claude Code sandbox",
"network": "full",
"allowedHosts": [],
"agents": {
"claude": {
"packagePath": "/nix/store/...-claude-code",
"secretName": "anthropic",
"authEnvVar": "ANTHROPIC_API_KEY",
"permissions": { "skipAll": true }
}
},
"extraPackages": [
"/nix/store/...-ripgrep",
"/nix/store/...-fd"
]
}
When workspace.mounts is configured, the JSON includes a workspaceMounts field:
{
"workspaceMounts": {
"main": {
"containerPath": "/workspace",
"mode": "jj"
},
"beads": {
"containerPath": "/workspace/.beads",
"mode": "jj",
"branch": "beads-sync"
}
}
}
The permissions field is null when not configured. When set, it can contain:
- {"skipAll": true} — grants all tool families
- {"allow": [...], "deny": [...]} — granular rules
Agent Wrappers
Agent wrappers are generated scripts that inject authentication and execute the actual agent binary. They provide a layer of auth obfuscation.
How Wrappers Work
┌─────────────────────────────────────────────────────────┐
│ Container │
│ │
│ $ claude chat "hello" │
│ │ │
│ ▼ │
│ /usr/bin/claude (wrapper) │
│ │ │
│ ├─► read /run/secrets/anthropic-api-key │
│ ├─► export ANTHROPIC_API_KEY="sk-..." │
│ └─► exec /nix/store/.../bin/claude "$@" │
│ │
└─────────────────────────────────────────────────────────┘
The wrapper:
- Reads the API key from a file (not environment)
- Sets the environment variable only for the child process
- Executes the real agent binary with all arguments
Generated Wrapper Code
For each agent defined in a template:
agents.claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
Forage generates:
#!/usr/bin/env bash
if [ -f "/run/secrets/anthropic" ]; then
export ANTHROPIC_API_KEY="$(cat /run/secrets/anthropic)"
fi
exec /nix/store/abc123-claude-code/bin/claude "$@"
This wrapper is added to the container’s environment.systemPackages.
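The pattern can be demonstrated end to end with a runnable sketch (names and paths are illustrative — a temp directory stands in for /run/secrets and the agent binary):

```shell
# Demo of the wrapper pattern: the key is read from a file and exported
# only for the child process, never into the parent environment
tmp=$(mktemp -d)
printf 'sk-demo-key' > "$tmp/anthropic"
cat > "$tmp/agent" <<'EOF'
#!/bin/sh
echo "key seen by agent: $ANTHROPIC_API_KEY"
EOF
chmod +x "$tmp/agent"
# The wrapper: auth is injected at exec time ($0 is the temp dir path)
out=$(env -i PATH="$PATH" sh -c '
  export ANTHROPIC_API_KEY="$(cat "$0/anthropic")"
  exec "$0/agent"
' "$tmp")
echo "$out"                                          # the child saw the key
echo "key in parent shell: ${ANTHROPIC_API_KEY:-<unset>}"
```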
Security Properties
What Wrappers Protect Against
- Environment snooping: The API key isn’t in the global environment
- Process listing: ps aux won’t show the key
- Casual discovery: Agent can’t just echo $ANTHROPIC_API_KEY
What Wrappers Don’t Protect Against
- Determined agents: An agent could read /run/secrets/ directly
- Network interception: Keys are sent to APIs
Wrappers provide obfuscation, not security. They make it harder for an agent to accidentally discover credentials, but a malicious agent could still find them.
Secret Mounting
Secrets flow from host to container:
Host:
/run/secrets/anthropic-api-key (from sops/agenix)
│
▼
/run/forage-secrets/myproject/anthropic (copied at sandbox creation)
│
▼
Container:
/run/secrets/anthropic (bind mounted, read-only)
The secrets directory is:
- Created fresh for each sandbox
- Bind-mounted read-only into the container
- Cleaned up when the sandbox is destroyed
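The staging step can be sketched with temp paths standing in for the real directories (illustrative — the actual copy is performed by forage-ctl):

```shell
# Illustrative: stage a secret into a per-sandbox runtime directory
# with restrictive permissions
src=$(mktemp)
printf 'sk-demo' > "$src"
dst="$(mktemp -d)/myproject"          # stands in for /run/forage-secrets/<name>
mkdir -p "$dst"
install -m 0400 "$src" "$dst/anthropic"   # fresh copy, owner read-only
ls -l "$dst/anthropic" | cut -c1-10       # → -r--------
```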
Multiple Agents
Templates can define multiple agents:
agents = {
claude = {
package = pkgs.claude-code;
secretName = "anthropic";
authEnvVar = "ANTHROPIC_API_KEY";
};
aider = {
package = pkgs.aider-chat;
secretName = "openai";
authEnvVar = "OPENAI_API_KEY";
};
};
Each gets its own wrapper, and both are available in the container:
# Inside container
claude --help
aider --help
Wrapper vs Direct Execution
| Aspect | Wrapper | Direct |
|---|---|---|
| Auth source | File read at runtime | Environment variable |
| Auth visibility | Hidden from environment | Visible in env |
| Setup required | Automatic | Manual export |
| Works outside sandbox | No | Yes (with manual setup) |
Future: API Bridge
A more secure approach (planned for Phase 5) would remove secrets from containers entirely:
┌─────────────────┐ ┌──────────────────┐ ┌─────────────────┐
│ Sandbox │ │ API Bridge │ │ External APIs │
│ │ │ (on host) │ │ │
│ claude-wrapper ─┼────►│ - Auth injection │────►│ api.anthropic. │
│ (no secrets) │ │ - Rate limiting │ │ │
│ │ │ - Audit logs │ │ │
└─────────────────┘ └──────────────────┘ └─────────────────┘
With an API bridge:
- Secrets never enter the container
- All API calls are logged
- Rate limiting is enforced
- Requests can be filtered/modified
Nix Store Sharing
Forage sandboxes share the host’s nix store, avoiding duplication while maintaining isolation.
How It Works
The nix store is bind-mounted read-only into each container:
bindMounts = {
"/nix/store" = {
hostPath = "/nix/store";
isReadOnly = true;
};
};
When an agent needs to install packages, they go through the host’s nix daemon:
┌─────────────────────────────────────────────────────────────────┐
│ Host │
│ │
│ nix-daemon ◄──────────────────────────────┐ │
│ │ │ │
│ ▼ │ │
│ /nix/store ◄──────────────────────────────┼───────────┐ │
│ (writable by daemon) │ │ │
│ │ │ │
│ ┌─────────────────────────────┐ ┌────────┴───────────┴──┐ │
│ │ sandbox-a │ │ sandbox-b │ │
│ │ │ │ │ │
│ │ /nix/store (read-only) │ │ /nix/store (read-only)│ │
│ │ │ │ │ │
│ │ $ nix run nixpkgs#ripgrep │ │ $ nix shell nixpkgs#jq│ │
│ │ │ │ │ │ │ │
│ │ └─────────────────────┼──┼───────┘ │ │
│ │ │ │ │ │
│ └─────────────────────────────┘ └───────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
Why This Works
- Read-only detection: When /nix/store is read-only, the nix client detects it can’t write directly
Daemon mode: The client automatically switches to daemon mode and communicates via socket
-
Host builds: The nix daemon on the host performs the actual builds and writes to the store
-
Instant visibility: Since the container bind-mounts the same store, new paths are immediately visible
-
Content-addressed: Nix’s content-addressed store means there are no conflicts—the same input always produces the same output path
Benefits
No Duplication
Without store sharing, each container would need its own copy of:
- Base system packages
- Development tools
- Agent binaries
With sharing, the store is used efficiently:
Without sharing:
Container A: /nix/store/...-ripgrep-14.0.0 (15MB)
Container B: /nix/store/...-ripgrep-14.0.0 (15MB)
Container C: /nix/store/...-ripgrep-14.0.0 (15MB)
Total: 45MB
With sharing:
Host: /nix/store/...-ripgrep-14.0.0 (15MB)
Container A, B, C: bind mount (0MB additional)
Total: 15MB
Instant Availability
Packages already in the host store are immediately available:
# Inside container - if ripgrep is already on host
$ nix run nixpkgs#ripgrep -- --version
ripgrep 14.0.0
# (instant, no download/build)
Shared Build Cache
If one container builds a package, others can use it:
# Container A builds a package
$ nix build nixpkgs#somePackage
# Container B can use it immediately (same store path)
$ nix run nixpkgs#somePackage
# (no rebuild needed)
Using Nix in Sandboxes
One-Off Commands
# Run a tool without installing
nix run nixpkgs#ripgrep -- --help
nix run nixpkgs#jq -- '.foo' data.json
Interactive Shell
# Enter a shell with multiple tools
nix shell nixpkgs#nodejs nixpkgs#yarn nixpkgs#typescript
# Now node, yarn, tsc are available
node --version
Building Projects
# Build a flake-based project
cd /workspace
nix build
# Run the result
./result/bin/myapp
Development Shells
# Enter a project's dev shell
cd /workspace
nix develop
# Or with direnv (if project has .envrc)
direnv allow
Limitations
No Direct Store Writes
Containers cannot write directly to /nix/store:
# This won't work
$ nix-store --add myfile
error: cannot open `/nix/store/.../myfile' for writing: Read-only file system
All writes must go through the daemon.
Daemon Socket Required
The nix daemon socket must be accessible. This is handled by systemd-nspawn’s socket activation.
Store Garbage Collection
Garbage collection happens on the host. If the host runs nix-collect-garbage, it may remove paths that containers are using.
Best practice: Don’t run aggressive garbage collection while sandboxes are active.
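On the host, a conservative approach is scheduled garbage collection with an age threshold rather than ad-hoc full collections. These are standard NixOS nix.gc options; the schedule below is only an example, not a Forage requirement:

```nix
# Host configuration: weekly GC that only deletes paths unused for 30 days,
# reducing the chance of removing store paths an active sandbox still needs.
nix.gc = {
  automatic = true;
  dates = "weekly";
  options = "--delete-older-than 30d";
};
```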
Registry Pinning
Forage automatically pins the nix registry in each sandbox to match the host’s nixpkgs version. This ensures consistency across all nix run nixpkgs#foo and nix shell commands.
How It Works
The host module extracts the nixpkgs revision from its flake inputs and passes it to each container. The container’s /etc/nix/registry.json is configured to resolve nixpkgs to this specific revision:
{
"version": 2,
"flakes": [{
"from": { "type": "indirect", "id": "nixpkgs" },
"to": {
"type": "github",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "abc123..."
}
}]
}
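Scripts can read the pinned revision straight out of the registry file. A minimal sketch (shown here against a sample file rather than the real /etc/nix/registry.json):

```shell
# Write a sample registry file mirroring the structure shown above
cat > registry.json <<'EOF'
{
  "version": 2,
  "flakes": [{
    "from": { "type": "indirect", "id": "nixpkgs" },
    "to": { "type": "github", "owner": "NixOS", "repo": "nixpkgs", "rev": "abc123" }
  }]
}
EOF
# Extract the pinned revision (jq would be cleaner; sed keeps it dependency-free)
rev=$(sed -n 's/.*"rev": "\([^"]*\)".*/\1/p' registry.json)
echo "pinned nixpkgs revision: $rev"
```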
Benefits
- Consistency: All sandboxes use the same nixpkgs version
- No store bloat: Packages aren’t duplicated across nixpkgs versions
- Reproducibility: Tool installations are reproducible across sandboxes
- Cache efficiency: If the host already has a package, it’s instantly available
Verification
Inside a sandbox, you can verify the pinning:
# Show the registry
nix registry list
# The nixpkgs entry should show the pinned revision
# nixpkgs flake:nixpkgs github:NixOS/nixpkgs/<rev>
Security
Forage provides isolation for AI agents, but it’s important to understand the threat model and limitations.
Threat Model
Trusted
- Host system administrator
- Nix store contents (from nixpkgs/trusted sources)
- Forage module configuration
Untrusted
- AI agent behavior
- Code being worked on in workspace
- Packages installed by agents at runtime
Security Layers
Container Isolation
Sandboxes use systemd-nspawn containers:
- Separate PID namespace
- Separate network namespace
- Separate mount namespace
- Resource limits (cgroups)
- Ephemeral root filesystem
Filesystem Isolation
| Path | Access | Notes |
|---|---|---|
| / | Read-write | Ephemeral (tmpfs), lost on restart |
| /nix/store | Read-only | Shared from host |
| /workspace | Read-write | Bind-mounted from host |
| /run/secrets | Read-only | API keys and credentials |
Agents can only persistently modify files in /workspace.
Network Isolation
| Mode | Description |
|---|---|
| full | Unrestricted internet access |
| restricted | Allowlist of specific hosts |
| none | No network access |
Even with network = "none", containers can still communicate with the nix daemon socket: it is a Unix domain socket, not a network connection.
Auth Obfuscation
API keys are:
- Stored in files, not environment variables
- Read at runtime by wrapper scripts
- Set only for the agent process
This makes casual credential discovery harder, but doesn’t prevent a determined agent from reading /run/secrets/.
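The wrapper pattern can be sketched as follows. All paths and names here are stand-ins for illustration (the wrappers Forage actually generates may differ); the point is that the key is read from a file at exec time and lands only in the agent process's environment, never in the login shell's:

```shell
# Stand-in secret and "agent" binary for the demo
mkdir -p demo/secrets demo/bin
printf 'sk-test-123' > demo/secrets/anthropic
cat > demo/bin/agent <<'EOF'
#!/bin/sh
# The fake agent only reports whether it can see the key
echo "key set: ${ANTHROPIC_API_KEY:+yes}"
EOF
chmod +x demo/bin/agent

# The wrapper: read the key at exec time, set it only for the exec'd process
cat > demo/bin/agent-wrapped <<'EOF'
#!/bin/sh
ANTHROPIC_API_KEY=$(cat demo/secrets/anthropic) exec demo/bin/agent "$@"
EOF
chmod +x demo/bin/agent-wrapped

demo/bin/agent-wrapped                               # prints: key set: yes
echo "shell sees key: ${ANTHROPIC_API_KEY:-no}"      # prints: shell sees key: no
```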
Mitigations
| Threat | Mitigation |
|---|---|
| Agent exfiltrates API keys | API proxy (keeps secrets on host); obfuscation via wrappers (UX convenience, not a security boundary) |
| Agent accesses host filesystem | Container isolation, explicit bind mounts only |
| Agent makes unwanted network calls | Network isolation modes |
| Agent runs dangerous commands | Permission rules (allow/deny) via managed settings |
| Agent corrupts system state | Ephemeral root, easy reset |
| Agent fills disk | Ephemeral tmpfs has size limits |
| Agent escapes container | systemd-nspawn security features |
Limitations
Auth Obfuscation Is Not Foolproof
A determined agent could:
- Read files in /run/secrets/ directly
- Inspect its own process memory
- Intercept API calls
Wrappers provide obfuscation, not security. They stop casual discovery, not intentional exfiltration.
Container Escape Vulnerabilities
systemd-nspawn is not a security boundary like a VM. Kernel vulnerabilities could allow container escape. For high-security scenarios, consider:
- Running sandboxes in VMs
- Additional seccomp filtering
- SELinux/AppArmor policies
DNS Resolution Timing
In restricted mode, allowed host IPs are resolved at sandbox creation time and baked into nftables rules. If a host’s IPs change (e.g., CDN rotation), the rules become stale and connectivity may break until the sandbox is reconfigured with forage-ctl network.
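As a rough sketch of why the rules go stale (the ruleset Forage actually generates is internal; the table name, address, and comments below are made up), restricted mode bakes the resolved IPs into static rules along these lines:

```
table inet forage-restricted {
  chain output {
    type filter hook output priority 0; policy drop;
    ip daddr 203.0.113.10 accept comment "api.example.com, resolved at sandbox creation"
    udp dport 53 accept comment "DNS"
  }
}
```

If api.example.com later resolves to a different address, the accept rule no longer matches and the connection is dropped until the rules are regenerated.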
Network Exfiltration
Even with network = "none", agents could potentially:
- Encode data in DNS queries (if DNS is available)
- Use timing side channels
- Embed data in legitimate API calls
Workspace Access
Agents have full read-write access to /workspace. They could:
- Modify or delete project files
- Read sensitive files in the project
- Create files that execute on the host
Best Practices
Secret Management
# Use proper secret management (sops-nix, agenix)
secrets = {
anthropic = config.sops.secrets.anthropic-api-key.path;
};
# Don't hardcode secrets
# BAD: secrets = { anthropic = "/home/user/.secrets/key"; };
Template Design
# Minimize installed packages
extraPackages = with pkgs; [ ripgrep fd ];
# Don't include: curl, wget, netcat, etc. unless needed
# Use network isolation when possible
network = "none"; # For tasks that don't need network
# Use granular permissions instead of skipAll when possible
agents.claude.permissions = {
allow = [ "Read" "Glob" "Grep" "Edit(src/**)" ];
deny = [ "Bash(rm -rf *)" ];
};
Agent Permissions
Use the most restrictive permissions that still allow the agent to do its job:
- Prefer granular allow/deny over skipAll
- Use deny rules to block dangerous patterns even when allowing broad tool access
- skipAll is convenient for trusted development workflows but grants full tool access
Workspace Hygiene
- Don’t put sensitive files (SSH keys, credentials) in project directories
- Use .gitignore/.jjignore to exclude sensitive patterns
- Review agent-created files before committing
Regular Resets
# Reset sandbox periodically to clear accumulated state
forage-ctl reset myproject
Monitor Agent Activity
- Review files modified by agents
- Check git/jj history for unexpected changes
- Monitor network traffic if concerned
Additional Security Features
API Proxy
The forage-ctl proxy command starts an HTTP proxy that:
- Keeps secrets on the host, never in containers
- Injects API keys into requests at runtime
- Can log all API calls for audit
- Enables rate limiting and request filtering
Future Security Enhancements
Syscall Filtering
Additional seccomp profiles to restrict:
- Dangerous syscalls
- Network operations
- File operations outside allowed paths
Read-Only Workspace Mode
For review tasks where the agent shouldn’t modify files:
templates.review = {
readOnlyWorkspace = true;
# ...
};
Unlike the other items in this section, this is already implemented: it enforces a filesystem-level read-only mount of /workspace.
Reporting Security Issues
If you discover a security vulnerability in Forage, please report it responsibly:
- Do not open a public issue
- Email security concerns to the maintainers
- Allow time for a fix before public disclosure
See the project repository for contact information.
Troubleshooting
Common issues and their solutions.
Installation Issues
“Host configuration not found”
✗ Host configuration not found: /etc/firefly-forage/config.json
ℹ Is firefly-forage enabled in your NixOS configuration?
Cause: The Forage module isn’t enabled or the system hasn’t been rebuilt.
Solution:
services.firefly-forage.enable = true;
Then rebuild:
sudo nixos-rebuild switch
“Templates directory not found”
✗ Templates directory not found: /etc/firefly-forage/templates
Cause: No templates are defined in the configuration.
Solution: Add at least one template:
services.firefly-forage.templates.claude = {
agents.claude = { ... };
};
Sandbox Creation Issues
“Template not found”
✗ Template not found: mytemplate
Cause: The specified template doesn’t exist.
Solution: List available templates:
forage-ctl templates
“Workspace directory does not exist”
✗ Workspace directory does not exist: /path/to/project
Cause: The path doesn’t exist or is misspelled.
Solution: Create the directory or check the path:
mkdir -p ~/projects/myproject
forage-ctl up myproject -t claude -w ~/projects/myproject
“Not a jj repository”
✗ Not a jj repository: /path/to/repo
ℹ Initialize with: jj git init
Cause: Using --repo with a directory that isn’t a JJ repository.
Solution: Initialize JJ:
cd /path/to/repo
jj git init --colocate
“JJ workspace already exists”
✗ JJ workspace 'myname' already exists in /path/to/repo
Cause: A JJ workspace with that name already exists.
Solution: Use a different sandbox name, or remove the existing workspace:
jj workspace forget myname -R /path/to/repo
“No available ports”
✗ No available ports in range 2200-2299
Cause: All ports in the configured range are in use.
Solution:
- Remove unused sandboxes: forage-ctl down <name>
- Increase the port range in configuration:
services.firefly-forage.portRange = {
from = 2200;
to = 2399; # Expanded range
};
“Failed to create container”
✗ Failed to create container
Cause: extra-container or systemd-nspawn failed.
Solution: Check system logs:
journalctl -u container@forage-myproject -n 50
Common causes:
- Insufficient permissions (run as root)
- Resource constraints
- Conflicting container names
Connection Issues
SSH Connection Refused
ssh: connect to host localhost port 2200: Connection refused
Cause: Container isn’t running or SSH isn’t ready.
Solution:
- Check sandbox status:
forage-ctl ps
- If stopped, the container may have failed. Check logs:
journalctl -u container@forage-myproject
- Try resetting:
forage-ctl reset myproject
SSH Timeout
ℹ Waiting for SSH to become available on port 2200...
✗ Timeout waiting for SSH (60s)
Cause: Container is starting slowly or SSH failed to start.
Solution: The container may still be starting. Wait and try:
forage-ctl ssh myproject
If it persists, check container logs:
machinectl status forage-myproject
journalctl -M forage-myproject -u sshd
Permission Denied (SSH)
agent@localhost: Permission denied (publickey).
Cause: SSH key not authorized.
Solution: Ensure your key is in the configuration:
services.firefly-forage.authorizedKeys = [
"ssh-ed25519 AAAA..."
];
Or use your user’s keys:
services.firefly-forage.authorizedKeys =
config.users.users.myuser.openssh.authorizedKeys.keys;
Runtime Issues
Agent Authentication Fails
Error: Invalid API key
Cause: Secret file is missing or has wrong content.
Solution:
- Check the secret path in configuration
- Verify the secret file exists and has correct content
- Check sandbox secrets:
forage-ctl exec myproject -- cat /run/secrets/anthropic
“Command not found” for Agent
bash: claude: command not found
Cause: Agent wrapper wasn’t created or PATH issue.
Solution:
- Check the template defines the agent correctly
- Verify the package path exists:
forage-ctl exec myproject -- ls -la /nix/store/*claude*
Workspace Permission Issues
Permission denied: /workspace/file
Cause: UID mismatch between container and host.
Solution: Ensure services.firefly-forage.user matches the owner of workspace files:
services.firefly-forage.user = "myuser"; # Owner of project files
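To confirm a mismatch, compare the numeric owner of the workspace files with the uid of the configured user. A minimal sketch (using a stand-in directory; in practice you would stat the real workspace path):

```shell
# Stand-in for a workspace directory owned by the current user
mkdir -p demo-workspace
# Numeric uid of the files vs. the uid the sandbox user must map to;
# if these differ for your real workspace, writes will fail inside the container
stat -c 'workspace owner uid: %u' demo-workspace
id -u
```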
Nix Commands Fail
error: cannot open connection to remote store 'daemon'
Cause: Nix daemon socket not accessible.
Solution: This usually indicates a container configuration issue. Reset the sandbox:
forage-ctl reset myproject
JJ Workspace Issues
JJ Commands Fail Inside Sandbox
Error: There is no jj repo at the working directory
Cause: The .jj bind mount isn’t working.
Solution:
- Check the workspace has .jj:
forage-ctl exec myproject -- ls -la /workspace/.jj
- The .jj/repo should be a symlink to the source repo. If broken, recreate the sandbox:
forage-ctl down myproject
forage-ctl up myproject -t claude --repo /path/to/repo
Changes Not Visible Between Sandboxes
This is expected behavior. Each JJ workspace has an independent working copy. To share changes:
- Commit in one sandbox:
# In sandbox-a
jj describe -m "My changes"
- Update in another:
# In sandbox-b
jj status # Will show changes from the shared repo
Cleanup Issues
Sandbox Won’t Delete
forage-ctl down myproject
# Hangs or fails
Solution: Force cleanup:
# Stop container manually
sudo machinectl terminate forage-myproject
# Remove metadata
sudo rm /var/lib/firefly-forage/sandboxes/myproject.json
# Clean up secrets
sudo rm -rf /run/forage-secrets/myproject
Orphaned JJ Workspace
If a sandbox was removed but the JJ workspace remains:
# List workspaces
jj workspace list -R /path/to/repo
# Remove orphan
jj workspace forget orphan-name -R /path/to/repo
rm -rf /var/lib/firefly-forage/workspaces/orphan-name
Getting Help
If you can’t resolve an issue:
- Check the GitHub issues
- Gather diagnostic information:
forage-ctl ps
journalctl -u container@forage-NAME -n 100
machinectl status forage-NAME
- Open a new issue with the diagnostic output