ASK KNOX
beta
LESSON 306

Environments and Containers

The execution environment defines what your agent can do and what it can reach — configure it minimally, deliberately, and with security as a first principle.

8 min read

Every Managed Agents session executes inside a container. You do not manage the container runtime — Anthropic does. But you configure the environment that runs inside it: what packages are available, what the agent can reach on the network, what files are present at session start, and what secrets the agent can access.

Environment configuration is where least-privilege discipline actually matters. An agent with an overly permissive environment is not just a security risk — it is a reliability risk. An agent that can reach any URL, run any binary, and access any key is an agent that is harder to reason about, harder to audit, and harder to debug when something goes wrong.

What an Environment Is

An environment is a configuration object attached to an agent that specifies the execution context for its sessions:

import anthropic

client = anthropic.Anthropic()

agent = client.beta.agents.create(
    name="data-processing-agent",
    model="claude-sonnet-4-6",
    system="You are a data processing specialist...",
    tools=[{"type": "bash"}],
    environment={
        "packages": {
            "python": ["pandas==2.2.0", "numpy==1.26.0", "boto3==1.34.0"]
        },
        "network": {
            "allowed_domains": ["s3.amazonaws.com", "api.company.com"]
        },
        "env_vars": {
            "AWS_ACCESS_KEY_ID": "${AWS_ACCESS_KEY_ID}",
            "AWS_SECRET_ACCESS_KEY": "${AWS_SECRET_ACCESS_KEY}",
            "API_BASE_URL": "https://api.company.com"
        }
    }
)

The environment is evaluated when a session starts. Packages are pre-installed. Network rules are applied at the container level. Environment variables are injected before the agent's first action.

Package Configuration

Packages define what software is available to the agent when it uses the bash tool. If you specify no packages, the agent runs in a minimal environment with Python and standard Unix tools.

"packages": {
    "python": [
        "requests==2.31.0",
        "beautifulsoup4==4.12.0",
        "lxml==5.1.0"
    ],
    "system": [
        "jq",
        "curl"
    ]
}

Pin versions. An unpinned package specification gets the latest version at session start time — which may change between sessions. Production agents should produce deterministic outputs from identical inputs. Non-determinism from package version drift is an invisible source of regression.
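One way to catch version drift early is to have the agent's first bash step verify the installed versions against the pins the environment declared. A minimal sketch (the function and its message format are illustrative, not part of the Managed Agents API):

```python
from importlib.metadata import PackageNotFoundError, version

def check_pins(expected: dict[str, str]) -> list[str]:
    """Compare installed package versions against declared pins.

    Returns a list of mismatch messages; an empty list means every
    pin holds and the session can proceed.
    """
    problems = []
    for pkg, want in expected.items():
        try:
            got = version(pkg)
        except PackageNotFoundError:
            problems.append(f"{pkg}: not installed")
            continue
        if got != want:
            problems.append(f"{pkg}: expected {want}, found {got}")
    return problems
```

Running this as a preflight step turns silent version drift into a loud, debuggable failure at session start rather than a subtle regression mid-task.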

Install only what the task requires. A research agent that uses Python only to format output does not need pandas. A data processing agent that reads CSV files does not need boto3 unless it is also writing to S3. Every installed package is a startup cost and an attack surface.

Network Rules

Network rules control what external hosts the agent can reach during a session. By default, Managed Agents sessions have restricted outbound network access. You explicitly allow the domains your agent needs.

"network": {
    "allowed_domains": [
        "api.openmetadata.org",
        "data.company.internal",
        "*.s3.amazonaws.com"
    ]
}

Wildcard domains (*.s3.amazonaws.com) allow all subdomains. Be deliberate about wildcards — *.amazonaws.com is significantly broader than s3.amazonaws.com.
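To make the allowlist semantics concrete, here is a sketch of the matching rule described above: an exact entry matches only that host, while a `*.` entry matches any subdomain beneath it. This is an illustration for reasoning about your allowlist, not Anthropic's actual matcher:

```python
def domain_allowed(host: str, allowlist: list[str]) -> bool:
    """Illustrative allowlist check: exact entries match that host only;
    '*.' entries match any subdomain beneath the suffix."""
    for rule in allowlist:
        if rule.startswith("*."):
            if host.endswith(rule[1:]):  # rule[1:] keeps the leading dot
                return True
        elif host == rule:
            return True
    return False
```

Under these semantics, `bucket.s3.amazonaws.com` matches `*.s3.amazonaws.com`, but an unrelated host does not, which is why a broad wildcard like `*.amazonaws.com` deserves scrutiny before it goes into a production allowlist.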

For research agents using web_search, the search tool manages its own web access. You do not need to allow search domains in the environment's network rules — the tool handles that access itself.

For agents that make direct HTTP requests from bash scripts, the network rules apply directly. If an agent tries to reach a domain not in the allowlist, the connection is rejected.

File System: Ephemeral vs Persistent

By default, a Managed Agents session's file system is ephemeral. Files written during a session are available within that session but do not persist after it ends. The next session starts with a clean file system.

This is correct behavior for most pipeline use cases — each session is independent and self-contained.

For agents that need to persist data across sessions, configure persistent storage:

"storage": {
    "persistent": {
        "path": "/workspace",
        "volume_id": "my-agent-workspace"
    }
}

Persistent storage is shared across all sessions of the agent. Write carefully — one session's writes are visible to the next. Design your agent's file handling to treat the persistent volume like a shared database: read, transform, write with coordination in mind.
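For checkpoint-style state on a persistent volume, a write-then-rename pattern keeps one session's crash from corrupting what the next session reads. A minimal sketch, assuming a checkpoint path on the mounted volume (e.g. under the hypothetical `/workspace` mount from the config above):

```python
import json
import os
import tempfile

def write_checkpoint(state: dict, path: str) -> None:
    """Write to a temp file in the same directory, then rename into place.
    os.replace is atomic on POSIX, so a session that dies mid-write never
    leaves a half-written checkpoint for the next session to read."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)

def read_checkpoint(path: str) -> dict:
    """A missing checkpoint means a fresh start, not an error."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)
```

Treating the checkpoint as the single coordination point — read it at session start, replace it atomically at session end — keeps the shared-database discipline described above manageable.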

For most agents, the right pattern is: read inputs from the session input content, do the work in the ephemeral file system, and write outputs to the session result or to an external system (S3, a database, an API) before the session ends. Do not rely on the persistent file system unless the task genuinely requires it.

Environment Variables and Secrets

Environment variables are the right way to inject secrets and configuration into agent sessions. Do not put secrets in the system prompt — they will appear in session logs. Do not pass them in session input content — they will appear in event streams.

"env_vars": {
    "DATABASE_URL": "${DATABASE_URL}",
    "API_KEY": "${API_KEY}",
    "ENVIRONMENT": "production"
}

The ${VAR_NAME} syntax references a secret from your organization's secret store — not a hardcoded value. Anthropic resolves the reference at session start and injects the resolved value. The secret value is not stored in the agent definition.

One important constraint: environment variables injected into the session are visible to the agent via bash commands (echo $DATABASE_URL). This is expected behavior — the agent needs to use the credentials. But it also means that any code the agent runs, including arbitrary commands through the bash tool, has access to the injected environment. Design accordingly — do not inject secrets the agent does not need.
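A misconfigured secret store reference produces a missing or empty variable at session start, so a useful first action for the agent is to fail fast when an expected secret is absent. A sketch, reporting only variable names and never values (the variable names are examples from the config above):

```python
import os

def require_env(names: list[str], environ=os.environ) -> dict:
    """Fail fast if an expected variable was not injected, reporting
    only the variable names — never the secret values themselves."""
    missing = [n for n in names if not environ.get(n)]
    if missing:
        raise RuntimeError("missing required env vars: " + ", ".join(missing))
    return {n: environ[n] for n in names}
```

Failing at session start with a clear variable name is far easier to debug than an authentication error halfway through a multi-step task.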

Least Privilege in Practice

Configure each agent's environment for exactly its task:

A research and reporting agent that searches the web and writes Markdown reports needs:

  • Packages: minimal Python (formatting only), possibly requests if it makes direct API calls
  • Network: search service domains, any specific APIs it calls
  • No persistent storage — each report is a standalone session output
  • No secrets unless it calls an authenticated API

A data pipeline agent that reads from S3, transforms data, and writes back needs:

  • Packages: pandas, boto3 (pinned)
  • Network: AWS S3 endpoints only
  • Secrets: AWS credentials
  • Possibly persistent storage if it maintains a processing checkpoint

A code analysis agent that clones repositories and runs analysis tools needs:

  • Packages: git, language-specific tools
  • Network: GitHub/GitLab API endpoints, package registry for the target language
  • Secrets: repository access token
  • No persistent storage — analysis is per-session
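As a sketch, the data pipeline profile above expressed in the configuration shape from earlier in this lesson — the domain list, volume name, and checkpoint path are illustrative choices, not required values:

```python
# Environment for the data pipeline agent: exactly what the task
# requires and nothing more.
pipeline_env = {
    "packages": {
        "python": ["pandas==2.2.0", "boto3==1.34.0"],  # pinned versions
    },
    "network": {
        "allowed_domains": ["*.s3.amazonaws.com"],  # AWS S3 endpoints only
    },
    "env_vars": {
        # Secret store references, resolved at session start
        "AWS_ACCESS_KEY_ID": "${AWS_ACCESS_KEY_ID}",
        "AWS_SECRET_ACCESS_KEY": "${AWS_SECRET_ACCESS_KEY}",
    },
    "storage": {
        # Only because this agent maintains a processing checkpoint
        "persistent": {"path": "/workspace", "volume_id": "pipeline-checkpoints"},
    },
}
```

Each key maps directly to one of the bullets above; deleting any entry the task does not need is the least-privilege discipline this lesson describes.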