Secure sandboxes that boot in milliseconds. Your data stays.

Give your AI agents an instant home. Every sandbox is completely isolated, boots in ~300ms, and syncs your files to the cloud automatically. CLI, Python, and Node.js — pick your tool.

mags run 'echo Hello World'
mags run -w myproject -p 'pip install flask'
mags new dev && mags ssh dev

Run a script, persist your files to the cloud, then jump in with SSH.

from mags import Mags

m = Mags()
result = m.run_and_wait("echo Hello World")
print(result["status"])  # "completed"

pip install magpie-mags

const Mags = require('@magpiecloud/mags');
const mags = new Mags();

const result = await mags.runAndWait('echo Hello World');
console.log(result.logs);

npm install @magpiecloud/mags

Quickstart

Three steps. Your first sandbox in under a minute.

CLI

1. Install

npm install -g @magpiecloud/mags

2. Authenticate

mags login

Or set a token directly:

export MAGS_API_TOKEN="your-token"

3. Run

mags run 'echo Hello World'

Add -w myproject -p to persist files to the cloud between runs.

Python

1. Install

pip install magpie-mags

2. Set your token

export MAGS_API_TOKEN="your-token"

Or pass it directly: Mags(api_token="...")

3. Run

from mags import Mags

m = Mags()
result = m.run_and_wait("echo Hello World")
for log in result["logs"]:
    print(log["message"])

Node.js

1. Install

npm install @magpiecloud/mags

2. Set your token

export MAGS_API_TOKEN="your-token"

Or pass it directly: new Mags({ apiToken: "..." })

3. Run

const Mags = require('@magpiecloud/mags');
const mags = new Mags();

const result = await mags.runAndWait('echo Hello');
console.log(result.status); // "completed"

Usage Patterns

Run agents. Schedule jobs. Deploy apps. All sandboxed.

CLI

Install

npm install -g @magpiecloud/mags
mags login

Running Scripts

mags run <script>
    Run a script in a fresh sandbox. Fastest — no workspace, no persistence.
mags run -w <name> <script>
    Run with a named workspace. Data stays on the VM only — deleted after 10 min idle.
mags run -w <name> -p <script>
    Run with a persistent workspace. Files synced to S3 and survive across runs indefinitely.
mags run --url --port <port> <script>
    Request a public HTTPS URL for your sandbox (requires -p).
mags run --no-sleep <script>
    Keep the sandbox running 24/7, never auto-sleep (requires -p).
mags run -e <script>
    Ephemeral — no workspace at all, fastest possible execution.
mags run --base <workspace> <script>
    Use an existing workspace as a read-only base image (OverlayFS).
mags run -f <file> <script>
    Upload file(s) into the sandbox before running (repeatable).

Sandboxes

mags new <name>
    Create a new sandbox. Workspace lives on local disk only.
mags new <name> -p
    Create a sandbox with a persistent workspace — synced to S3.
mags exec <name> <cmd>
    Execute a command on an existing sandbox.
mags ssh <name>
    SSH into a sandbox. Auto-starts it if sleeping or stopped.

Management

mags list
    List recent jobs
mags status <id>
    Get job status
mags logs <id>
    Get job output
mags stop <id>
    Stop a running job
mags set <id> [options]
    Update VM settings (e.g. --no-sleep, --sleep)
mags sync <workspace>
    Sync workspace to the cloud now
mags url <id> [port]
    Enable public URL access
mags resize <workspace> --disk <GB>
    Resize workspace disk
mags workspace list
    List persistent workspaces
mags workspace delete <id>
    Delete workspace + cloud data
mags url alias <sub> <workspace>
    Create a stable URL alias
mags url alias list
    List URL aliases
mags url alias remove <sub>
    Delete a URL alias
mags cron add [opts] <script>
    Create a scheduled cron job
mags cron list
    List cron jobs
mags cron enable <id>
    Enable a cron job
mags cron disable <id>
    Disable a cron job
mags cron remove <id>
    Delete a cron job

Additional Flags

-n, --name <name>
    Alias for -w
-e, --ephemeral
    No workspace at all, fastest possible execution
--base <workspace>
    Use an existing workspace as a read-only base image
--disk <GB>
    Custom disk size in GB (default: 2)
--startup-command <cmd>
    Command to run when the sandbox wakes from sleep

Examples

# Persistent workspace — install packages, then run your app
mags run -w myproject -p 'pip install flask requests'
mags run -w myproject -p 'python3 app.py'

# Golden image — create once, fork many times
mags run -w golden -p 'apk add nodejs npm && npm install -g typescript'
mags sync golden
mags run --base golden -w fork-1 -p 'npm test'

# Interactive sandbox with SSH
mags new dev -p
mags ssh dev
mags exec dev 'node --version'

# Always-on web server with public URL
mags run -w webapp -p --no-sleep --url --port 8080 \
  --startup-command 'python3 -m http.server 8080' \
  'python3 -m http.server 8080'

# Cron job
mags cron add --name backup --schedule "0 0 * * *" \
  -w backups 'tar czf backup.tar.gz /data'

Python SDK

Install

pip install magpie-mags
export MAGS_API_TOKEN="your-token"

Methods

run(script, **opts)
    Submit a job (returns immediately)
run_and_wait(script, **opts)
    Submit + block until complete
new(name, **opts)
    Create a VM sandbox (pass persistent=True for S3)
exec(name, command)
    Run a command on an existing sandbox via SSH
stop(name_or_id)
    Stop a running job
find_job(name_or_id)
    Find a job by name or workspace
url(name_or_id, port)
    Enable public URL access
resize(workspace, disk_gb)
    Resize workspace disk
status(request_id)
    Get job status
logs(request_id)
    Get job logs
list_jobs()
    List recent jobs
update_job(request_id, **opts)
    Update job settings (no_sleep, startup_command)
enable_access(id, port)
    Enable URL or SSH access (low-level)
upload_file(path)
    Upload a file, returns a file ID
upload_files(paths)
    Upload files, returns file IDs
list_workspaces()
    List persistent workspaces
delete_workspace(id)
    Delete workspace + cloud data
sync(request_id)
    Sync workspace to S3 now
url_alias_create(sub, ws_id)
    Create a stable URL alias
url_alias_list()
    List URL aliases
url_alias_delete(sub)
    Delete a URL alias
cron_create(**opts)
    Create a cron job
cron_list()
    List cron jobs
cron_update(id, **opts)
    Update a cron job
cron_delete(id)
    Delete a cron job
usage(window_days)
    Get usage stats

Run Options

workspace_id
    Name the workspace. Local only unless persistent=True.
persistent
    Keep the sandbox alive, sync the workspace to S3. Files persist indefinitely.
base_workspace_id
    Mount a workspace read-only as a base image
no_sleep
    Never auto-sleep (requires persistent=True)
ephemeral
    No workspace, no sync (fastest)
file_ids
    List of uploaded file IDs to include
startup_command
    Command to run when the sandbox wakes

Examples

from mags import Mags
m = Mags()  # reads MAGS_API_TOKEN from env

# Run a command and wait
result = m.run_and_wait("echo Hello World")
print(result["status"])    # "completed"

# Local workspace (no S3 sync, good for analysis)
m.run_and_wait("python3 analyze.py", workspace_id="analysis")

# Persistent workspace (synced to S3)
m.run("pip install flask",
    workspace_id="my-project", persistent=True)

# Create a sandbox (local disk)
m.new("my-project")

# Create with S3 persistence
m.new("my-project", persistent=True)

# Execute commands on existing sandbox
result = m.exec("my-project", "ls -la /root")
print(result["output"])

# Public URL
m.new("webapp", persistent=True)
info = m.url("webapp", port=3000)
print(info["url"])  # https://xyz.apps.magpiecloud.com

# Always-on sandbox (never auto-sleeps)
m.run("python3 worker.py",
    workspace_id="worker", persistent=True, no_sleep=True)

# Upload files
file_ids = m.upload_files(["script.py", "data.csv"])
m.run_and_wait("python3 /uploads/script.py", file_ids=file_ids)

# Workspaces
workspaces = m.list_workspaces()
m.delete_workspace("myproject")

# Cron
m.cron_create(name="backup", cron_expression="0 0 * * *",
    script="tar czf backup.tar.gz /data", workspace_id="backups")

View on PyPI →

Node.js SDK

Install

npm install @magpiecloud/mags
export MAGS_API_TOKEN="your-token"

Methods

run(script, opts)
    Submit a job (returns immediately)
runAndWait(script, opts)
    Submit + block until complete
new(name, opts)
    Create a VM sandbox (add persistent: true for S3)
exec(nameOrId, command)
    Run a command on an existing sandbox via SSH
stop(nameOrId)
    Stop a running job
findJob(nameOrId)
    Find a job by name or workspace
url(nameOrId, port)
    Enable public URL access
status(requestId)
    Get job status
logs(requestId)
    Get job logs
list()
    List recent jobs
updateJob(requestId, opts)
    Update job settings (noSleep, startupCommand)
enableAccess(requestId, port)
    Enable URL or SSH access
resize(workspace, diskGb)
    Resize workspace disk
uploadFile(path)
    Upload a file, returns a file ID
uploadFiles(paths)
    Upload files, returns file IDs
sync(requestId)
    Sync workspace to S3 now
listWorkspaces()
    List persistent workspaces
deleteWorkspace(id)
    Delete workspace + cloud data
urlAliasCreate(sub, wsId)
    Create a stable URL alias
urlAliasList()
    List URL aliases
urlAliasDelete(sub)
    Delete a URL alias
cronCreate(opts)
    Create a cron job
cronList()
    List cron jobs
cronDelete(id)
    Delete a cron job
usage(opts)
    Get usage stats

Run Options

workspaceId
    Name the workspace. Local only unless persistent: true.
persistent
    Keep the sandbox alive, sync the workspace to S3. Files persist indefinitely.
baseWorkspaceId
    Mount a workspace read-only as a base image
noSleep
    Never auto-sleep (requires persistent: true)
ephemeral
    No workspace, no sync (fastest)
fileIds
    Array of uploaded file IDs to include
startupCommand
    Command to run when the sandbox wakes

Examples

const Mags = require('@magpiecloud/mags');
const mags = new Mags({ apiToken: process.env.MAGS_API_TOKEN });

// Run a command and wait
const result = await mags.runAndWait('echo Hello World');
console.log(result.status);   // "completed"

// Local workspace (no S3 sync, good for analysis)
await mags.runAndWait('python3 analyze.py', { workspaceId: 'analysis' });

// Persistent workspace (synced to S3)
await mags.runAndWait('pip install flask', { workspaceId: 'myproject', persistent: true });
await mags.runAndWait('python3 app.py', { workspaceId: 'myproject', persistent: true });

// Base image
await mags.runAndWait('npm test', { baseWorkspaceId: 'golden' });
await mags.runAndWait('npm test', { baseWorkspaceId: 'golden', workspaceId: 'fork-1', persistent: true });

// Create a sandbox
await mags.new('dev', { persistent: true });

// SSH access
const job = await mags.run('sleep 3600', { workspaceId: 'dev', persistent: true });
const ssh = await mags.enableAccess(job.requestId, 22);
console.log(`ssh root@${ssh.sshHost} -p ${ssh.sshPort}`);

// Public URL
const webJob = await mags.run('python3 -m http.server 8080', {
  workspaceId: 'webapp', persistent: true,
  startupCommand: 'python3 -m http.server 8080',
});
const { url } = await mags.url('webapp', 8080);
console.log(url);

// Always-on sandbox (never auto-sleeps)
await mags.run('python3 worker.py', {
  workspaceId: 'worker', persistent: true, noSleep: true,
});

// Upload files
const fileId = await mags.uploadFile('script.py');
await mags.runAndWait('python3 /uploads/script.py', { fileIds: [fileId] });

// Cron
await mags.cronCreate({
  name: 'backup', cronExpression: '0 0 * * *',
  script: 'tar czf backup.tar.gz /data', workspaceId: 'backups',
});

View on npm →

Persistent Workspaces

Your files survive. Every sandbox starts clean.

Backed up to object storage

  • Use -w for a local workspace (no cloud sync, good for throwaway analysis)
  • Add -p to sync to S3 — files, packages, and configs persist indefinitely
  • Clone a workspace as a read-only base for new sandboxes
  • Survives reboots, sleep, and agent restarts (with -p)

Fully isolated

  • Every sandbox runs in its own isolated environment
  • No cross-user access — workspaces are private
  • Processes, memory, and ports reset between runs
  • Agents can't escape or affect the host

Always-On Servers

Keep your sandboxes running forever.

Never auto-sleep

By default, persistent sandboxes auto-sleep after 10 minutes of inactivity to save resources. With the --no-sleep flag, your VM stays running 24/7 — perfect for web servers, workers, and background processes.

# CLI
mags run -w my-api -p --no-sleep --url --port 3000 'node server.js'

# Python
m.run("node server.js",
    workspace_id="my-api", persistent=True, no_sleep=True)

# Node.js
await mags.run('node server.js', {
  workspaceId: 'my-api', persistent: true, noSleep: true,
});

Auto-recovery

Always-on sandboxes are automatically monitored. If the host goes down, your VM is re-provisioned on a healthy server within ~60 seconds — no manual intervention needed.
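A client talking to a sandbox during that recovery window can simply retry until the VM is back. The helper below is not part of any mags SDK — it is a generic backoff sketch, where `get_status` stands in for whatever status call you use (for example, `lambda: m.status(request_id)["status"]` with the Python SDK):

```python
import time

def wait_until_running(get_status, timeout=120.0, base_delay=1.0, max_delay=15.0):
    """Poll get_status() until it reports "running", backing off exponentially.

    get_status is any zero-argument callable returning a status string.
    Returns True once the sandbox is running, False if timeout elapses first.
    """
    deadline = time.monotonic() + timeout
    delay = base_delay
    while time.monotonic() < deadline:
        if get_status() == "running":
            return True
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # exponential backoff, capped
    return False
```

With a ~60 second re-provisioning window, the default two-minute timeout leaves comfortable headroom.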

How it works

  • Requires -p (persistent) flag
  • VM stays in running state indefinitely
  • Combine with --url to expose a public HTTPS endpoint
  • Use --startup-command to auto-restart your process if the VM recovers
  • Files persist to the cloud via workspace sync

SDKs + API

Let your agents spin up sandboxes programmatically.

Python

Install

pip install magpie-mags

Quick example

from mags import Mags

m = Mags()  # reads MAGS_API_TOKEN from env

# Create a sandbox, run commands on it
m.new("demo")  # local disk; use persistent=True for S3
result = m.exec("demo", "uname -a")
print(result["output"])

# Or run a one-shot script
result = m.run_and_wait("echo Hello!")
print(result["status"])  # "completed"

Available methods

  • run(script, **opts) — submit a job
  • run_and_wait(script, **opts) — submit + block
  • new(name, **opts) — create VM sandbox
  • exec(name, command) — run on existing sandbox
  • stop(name_or_id) — stop a job
  • find_job(name_or_id) — find by name/workspace
  • url(name_or_id, port) — enable public URL
  • resize(workspace, disk_gb) — resize disk
  • status(id) / logs(id) / list_jobs()
  • upload_file(path) / upload_files(paths)
  • list_workspaces() / delete_workspace(id)
  • sync(id) — sync workspace to S3
  • cron_create(**opts) / cron_list() / cron_delete(id)

PyPI →

Node.js

Install

npm install @magpiecloud/mags

Quick example

const Mags = require('@magpiecloud/mags');

const mags = new Mags({
  apiToken: process.env.MAGS_API_TOKEN,
});

const result = await mags.runAndWait('echo Hello World');
console.log(result.status);
console.log(result.logs);

Available methods

  • run(script, opts) — submit a job
  • runAndWait(script, opts) — submit + block
  • new(name, opts) — create VM sandbox
  • exec(nameOrId, command) — run on existing sandbox
  • stop(nameOrId) — stop a job
  • findJob(nameOrId) — find by name/workspace
  • url(nameOrId, port) — enable public URL
  • status(id) / logs(id) / list()
  • enableAccess(requestId, port) — URL or SSH
  • uploadFiles(paths) — upload files
  • listWorkspaces() / deleteWorkspace(id)
  • sync(id) — sync workspace to S3
  • cronCreate(opts) / cronList() / cronDelete(id)

npm →

Submit a job
REST API

curl -X POST https://api.magpiecloud.com/api/v1/mags-jobs \
  -H "Authorization: Bearer $MAGS_API_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "script": "echo Hello World",
    "type": "inline",
    "workspace_id": "myproject"
  }'

Endpoints

  • POST /mags-jobs — submit job
  • GET /mags-jobs — list jobs
  • GET /mags-jobs/:id/status — status
  • GET /mags-jobs/:id/logs — logs
  • POST /mags-jobs/:id/access — URL/SSH
  • POST /mags-jobs/:id/stop — stop job
  • POST /mags-jobs/:id/sync — sync workspace
  • PATCH /mags-jobs/:id — update
  • POST /mags-files — upload file
  • GET /mags-workspaces — list workspaces
  • DELETE /mags-workspaces/:id — delete workspace
  • POST /mags-url-aliases — create alias
  • GET /mags-url-aliases — list aliases
  • DELETE /mags-url-aliases/:sub — delete alias
  • POST /mags-cron — create cron
  • GET /mags-cron — list cron
  • PATCH /mags-cron/:id — update cron
  • DELETE /mags-cron/:id — delete cron

Full API reference →

Resources

Everything you need to get started.

Login

Sign in with Google or email to access jobs and tokens.

Open login

Usage + jobs

View usage summaries and recent jobs.

Open usage

API tokens

Create and manage tokens for CLI and SDK access.

Open tokens

Claude skill

Install the Claude Code skill to run sandboxes from Claude.

Open Claude skill

Cookbook

Copy ready-to-run recipes for common workflows.

Open cookbook