Mags Documentation

Mags gives AI agents and developers secure, isolated sandboxes that boot in ~300ms. Workspaces persist automatically to the cloud. Run code, schedule jobs, and let agents work safely.

Available via CLI, Python SDK, Node.js SDK, and REST API.

npm install -g @magpiecloud/mags
mags run 'echo Hello World'

Quickstart

Get your first sandbox running in under a minute.

1. Install the CLI

npm install -g @magpiecloud/mags

2. Authenticate

mags login

Or set a token directly:

export MAGS_API_TOKEN="your-token"

3. Run a command

mags run 'echo Hello World'

Add -w myproject -p to persist files to the cloud between runs.

Authentication

There are two ways to authenticate with Mags.

Browser login

mags login

Opens a browser for Google sign-in. Credentials are stored in ~/.mags/config.json.

API token

export MAGS_API_TOKEN="your-token"

Generate tokens from the token dashboard. Tokens are shown once when created — store them securely.

The environment variable takes precedence over mags login credentials.

Running Scripts

The mags run command submits a script to run in a fresh sandbox.

mags run <script>
    Run a script in a fresh sandbox. Fastest: no workspace, no persistence.
mags run -w <name> <script>
    Run with a named workspace. Data stays on the VM only; deleted after 10 min idle.
mags run -w <name> -p <script>
    Run with a persistent workspace. Files synced to S3 and survive across runs indefinitely.
mags run --url --port <port> <script>
    Request a public HTTPS URL for your sandbox (requires -p).
mags run --no-sleep <script>
    Keep the sandbox running 24/7, never auto-sleep (requires -p).
mags run -e <script>
    Ephemeral: no workspace at all, fastest possible execution.
mags run --base <workspace> <script>
    Use an existing workspace as a read-only base image (OverlayFS).
mags run -f <file> <script>
    Upload file(s) into the sandbox before running (repeatable).

Sandboxes

Create long-lived sandboxes you can SSH into and run commands on.

mags new <name>
    Create a new sandbox. Workspace lives on local disk only.
mags new <name> -p
    Create a sandbox with a persistent workspace, synced to S3.
mags exec <name> <cmd>
    Execute a command on an existing sandbox.
mags ssh <name>
    SSH into a sandbox. Auto-starts if sleeping or stopped.

Management

Commands for listing, inspecting, and controlling jobs and workspaces.

mags list
    List recent jobs.
mags status <id>
    Get job status.
mags logs <id>
    Get job output.
mags stop <id>
    Stop a running job.
mags set <id> [options]
    Update VM settings (e.g. --no-sleep, --sleep).
mags sync <workspace>
    Sync a workspace to the cloud now.
mags url <id> [port]
    Enable public URL access.
mags resize <workspace> --disk <GB>
    Resize a workspace disk.
mags workspace list
    List persistent workspaces.
mags workspace delete <id>
    Delete a workspace and its cloud data.
mags url alias <sub> <workspace>
    Create a stable URL alias.
mags url alias list
    List URL aliases.
mags url alias remove <sub>
    Delete a URL alias.
mags cron add [opts] <script>
    Create a scheduled cron job.
mags cron list
    List cron jobs.
mags cron enable <id>
    Enable a cron job.
mags cron disable <id>
    Disable a cron job.
mags cron remove <id>
    Delete a cron job.

Flags

Additional flags available on mags run and mags new.

-w, --workspace <name>
    Name the workspace. Local only unless -p is also set.
-p, --persistent
    Sync the workspace to S3. Files persist indefinitely.
-n, --name <name>
    Alias for -w.
-e, --ephemeral
    No workspace at all; fastest possible execution.
--base <workspace>
    Use an existing workspace as a read-only base image.
--disk <GB>
    Custom disk size in GB (default: 2).
--no-sleep
    Never auto-sleep (requires -p).
--url
    Request a public HTTPS URL.
--port <port>
    Port to expose via the public URL.
-f, --file <path>
    Upload file(s) into the sandbox before running (repeatable).
--startup-command <cmd>
    Command to run when the sandbox wakes from sleep.

CLI Examples

# Persistent workspace — install packages, then run your app
mags run -w myproject -p 'pip install flask requests'
mags run -w myproject -p 'python3 app.py'

# Golden image — create once, fork many times
mags run -w golden -p 'apk add nodejs npm && npm install -g typescript'
mags sync golden
mags run --base golden -w fork-1 -p 'npm test'

# Interactive sandbox with SSH
mags new dev -p
mags ssh dev
mags exec dev 'node --version'

# Always-on web server with public URL
mags run -w webapp -p --no-sleep --url --port 8080 \
  --startup-command 'python3 -m http.server 8080' \
  'python3 -m http.server 8080'

# Cron job
mags cron add --name backup --schedule "0 0 * * *" \
  -w backups 'tar czf backup.tar.gz /data'

Workspaces

Workspaces let you keep files between sandbox runs. There are two modes:

Local workspace (-w)

Data stays on the VM only, which makes it good for throwaway analysis and short-lived tasks. The workspace is deleted after 10 minutes of idle time.

mags run -w analysis 'python3 analyze.py'

Persistent workspace (-w -p)

Files, packages, and configs are synced to S3 and survive across runs indefinitely, including reboots, sleep, and agent restarts.

mags run -w myproject -p 'pip install flask'
mags run -w myproject -p 'python3 app.py'

Base images (--base)

Clone a persistent workspace as a read-only base for new sandboxes using OverlayFS. The base is never modified — writes go to the overlay.

# Create a golden image
mags run -w golden -p 'apk add nodejs npm && npm install -g typescript'
mags sync golden

# Fork from the golden image
mags run --base golden -w fork-1 -p 'npm test'

Isolation

  • Every sandbox runs in its own isolated environment
  • No cross-user access — workspaces are private
  • Processes, memory, and ports reset between runs
  • Agents can't escape or affect the host

Managing workspaces

mags workspace list              # List persistent workspaces
mags workspace delete myproject  # Delete workspace + cloud data
mags sync myproject              # Force sync to S3 now
mags resize myproject --disk 5   # Resize to 5 GB

Always-On Servers

By default, persistent sandboxes auto-sleep after 10 minutes of inactivity. With the --no-sleep flag, your VM stays running 24/7 — perfect for web servers, workers, and background processes.

# CLI
mags run -w my-api -p --no-sleep --url --port 3000 'node server.js'

# Python
m.run("node server.js",
    workspace_id="my-api", persistent=True, no_sleep=True)

# Node.js
await mags.run('node server.js', {
  workspaceId: 'my-api', persistent: true, noSleep: true,
});

Auto-recovery

Always-on sandboxes are automatically monitored. If the host goes down, your VM is re-provisioned on a healthy server within ~60 seconds — no manual intervention needed.

Requirements

  • Requires -p (persistent) flag
  • VM stays in running state indefinitely
  • Combine with --url to expose a public HTTPS endpoint
  • Use --startup-command to auto-restart your process if the VM recovers
  • Files persist to the cloud via workspace sync

Public URLs

Expose any port on your sandbox as a public HTTPS URL.

# Enable URL when running
mags run -w webapp -p --url --port 8080 'python3 -m http.server 8080'

# Enable URL on an existing job
mags url <job-id> 8080

URL aliases

Create stable, human-readable subdomains that point to a workspace.

mags url alias myapp webapp        # myapp.apps.magpiecloud.com
mags url alias list                 # List all aliases
mags url alias remove myapp         # Delete alias

Cron Jobs

Schedule scripts to run on a recurring basis.

# Create a cron job
mags cron add --name backup --schedule "0 0 * * *" \
  -w backups 'tar czf backup.tar.gz /data'

# Manage cron jobs
mags cron list
mags cron enable <id>
mags cron disable <id>
mags cron remove <id>

Cron expressions use standard 5-field format: minute hour day month weekday.
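
As a quick illustration of the 5-field layout (plain Python, independent of Mags):

```python
# Illustrative only: name the five fields of a standard cron expression.
def cron_fields(expr):
    minute, hour, day, month, weekday = expr.split()
    return {"minute": minute, "hour": hour, "day": day,
            "month": month, "weekday": weekday}

print(cron_fields("0 0 * * *"))
# {'minute': '0', 'hour': '0', 'day': '*', 'month': '*', 'weekday': '*'}
```

So "0 0 * * *" fires at minute 0 of hour 0 (midnight) every day, and "0 9 * * 1" would fire at 09:00 every Monday.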

File Upload

Upload local files into a sandbox before running your script.

# CLI — use -f (repeatable)
mags run -f script.py -f data.csv 'python3 /uploads/script.py'

# Python
file_ids = m.upload_files(["script.py", "data.csv"])
m.run_and_wait("python3 /uploads/script.py", file_ids=file_ids)

# Node.js
const fileId = await mags.uploadFile('script.py');
await mags.runAndWait('python3 /uploads/script.py', { fileIds: [fileId] });

Uploaded files are placed in /uploads/ inside the sandbox.

Python SDK

pip install magpie-mags
export MAGS_API_TOKEN="your-token"

Or pass the token directly: Mags(api_token="...")


Python Methods

run(script, **opts)
    Submit a job (returns immediately).
run_and_wait(script, **opts)
    Submit a job and block until it completes.
new(name, **opts)
    Create a VM sandbox (pass persistent=True for S3).
exec(name, command)
    Run a command on an existing sandbox via SSH.
stop(name_or_id)
    Stop a running job.
find_job(name_or_id)
    Find a job by name or workspace.
url(name_or_id, port)
    Enable public URL access.
resize(workspace, disk_gb)
    Resize a workspace disk.
status(request_id)
    Get job status.
logs(request_id)
    Get job logs.
list_jobs()
    List recent jobs.
update_job(request_id, **opts)
    Update job settings (no_sleep, startup_command).
enable_access(id, port)
    Enable URL or SSH access (low-level).
upload_file(path)
    Upload a file; returns a file ID.
upload_files(paths)
    Upload files; returns file IDs.
list_workspaces()
    List persistent workspaces.
delete_workspace(id)
    Delete a workspace and its cloud data.
sync(request_id)
    Sync a workspace to S3 now.
url_alias_create(sub, ws_id)
    Create a stable URL alias.
url_alias_list()
    List URL aliases.
url_alias_delete(sub)
    Delete a URL alias.
cron_create(**opts)
    Create a cron job.
cron_list()
    List cron jobs.
cron_update(id, **opts)
    Update a cron job.
cron_delete(id)
    Delete a cron job.
usage(window_days)
    Get usage stats.

Python Run Options

workspace_id
    Name the workspace. Local only unless persistent=True.
persistent
    Keep the sandbox alive and sync the workspace to S3. Files persist indefinitely.
base_workspace_id
    Mount a workspace read-only as a base image.
no_sleep
    Never auto-sleep (requires persistent=True).
ephemeral
    No workspace, no sync (fastest).
file_ids
    List of uploaded file IDs to include.
startup_command
    Command to run when the sandbox wakes.

Python Examples

from mags import Mags
m = Mags()  # reads MAGS_API_TOKEN from env

# Run a command and wait
result = m.run_and_wait("echo Hello World")
print(result["status"])    # "completed"

# Local workspace (no S3 sync, good for analysis)
m.run_and_wait("python3 analyze.py", workspace_id="analysis")

# Persistent workspace (synced to S3)
m.run("pip install flask",
    workspace_id="my-project", persistent=True)

# Create a sandbox (local disk)
m.new("my-project")

# Create with S3 persistence
m.new("my-project", persistent=True)

# Execute commands on existing sandbox
result = m.exec("my-project", "ls -la /root")
print(result["output"])

# Public URL
m.new("webapp", persistent=True)
info = m.url("webapp", port=3000)
print(info["url"])  # https://xyz.apps.magpiecloud.com

# Always-on sandbox (never auto-sleeps)
m.run("python3 worker.py",
    workspace_id="worker", persistent=True, no_sleep=True)

# Upload files
file_ids = m.upload_files(["script.py", "data.csv"])
m.run_and_wait("python3 /uploads/script.py", file_ids=file_ids)

# Workspaces
workspaces = m.list_workspaces()
m.delete_workspace("myproject")

# Cron
m.cron_create(name="backup", cron_expression="0 0 * * *",
    script="tar czf backup.tar.gz /data", workspace_id="backups")
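
Since run() returns immediately, you sometimes want to poll status() yourself for more control than run_and_wait() offers. A minimal sketch, assuming status() returns a dict with a "status" key and that "completed" and "failed" are the terminal values (check the SDK for the exact field names):

```python
import time

def wait_for_job(client, request_id, poll_seconds=2.0, timeout=600.0):
    """Poll a job until it reaches a terminal state, then return its logs.

    Assumes client.status() returns a dict with a "status" key and that
    "completed" and "failed" are terminal; the real SDK's names may differ.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = client.status(request_id)["status"]
        if state in ("completed", "failed"):
            return client.logs(request_id)
        time.sleep(poll_seconds)
    raise TimeoutError(f"job {request_id} did not finish within {timeout}s")
```

Usage would look like wait_for_job(m, job["request_id"]), assuming run() returns the request id under that (hypothetical) key.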

Node.js SDK

npm install @magpiecloud/mags
export MAGS_API_TOKEN="your-token"

Or pass the token directly: new Mags({ apiToken: "..." })


Node.js Methods

run(script, opts)
    Submit a job (returns immediately).
runAndWait(script, opts)
    Submit a job and block until it completes.
new(name, opts)
    Create a VM sandbox (add persistent: true for S3).
exec(nameOrId, command)
    Run a command on an existing sandbox via SSH.
stop(nameOrId)
    Stop a running job.
findJob(nameOrId)
    Find a job by name or workspace.
url(nameOrId, port)
    Enable public URL access.
status(requestId)
    Get job status.
logs(requestId)
    Get job logs.
list()
    List recent jobs.
updateJob(requestId, opts)
    Update job settings (noSleep, startupCommand).
enableAccess(requestId, port)
    Enable URL or SSH access.
resize(workspace, diskGb)
    Resize a workspace disk.
uploadFile(path)
    Upload a file; returns a file ID.
uploadFiles(paths)
    Upload files; returns file IDs.
sync(requestId)
    Sync a workspace to S3 now.
listWorkspaces()
    List persistent workspaces.
deleteWorkspace(id)
    Delete a workspace and its cloud data.
urlAliasCreate(sub, wsId)
    Create a stable URL alias.
urlAliasList()
    List URL aliases.
urlAliasDelete(sub)
    Delete a URL alias.
cronCreate(opts)
    Create a cron job.
cronList()
    List cron jobs.
cronDelete(id)
    Delete a cron job.
usage(opts)
    Get usage stats.

Node.js Run Options

workspaceId
    Name the workspace. Local only unless persistent: true.
persistent
    Keep the sandbox alive and sync the workspace to S3. Files persist indefinitely.
baseWorkspaceId
    Mount a workspace read-only as a base image.
noSleep
    Never auto-sleep (requires persistent: true).
ephemeral
    No workspace, no sync (fastest).
fileIds
    Array of uploaded file IDs to include.
startupCommand
    Command to run when the sandbox wakes.

Node.js Examples

const Mags = require('@magpiecloud/mags');
const mags = new Mags({ apiToken: process.env.MAGS_API_TOKEN });

// Run a command and wait
const result = await mags.runAndWait('echo Hello World');
console.log(result.status);   // "completed"

// Local workspace (no S3 sync, good for analysis)
await mags.runAndWait('python3 analyze.py', { workspaceId: 'analysis' });

// Persistent workspace (synced to S3)
await mags.runAndWait('pip install flask', { workspaceId: 'myproject', persistent: true });
await mags.runAndWait('python3 app.py', { workspaceId: 'myproject', persistent: true });

// Base image
await mags.runAndWait('npm test', { baseWorkspaceId: 'golden' });
await mags.runAndWait('npm test', { baseWorkspaceId: 'golden', workspaceId: 'fork-1', persistent: true });

// Create a sandbox
await mags.new('dev', { persistent: true });

// SSH access
const job = await mags.run('sleep 3600', { workspaceId: 'dev', persistent: true });
const ssh = await mags.enableAccess(job.requestId, 22);
console.log(`ssh root@${ssh.sshHost} -p ${ssh.sshPort}`);

// Public URL
const webJob = await mags.run('python3 -m http.server 8080', {
  workspaceId: 'webapp', persistent: true,
  startupCommand: 'python3 -m http.server 8080',
});
const { url } = await mags.url('webapp', 8080);
console.log(url);

// Always-on sandbox (never auto-sleeps)
await mags.run('python3 worker.py', {
  workspaceId: 'worker', persistent: true, noSleep: true,
});

// Upload files
const fileId = await mags.uploadFile('script.py');
await mags.runAndWait('python3 /uploads/script.py', { fileIds: [fileId] });

// Cron
await mags.cronCreate({
  name: 'backup', cronExpression: '0 0 * * *',
  script: 'tar czf backup.tar.gz /data', workspaceId: 'backups',
});