The Plato Hub CLI provides an interactive terminal interface for creating, managing, and snapshotting virtual machines for simulator development.

Starting the CLI

Launch the interactive CLI from your terminal:
uv run plato hub
Or if installed globally:
plato hub
When you launch the CLI, you’ll see the main menu, which offers:
  • Launch Environment - Create or resume a virtual machine
  • Configuration - View API key and settings
  • Quit - Exit the CLI

Creating a Virtual Machine

Select “Launch Environment” to begin. You have three options for creating a VM:

Option 1: Blank VM (Manual Configuration)

Configure VM resources through interactive prompts. You’ll be prompted for:
  • CPU Count - Number of virtual CPUs (1-2)
  • Memory (MB) - RAM allocation (128-4096 MB)
  • Disk Space (MB) - Storage allocation (1024-102400 MB)
  • Dataset Name - Name for the dataset in plato-config.yml
  • Service Name - Name of the service (e.g., my-app, api-service)
  • Save Configuration - Optionally save to plato-config.yml
After configuration, select “Create!” to provision the VM.

Option 2: From plato-config.yml

If you have a plato-config.yml file in your current directory, the CLI will automatically detect it and offer to use that configuration. This is the recommended approach for existing simulators.
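
If you don’t yet have a config, a minimal skeleton can be bootstrapped from the shell before launching the CLI. This is only a sketch: the field names follow the example configuration later on this page, and the values are placeholders to adjust for your service.

```shell
# Write a minimal plato-config.yml skeleton (placeholder values; edit to match your service)
cat > plato-config.yml <<'EOF'
service: my-app

datasets:
  base:
    compute:
      cpus: 1
      memory: 3072
      disk: 10240
      app_port: 80
      plato_messaging_port: 7000
EOF

# Sanity-check that the file was written
grep -q 'service: my-app' plato-config.yml && echo "plato-config.yml written"
```

On the next launch, the CLI should detect this file in the current directory and offer to use it.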

Option 3: From Existing Simulator

Browse and select from the available simulators. Use the filter to search for a specific simulator, then select one to see its available artifacts (snapshots). Select an artifact to resume from an existing VM snapshot. This allows you to:
  • Resume development from a previous state
  • Test specific versions
  • Deploy production snapshots

Virtual Machine Management

Once your VM is created, you’ll see the Virtual Machine Management screen, which shows:

VM Information

  • Job ID - Unique identifier for the VM
  • Dataset - Which dataset is running
  • URL - Public URL to access your running simulator (https://sims.plato.so/…)
  • Hub Repo - Git repository URL

Connection Info

  • SSH Command - Direct SSH access to the VM
    ssh -F /home/ubuntu/.plato/ssh_11.conf sandbox-11
    

Status

Real-time status updates showing:
  • Docker authentication status
  • Service startup progress
  • Health check results

Available Actions

1. Start Service

Starts the service defined in your plato-config.yml:
  • Reads service configuration from plato-config.yml
  • Launches Docker Compose services
  • Waits for required containers to be healthy
  • Shows progress and health status
When to use: After creating a blank VM or when you need to restart services.
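
The CLI performs the waiting for you; conceptually, “waits for required containers to be healthy” is a polling loop along these lines (wait_healthy and the docker inspect usage are illustrative sketches, not Plato commands):

```shell
# wait_healthy CMD [TIMEOUT]: re-run CMD every second until it prints "healthy",
# or give up after TIMEOUT seconds (default 600). Returns 0 on healthy, 1 on timeout.
wait_healthy() {
  cmd=$1
  timeout=${2:-600}
  waited=0
  until [ "$($cmd)" = "healthy" ]; do
    if [ "$waited" -ge "$timeout" ]; then
      return 1
    fi
    sleep 1
    waited=$((waited + 1))
  done
  return 0
}

# On the VM this could poll a compose container's health status, e.g.:
#   wait_healthy "docker inspect -f {{.State.Health.Status}} backend" 600
```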

2. Start Plato Worker

Starts the Plato worker process for mutation tracking:
  • Connects to database listeners defined in plato-config.yml
  • Monitors database changes in real-time
  • Enables state tracking for snapshots
When to use: After services are running and healthy. Required before creating snapshots.

3. Connect to Cursor/VSCode

Opens your preferred code editor connected to the VM via SSH:
  • Passwordless access using SSH keys
  • Direct file editing in the VM filesystem
  • Terminal access within your editor
  • Remote debugging capabilities
Workflow:
  1. Select your editor (Cursor or VSCode)
  2. Editor opens with remote connection
  3. Edit code directly on the VM
  4. Changes take effect immediately

4. Snapshot VM

Creates a snapshot of the current VM state:
  • Captures entire VM disk state
  • Includes all running services and data
  • Generates a unique artifact ID
  • Snapshot can be used to launch new VMs
Important: Only snapshot after:
  • Services are running and healthy
  • Plato worker is running
  • You’ve tested the simulator works correctly
Snapshots become available in the artifact selector for future use.

5. Advanced Options

The Advanced Options menu provides additional VM management features:

Authenticate ECR

Authenticates Docker with AWS ECR on the VM:
  • Required for pulling private Docker images from ECR
  • Authentication valid for 12 hours
  • Automatically handles credentials
When to use: If your Docker Compose file references ECR images.

Open Proxytunnel

Creates a local port forward to a VM port:
  • Forwards VM port to your localhost
  • Useful for debugging APIs locally
  • Access VM services from your development machine
Use cases:
  • Database debugging (forward port 5432 to localhost)
  • API testing (forward app port to localhost)
  • Direct service access without public URL
Example: Forward VM port 5432 to localhost:5432 for PostgreSQL access.
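
Before opening a tunnel, it helps to confirm that the local port is actually free. A small sketch (port_free is a hypothetical helper built on bash’s /dev/tcp redirection, not part of the CLI):

```shell
# port_free PORT: succeed if nothing is listening on 127.0.0.1:PORT
port_free() {
  ! bash -c "exec 3<>/dev/tcp/127.0.0.1/$1" 2>/dev/null
}

if port_free 5432; then
  echo "local port 5432 is free; safe to forward the VM's PostgreSQL port here"
else
  echo "local port 5432 is busy; choose a different local port for the tunnel"
fi
```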

6. Close VM

Shuts down and cleans up the VM:
  • Stops all running services
  • Releases VM resources
  • Cleans up SSH configuration
Important: Always close VMs when finished to avoid unnecessary resource usage.

Development Workflow

Typical Development Session

1. Launch VM - Create a VM from plato-config.yml or an existing artifact.
2. Start Services - Select “Start Service” to launch your application, then wait for all containers to be healthy.
3. Start Worker - Select “Start Plato Worker” to enable mutation tracking, then wait for the worker to be healthy.
4. Test & Develop:
  • Access the simulator at the provided URL
  • SSH into the VM for debugging
  • Connect your editor for code changes
  • Use a proxytunnel for local debugging
5. Create Snapshot - Once everything works, select “Snapshot VM”. This creates an artifact you can resume from later.
6. Close VM - When finished, select “Close VM” to clean up.

SSH Access

Use the provided SSH command to access the VM:
# SSH into the VM
ssh -F /home/ubuntu/.plato/ssh_11.conf sandbox-11

# Common commands once connected
docker ps                    # Check running containers
docker compose logs -f       # View service logs
systemctl status plato-*     # Check Plato services
cd /opt/plato               # Navigate to simulator code

File Locations in VM

  • /opt/plato - Your simulator code (synced from repository)
  • /home/plato - Plato data (db_signals, logs, etc.)

Configuration Examples

Example Configuration

Here’s a complete example of a plato-config.yml file with detailed comments:
plato-config.yml
service: baserow  # Service name for this simulator

datasets:
  base: &base  # Anchor for sharing config across datasets
    compute: &base_compute
      cpus: 1                    # Number of vCPUs (1-8)
      memory: 3072               # RAM in MB (512-16384)
      disk: 10240                # Disk in MB (1024-102400)
      app_port: 80               # Port your app listens on
      plato_messaging_port: 7000 # Port for Plato worker (keep unless conflicting with an app port)

    metadata: &base_metadata
      name: BaseRow
      description: BaseRow Simulator
      source_code_url: unknown
      start_url: https://sims.plato.so
      license: GPL-3.0
      variables:                 # Login credentials for the simulator
        - name: username
          value: admin
        - name: password
          value: admin123
      flows_path: base/flows.yaml

    services: &base_services
      main_app:                  # Service identifier (choose any name)
        type: docker-compose     # Only docker-compose supported
        file: base/docker-compose.yml
        healthy_wait_timeout: 600  # Seconds to wait for health checks
        required_healthy_containers:
          - backend              # Must match container name in compose file
          # Add multiple containers if needed:
          # - frontend
          # - worker

    listeners: &base_listeners
      db:                        # Listener identifier (choose any name)
        type: db                 # Type: db, file (more types coming)
        db_type: postgresql      # postgresql, mysql, or sqlite
        db_host: 127.0.0.1       # Use 127.0.0.1 or container name
        db_port: 5432
        db_user: baserow
        db_password: baserow
        db_database: baserow
        volumes:                 # Mount volumes for signal exchange
          - /home/plato/db_signals:/tmp/postgres-signals
          # Additional volume examples:
          # - /home/plato/logs:/var/log/app       # Share logs
          # - /home/plato/config:/app/config:ro   # Read-only config
          # - /home/plato/uploads:/app/uploads    # Shared uploads

Multiple Datasets with YAML Anchors

Use YAML anchors (&, *, <<:) to share configuration across datasets:
plato-config.yml
service: myapp  # Service name for this simulator

datasets:
  base: &base                    # Define base config with anchor
    compute: &base_compute
      cpus: 1                    # Number of vCPUs
      memory: 3072
      disk: 10240
      app_port: 80
      plato_messaging_port: 7000 # Port for Plato worker (keep unless conflicting with an app port)
    metadata: &base_metadata
      name: MyApp Base
      start_url: https://sims.plato.so
      variables:
        - name: username
          value: user
        - name: password
          value: pass
      # ... other metadata
    services: &base_services
      # ... services config
    listeners: &base_listeners
      # ... listeners config

  test:
    <<: *base                    # Merge all base config
    compute:
      <<: *base_compute          # Merge base compute
      cpus: 2                    # Override: more vCPUs for test
      memory: 4096               # Override: more memory for test
    metadata:
      <<: *base_metadata
      name: MyApp Test           # Override: different name
      variables:                 # Override: test-specific credentials
        - name: username
          value: test_user
        - name: password
          value: test_pass
        - name: DEBUG
          value: "true"

Using Different Datasets

The CLI will prompt you to select a dataset if multiple are defined in plato-config.yml.
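
As a rough convenience, the dataset names defined in a plato-config.yml can also be listed from the shell. This is a sketch only: list_datasets is a hypothetical helper, and the awk pattern assumes two-space indentation under datasets:, as in the examples above.

```shell
# list_datasets FILE: print top-level dataset names under "datasets:"
# (assumes 2-space indentation for dataset keys, as in the examples above)
list_datasets() {
  awk '/^datasets:/ { in_ds = 1; next }
       in_ds && /^[^ ]/ { in_ds = 0 }
       in_ds && /^  [A-Za-z0-9_-]+:/ { sub(/:.*/, ""); gsub(/ /, ""); print }' "$1"
}

# Example: list_datasets plato-config.yml
```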

Troubleshooting

VM Creation Fails

Check your plato-config.yml for valid resource limits:
  • vCPUs: 1-8
  • Memory: 512-16384 MB
  • Disk: 1024-102400 MB
Also ensure your API key is valid in the Configuration menu.
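
Those bounds can be checked mechanically before launching; for example (validate_compute is a hypothetical sketch, not a CLI command):

```shell
# validate_compute CPUS MEMORY_MB DISK_MB: check values against the documented limits
validate_compute() {
  if [ "$1" -lt 1 ] || [ "$1" -gt 8 ]; then echo "cpus out of range (1-8)"; return 1; fi
  if [ "$2" -lt 512 ] || [ "$2" -gt 16384 ]; then echo "memory out of range (512-16384 MB)"; return 1; fi
  if [ "$3" -lt 1024 ] || [ "$3" -gt 102400 ]; then echo "disk out of range (1024-102400 MB)"; return 1; fi
  echo "ok"
}

validate_compute 1 3072 10240   # prints: ok
```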

Service Won’t Start

  • Check your docker-compose.yml syntax
  • Verify image names and tags are accessible
  • Review service dependencies and health checks
  • Check Docker logs via SSH
  • Authenticate with ECR if using private images (Advanced menu)

SSH Connection Fails

  • Wait a moment for the chisel tunnel to establish
  • Try the SSH command manually from the CLI output
  • Check that the SSH key was generated properly in ~/.plato/

Editor Won’t Connect

  • Ensure you have the code command installed and bound to VS Code or Cursor
  • Install the VS Code Remote-SSH extension
  • Use the SSH config path shown in the CLI
  • Check that the SSH tunnel is active
  • Try connecting manually with the SSH command first

Snapshot Fails

  • Ensure services are running and healthy
  • Verify the Plato worker is running
  • Check that you have sufficient disk space
  • Review service logs for errors

Proxytunnel Not Working

  • Ensure the VM port is accessible
  • Check that no local service is using the same port
  • Verify the service is running on that port in the VM
Remember to close your VM when finished to avoid leaving resources running unnecessarily. Use the “Close VM” action from the management menu.
