A small template for training and serving DreamerV3 world models on Modal. Define a YAML, run one command, and DreamerV3 trains in a remote container — checkpoints persist, inference is one call away. The source lives on GitHub if you want to poke around.
```shell
$ pip install -e .
$ mirage train --config configs/dreamerv3_cartpole.yaml
$ mirage play --config configs/dreamerv3_cartpole.yaml --episodes 5
$ mirage dream --config configs/dreamerv3_cartpole.yaml --episodes 1
```
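The three subcommands above share a `--config` flag, and `play`/`dream` take an episode count. A minimal sketch of that CLI surface with `argparse` subcommands might look like the following; this is an illustrative reconstruction, not mirage's actual entry point:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Hypothetical parser mirroring the train/play/dream commands shown above."""
    parser = argparse.ArgumentParser(prog="mirage")
    sub = parser.add_subparsers(dest="command", required=True)
    for name in ("train", "play", "dream"):
        cmd = sub.add_parser(name)
        cmd.add_argument("--config", required=True, help="path to a YAML run config")
        if name in ("play", "dream"):
            # rollout commands additionally take an episode count
            cmd.add_argument("--episodes", type=int, default=1)
    return parser

# Parse one of the example invocations from the quickstart
args = build_parser().parse_args(
    ["play", "--config", "configs/dreamerv3_cartpole.yaml", "--episodes", "5"]
)
```

Each subcommand would then dispatch to the corresponding Modal function with the parsed config path.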
```yaml
project:
  name: dreamerv3-mirage
  run_name: dreamerv3-cartpole
  seed: 42

modal:
  gpu: A10G
  timeout_seconds: 28800
  volume_name: worldmodel-checkpoints

environment:
  name: dmc-Cartpole-balance
  observation_type: image
  max_episode_steps: 1000
  image_size: 64

dreamer:
  repo_url: https://github.com/burchim/DreamerV3-PyTorch.git
  repo_ref: master
  repo_dir: /opt/DreamerV3-PyTorch
  config_path: configs/DreamerV3/dreamer_v3.py
  env_name: dmc-Cartpole-balance
  mode: training
  log_dreamer_metrics: true
  override_config:
    num_envs: 4
    epochs: 10
    epoch_length: 2000
    batch_size: 16  # was 16 — use the GPU
    L: 64           # was 32 — longer sequences
```
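The `override_config` section layers run-specific values on top of the upstream repo's defaults. A minimal sketch of how such a merge could work is below; the function name and the default values are illustrative assumptions, not mirage's actual implementation:

```python
def apply_overrides(defaults: dict, overrides: dict) -> dict:
    """Return a new config dict where override values take precedence.

    Nested dict sections are merged recursively; scalar values are replaced.
    This is an illustrative sketch, not mirage's real merge logic.
    """
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = apply_overrides(merged[key], value)
        else:
            merged[key] = value
    return merged

# Hypothetical upstream defaults, overridden by the YAML above
defaults = {"num_envs": 1, "epochs": 5, "batch_size": 16, "L": 32}
overrides = {"num_envs": 4, "epochs": 10, "epoch_length": 2000, "batch_size": 16, "L": 64}
config = apply_overrides(defaults, overrides)
```

Keys absent from the defaults (here `epoch_length`) simply pass through, so new upstream options can be set without touching the merge code.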
`dmc-Cartpole-balance`, recorded by `mirage play` on Modal and downloaded locally. Episode return: 969.13.