Monorepo for AI Capstone. Two workflows under one uv workspace:
- UMI — real-world data collection.
- Isaac Lab / Isaac Sim — robot motion generation, synthetic data creation, policy training/rollout via LeRobot.
Repo root is the uv workspace root. Install:

```bash
uv sync
```

| Stage | Environment | Why |
|---|---|---|
| Data creation (teleop, FSM datagen, UMI SLAM) | Docker container | Isaac Sim / Isaac Lab / Vulkan stack pinned in the image |
| Rollout (policy inference in sim) | Docker container | Same Isaac Sim runtime as datagen |
| Training (`lerobot-train`, `accelerate launch`) | Host machine | Container adds I/O and GPU passthrough overhead; train natively for throughput |
Use `make launch-isaaclab` for container work. Run training directly against the host uv env (`uv sync`, then `lerobot-train ...`).
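A sketch of the split; the training flag values are placeholders, and the full training flags are listed in the training steps below:

```bash
# Data creation / rollout: inside the pinned container
make launch-isaaclab

# Training: natively on the host workspace env
uv sync
lerobot-train --policy.type <type> --dataset.repo_id <repo> --output_dir <dir> --policy.device cuda
```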
- `packages/umi/` – UMI package
- `packages/simulator/` – simulator config layer over upstream leisaac
- `scripts/` – teleoperation, datagen, evaluation scripts
- `umi_pipeline_configs/` – UMI SLAM pipeline configs
- `dependencies/` – vendored submodules (Isaac Lab, etc.)
- `data/`, `datasets/`, `checkpoints/` – runtime artifacts
UMI: real-world data collection. A SLAM reconstruction pipeline runs over a recorded session.

```bash
uv run umi run-slam-pipeline <pipeline-config> --session-dir <session> --task <task>
```

Simulator: robot motion generation and synthetic data creation. Wraps upstream leisaac with project task configs in `packages/simulator/`.
New to the task config layout? See the Isaac Lab + LeIsaac configuration tutorial, which walks through the single-arm Franka template, the cup-stacking task, UMI anchor pose loading, and a recipe for adding a new task.
Docker: a CUDA 12.8 / Ubuntu 22.04 image. Installs Isaac Sim 5.1.0, Isaac Lab (submodule), the simulator package, and LeRobot.
Driven by the Makefile. The image tag is set via `IMAGE` (default `leisaac-isaaclab:latest`), the Dockerfile via `DOCKERFILE`.
| Target | Purpose |
|---|---|
| `make submodules` | Init/update git submodules (`dependencies/IsaacLab`, etc.) |
| `make submodules-pull` | Pull latest submodule revisions |
| `make install` | `submodules` + `uv sync` (host workspace install) |
| `make install-dev` | `submodules` + `uv sync --extra dev` |
| `make build-isaaclab` | Init submodules, build the Docker image |
| `make launch-isaaclab` | Build, then launch the container with GPU + X11 + workspace bind-mount and an NVIDIA Vulkan ICD probe |
| `make check-isaaclab-gpu` | Verify NVIDIA Vulkan ICD, `nvidia-smi`, GLU/Xt libs, and torch CUDA visibility inside the image |
| `make test` | Run repo layout tests |
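Both variables can be overridden per invocation with standard make variable syntax; the tag and Dockerfile path below are hypothetical examples:

```bash
# Build with a custom image tag and an alternate Dockerfile (values are examples)
make build-isaaclab IMAGE=leisaac-isaaclab:dev DOCKERFILE=Dockerfile.dev
```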
Typical first-run flow:

```bash
make submodules
make build-isaaclab
make launch-isaaclab
```

The Isaac Lab submodule must be initialized before the build; the Dockerfile fails fast otherwise.
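Once the image is built, the GPU / Vulkan stack can be sanity-checked with the target from the table above:

```bash
make check-isaaclab-gpu
```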
- Define task. Task configs live in `packages/simulator/`.
- Keyboard teleoperation. Run `scripts/environments/teleoperation/teleop_se3_agent.py` with a task ID, device, and number of envs.
- FSM planner datagen. Run `scripts/datagen/state_machine/generate.py` with a task, number of demos, recorder flags, and a target dataset repo ID. (See the sketch after this list.)
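A sketch of the two entry points. The scripts define their own CLI, so the flag names below are illustrative placeholders, not the authoritative interface:

```bash
# Keyboard teleoperation (flag names illustrative)
python scripts/environments/teleoperation/teleop_se3_agent.py \
    --task <task-id> --teleop_device keyboard --num_envs 1

# FSM planner datagen (flag names illustrative)
python scripts/datagen/state_machine/generate.py \
    --task <task-id> --num_demos 50 --dataset_repo_id <user>/<dataset-repo>
```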
Dataset transfer and training run on the host machine (training inside the container is significantly slower). Upload generated demos out of the container, then train on the host; an end-to-end sketch follows this list.

- Upload generated dataset. From inside the container after datagen: `hf upload <dataset-repo> <local-dataset-dir> --repo-type dataset --revision <tag>`.
- Download dataset on host. `hf download <dataset-repo> --repo-type dataset --local-dir <dir> --revision <rev>`.
- Train (single GPU, host). `lerobot-train` with `--policy.type`, `--dataset.repo_id`, `--output_dir`, `--policy.device`, etc.
- Train (multi-GPU, host). `accelerate launch --multi_gpu --num_processes=N $(which lerobot-train) <args>`.
- Upload checkpoints. `hf upload <model-repo> <local-ckpt-dir> --revision <tag>`.
- Download checkpoints (back into the container for rollout). `hf download <model-repo> --local-dir <dir> --revision <tag>`.
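Put together, a host-side pass might look like this; repo IDs, directories, revisions, and the policy type are placeholders:

```bash
# Fetch the dataset produced in the container
hf download <user>/<dataset-repo> --repo-type dataset --local-dir datasets/<task> --revision v1

# Single-GPU training (policy type is a placeholder)
lerobot-train \
    --policy.type act \
    --dataset.repo_id <user>/<dataset-repo> \
    --output_dir checkpoints/<task> \
    --policy.device cuda

# Multi-GPU variant (N = number of GPUs; same training args)
accelerate launch --multi_gpu --num_processes=N $(which lerobot-train) <args>

# Publish checkpoints so the container can pull them for rollout
hf upload <user>/<model-repo> checkpoints/<task> --revision v1
```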
Run a trained policy in sim. Entry point: `scripts/evaluation/policy_inference_sync.py`. Flags: `--task`, `--policy_type`, `--policy_checkpoint_path`, `--policy_action_horizon`, `--device`, `--enable_cameras`.
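For example, using the flags listed above (all values are placeholders; the action horizon is an arbitrary example):

```bash
# Inside the container, after downloading checkpoints
python scripts/evaluation/policy_inference_sync.py \
    --task <task-id> \
    --policy_type <type> \
    --policy_checkpoint_path checkpoints/<task>/<ckpt> \
    --policy_action_horizon 8 \
    --device cuda \
    --enable_cameras
```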
MIT — see LICENSE.