A toolkit for training whole-arm reinforcement learning policies that manipulate and clear clusters of deformable objects. Combines point-cloud perception and proprioceptive touch to enable contact-rich, full-arm interaction.
Code implementation for the paper "Deformable Cluster Manipulation via Whole-Arm Policy Learning". Project page: https://sites.google.com/view/dcmwap
- Ubuntu 18.04 or 20.04
- Python 3.8
- An NVIDIA GPU (required to run Isaac Gym)
- A dedicated Conda environment or venv
- Install PyTorch
- Download & Install Isaac Gym (simulator): https://docs.robotsfan.com/isaacgym/install.html
- Clone & Install Isaac Gym Envs (RL): https://github.com/isaac-sim/IsaacGymEnvs
- Set the spot_path and isaacgymenvs_path environment variables, e.g.,
export spot_path="/home/<path>/<to>/dcmwap"
export isaacgymenvs_path="/home/<path>/IsaacGymEnvs/isaacgymenvs"
- Test basic RL examples (with UI) as described in the Isaac Gym Envs documentation
python train.py task=Ant
- Install the required libraries (see versions in requirements.txt)
To run the notebooks, first set spot_path in the command line, then:
cd "$spot_path/notebooks"
jupyter-lab
> Note: Set the "$spot_path/source" folder as a source root to run directly in an IDE.
> E.g., in PyCharm: Settings >> Project Structure >> Source Folders >> add $spot_path/source
- Forest Generator: generates URDF files for a rigid-body tree structure based on a configuration file. Choose lsystems1 for single-axis rotation and lsystems2 for fully deformable branches.
  Config: $spot_path/source/simulation/lsystems2/three/conf/{tree}.yaml
  Generate a basic L-system file: python $spot_path/source/simulation/lsystems2/three/assemblers/pipeline_runner.py
  Target: $spot_path/source/simulation/lsystems2/three/urdf/.../<raw_train or raw_test>/
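To illustrate the generation step, here is a minimal L-system rewriter in the spirit of the tree generators above. The axiom and production rules below are illustrative, not the repository's actual grammar; the real pipeline additionally converts the expanded string into URDF links and joints.

```python
# Minimal L-system string rewriter (illustrative grammar, not the repo's).
def expand(axiom, rules, iterations):
    """Apply the production rules to every symbol, `iterations` times."""
    s = axiom
    for _ in range(iterations):
        # Symbols without a rule (e.g. brackets, rotations) are copied as-is.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic branching grammar: F = segment, +/- = rotate, [ ] = push/pop branch.
rules = {"F": "F[+F]F[-F]F"}
print(expand("F", rules, 2))
```

Each rewrite replaces every segment with a small branched subtree, so the string (and hence the generated tree) grows geometrically with the iteration count.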
- Run the pre-RL steps:
  a. For each tree, find the degree of rotation that brings the flexible DOF links closest to the robot.
  b. Generate optimal poses for the power line.
  python source/exposure/ige/pre_task_coordinator.py yaml=RealKinovaTreePlineClearer.yaml
  python source/exposure/ige/pre_task_coordinator.py yaml=RealKinovaTreePlineClearer.yaml test=True
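Step (a) can be sketched as a brute-force yaw sweep: rotate the tree's flexible-link points and keep the angle that minimizes the closest link-to-robot distance. The function name, the 2-D simplification, and the sweep resolution are assumptions for illustration; the coordinator script performs the equivalent search per tree asset.

```python
import numpy as np

def best_yaw(link_points, robot_base, resolution_deg=1.0):
    """Return the yaw (rad) minimizing the closest link-to-robot distance.

    link_points: (N, 2) XY coordinates of the flexible DOF links.
    robot_base:  (2,) XY coordinate of the robot.
    """
    angles = np.deg2rad(np.arange(0.0, 360.0, resolution_deg))
    best, best_dist = 0.0, np.inf
    for a in angles:
        rot = np.array([[np.cos(a), -np.sin(a)],
                        [np.sin(a),  np.cos(a)]])
        rotated = link_points @ rot.T              # rotate all link points
        dist = np.linalg.norm(rotated - robot_base, axis=1).min()
        if dist < best_dist:
            best, best_dist = a, dist
    return best

# A single link at (1, 0) and a robot at (0, 1): ~90 degrees is optimal.
pts = np.array([[1.0, 0.0]])
print(np.rad2deg(best_yaw(pts, np.array([0.0, 1.0]))))
```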
- Train the contact classifier. Capture the dataset with physical interactions, then run:
  $spot_path/notebooks/work2/kinova/kinova collison classifier- 1c - build dataset n classify - velocity.ipynb
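The idea behind the notebook can be sketched as follows: label contact from the mismatch between commanded and measured joint velocities, then fit a simple classifier. The feature choice, the synthetic data, and the plain logistic-regression fit below are assumptions for illustration; the notebook builds the real dataset from physical interactions.

```python
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=3000):
    """Plain gradient-descent logistic regression (weights + bias)."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted contact prob.
        grad = p - y
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

rng = np.random.default_rng(0)
# Feature: |commanded - measured| joint velocity; contact inflates the error.
free = rng.normal(0.02, 0.01, (200, 1))          # free-space motion
contact = rng.normal(0.30, 0.05, (200, 1))       # arm pressing on branches
X = np.vstack([free, contact])
y = np.r_[np.zeros(200), np.ones(200)]
w, b = fit_logistic(X, y)
pred = 1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5
print("train accuracy:", (pred == y.astype(bool)).mean())
```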
- Add the entries to your local Isaac Gym Envs installation to run the gym RL tasks. E.g., in "<local>/IsaacGymEnvs/isaacgymenvs/tasks/__init__.py" add:
  from .real_kinova_tree_pline_clearer import RealKinovaTreePlineClearer
  "RealKinovaTreePlineClearer": RealKinovaTreePlineClearer,
- Reinforcement learning (using the isaacgymenvs/rl_games package). Set the path and run the shell script as below:
  export isaacgymenvs_path="<path>/<to>/IsaacGymEnvs/isaacgymenvs"
  # Train
  bash $spot_path/source/exposure/ige/ige_task_runner.sh task=RealKinovaTreePlineClearer num_envs=6144 headless=True
  # Simulation test
  bash $spot_path/source/exposure/ige/ige_task_runner.sh task=RealKinovaTreePlineClearer test=True num_envs=512 checkpoint=runs/<checkpoint>/nn/RealKinovaTreePlineClearer.pth headless=True +load_saved_checkpoint=True +debug_display_env=27
- Real execution
  # Calibration
  (sam_hq) spot$ python source/exposure/ige/helpers/real/calib/real_camera_static_calib.py --store-calib-color-image
  (sam_hq) spot$ python source/exposure/ige/helpers/real/calib/real_camera_static_calib.py
  # Generate the first segmentation mask with DINO + SAM
  (simulation) spot$ python source/exposure/ige/helpers/real/real_image_server.py
  (sam_hq) spot$ python source/exposure/ige/helpers/real/real_image_mask_gen_client.py --fetch-live-frame-stream --segment-frame-stream --visualize --save-sam-mask
  # Live RL run
  (simulation) spot$ python source/exposure/ige/helpers/real/real_image_server.py
  (simulation) spot$ bash $spot_path/source/exposure/ige/ige_task_runner.sh task=RealKinovaTreePlineClearer test=True num_envs=1 checkpoint=runs/<check_point>/nn/RealKinovaTreePlineClearer.pth headless=True +debug_display_env=27 +real=True
  # Optional saved-file check
  (sam_hq) spot$ python source/exposure/ige/helpers/real/debug/real_vision_debug_pc_inspector.py
During sim-to-real transfer, we stand up an external web service (running on a machine connected to the robot) that operates Kinova-ROS and interfaces with the RL pipeline.
Ensure ROS 1 (for the Kinova Jaco2) is installed as described in https://github.com/Kinovarobotics/kinova-ros
These modules end up being part of external builds, e.g., part of the kinova-ros package used to run the Jaco arm.
Note: additional path settings are required for the IDE to resolve these modules.
In PyCharm: Settings >> Project Structure >> Source Folders:
1. Add the local ROS libraries as a content root, e.g., "/opt/ros/noetic/share"
2. Add kinova_msgs as a content root from the local installation of kinova-ros, e.g., "catkin_ws/src/kinova-ros/kinova-msgs"
To run:
export spot_path="/home/<path>/<to>/dcmwap"
export kinova_ros_path="/home/<path>/<to>/catkin_ws/src/kinova-ros"
bash $spot_path/external/kinova_spot_installer.sh
cd "${kinova_ros_path}/../../"
# in different terminals
roslaunch kinova_bringup kinova_robot.launch kinova_robotType:=j2n6s300
rosrun spot_control tactile_rl_txn_control.py j2n6s300 0
The service commands are listed below. To test, open the fetch_kinova_metrics URL below in a browser; you should get the current Kinova DOF positions, velocities, end-effector metrics, etc. as JSON.
http://ip:3738/fetch_kinova_metrics
http://ip:3738/shutdown
http://ip:3738/set_dof_pos
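For orientation, a minimal sketch of such a service is shown below, using only the Python standard library. The endpoint name and port match the README; the metric values are placeholders (the real service reads them from Kinova-ROS topics), and the handler structure is an assumption, not the repository's implementation.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def fetch_kinova_metrics():
    # Placeholder values; the real service fills these from ROS joint states.
    return {
        "dof_pos": [0.0] * 6,   # joint positions (rad)
        "dof_vel": [0.0] * 6,   # joint velocities (rad/s)
        "ee_pose": [0.0] * 7,   # end-effector position + quaternion
    }

class KinovaHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/fetch_kinova_metrics":
            body = json.dumps(fetch_kinova_metrics()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

def serve(port=3738):
    """Start the service in a background thread and return the server."""
    server = HTTPServer(("127.0.0.1", port), KinovaHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

A browser GET on /fetch_kinova_metrics then returns the metrics dictionary as JSON, mirroring the test described above.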
@article{jacob2026deformable,
title={Deformable Cluster Manipulation via Whole-Arm Policy Learning},
author={Jacob, Jayadeep and Zhang, Wenzheng and Warren, Houston and Borges, Paulo and Bandyopadhyay, Tirthankar and Ramos, Fabio},
journal={IEEE Robotics and Automation Letters},
year={2026},
publisher={IEEE}
}
While most of the work in this repository is original, some parts are taken from or inspired by external GitHub sources:
https://github.com/MFreidank/pysgmcmc/tree/pytorch
https://github.com/EugenHotaj/pytorch-generative/blob/master/pytorch_generative/models/kde.py
https://github.com/ThomasLENNE/L-system
https://github.com/NVIDIA-Omniverse/IsaacGymEnvs
https://github.com/facebookresearch/pytorch3d/tree/main
https://github.com/NVlabs/storm/tree/main
https://github.com/facebookresearch/differentiable-robot-model
