Deformable Cluster Manipulation via Whole-Arm Policy Learning

A toolkit for training whole-arm reinforcement learning policies that manipulate and clear clusters of deformable objects. Combines point-cloud perception and proprioceptive touch to enable contact-rich, full-arm interaction.

Code implementation for the paper "Deformable Cluster Manipulation via Whole-Arm Policy Learning": https://sites.google.com/view/dcmwap

L-System Forest & Powerline

Prerequisites

  1. Ubuntu 18.04 or 20.04
  2. Python 3.8
  3. An NVIDIA graphics card (required to run Isaac Gym)
  4. A dedicated Conda or venv environment
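
The environment in item 4 can be created with venv, for example (a minimal sketch; the path $HOME/envs/dcmwap is just an example, and conda works equally well):

```shell
# Illustrative only: create and activate an isolated environment for dcmwap.
# Prefer a Python 3.8 interpreter to match the prerequisite above.
python3 -m venv "$HOME/envs/dcmwap"
. "$HOME/envs/dcmwap/bin/activate"
```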

Installations

Isaac Packages

  1. Install PyTorch
  2. Download & install Isaac Gym (simulator): https://docs.robotsfan.com/isaacgym/install.html
  3. Clone & install Isaac Gym Envs (RL): https://github.com/isaac-sim/IsaacGymEnvs
  4. Set the spot_path and isaacgymenvs_path environment variables, e.g.,
    export spot_path="/home/<path>/<to>/dcmwap"
    export isaacgymenvs_path="/home/<path>/IsaacGymEnvs/isaacgymenvs"
  5. Test a basic RL example (with UI) as described in the Isaac Gym Envs documentation
    python train.py task=Ant
  6. Install the required libraries (see versions in requirements.txt)
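
A quick sanity check for step 4 before running anything (a sketch; check_path is a hypothetical helper, not part of the repo):

```shell
# Illustrative helper: report whether the exported paths point at real directories.
check_path() {
  name="$1"; value="$2"
  if [ -d "$value" ]; then
    echo "OK: $name"
  else
    echo "MISSING: $name=$value"
  fi
}
check_path spot_path "${spot_path:-/unset}"
check_path isaacgymenvs_path "${isaacgymenvs_path:-/unset}"
```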

Notebooks

To run the notebooks, first set spot_path in your shell, then

cd "$spot_path/notebooks"
jupyter-lab

IDE

   > Note: Mark the "$spot_path/source" folder as a source root to run directly in an IDE.
   > E.g., in PyCharm: Settings >> Project Structure >> Source Folders >> add $spot_path/source

Execution

  1. Forest Generator: generates URDF files for a rigid-body tree structure based on a configuration file. Choose lsystems1 for single-axis rotation and lsystems2 for fully deformable branches.

    Config: $spot_path/source/simulation/lsystems2/three/conf/{tree}.yaml
    Generate basic L-system file: python $spot_path/source/simulation/lsystems2/three/assemblers/pipeline_runner.py
    Target: $spot_path/source/simulation/lsystems2/three/urdf/.../<raw_train or raw_test>/
  2. Run the pre-RL steps: (a) for each tree, find the degree of rotation that brings the flexible DOF links closest to the robot; (b) generate optimal poses for the power line.

    python source/exposure/ige/pre_task_coordinator.py yaml=RealKinovaTreePlineClearer.yaml
    python source/exposure/ige/pre_task_coordinator.py yaml=RealKinovaTreePlineClearer.yaml test=True
  3. Train the contact classifier. Capture the dataset with physical interactions and run

    $spot_path/notebooks/work2/kinova/kinova collison classifier- 1c - build dataset n classify - velocity.ipynb
  4. Add the entries in the local Isaac Gym Envs installation to register the gym RL tasks

    E.g., in "<local>/IsaacGymEnvs/isaacgymenvs/tasks/__init__.py" add the entries for the tasks to run:
    from .real_kinova_tree_pline_clearer import RealKinovaTreePlineClearer
    "RealKinovaTreePlineClearer": RealKinovaTreePlineClearer, 
    
  5. Reinforcement learning (using the isaacgymenvs/rl_games package). Set the path & run the shell script as below

     export isaacgymenvs_path="<path>/<to>/IsaacGymEnvs/isaacgymenvs"
     # Train
     bash $spot_path/source/exposure/ige/ige_task_runner.sh task=RealKinovaTreePlineClearer num_envs=6144  headless=True
     # Simulation Test
     bash $spot_path/source/exposure/ige/ige_task_runner.sh task=RealKinovaTreePlineClearer test=True num_envs=512 checkpoint=runs/<checkpoint>/nn/RealKinovaTreePlineClearer.pth headless=True +load_saved_checkpoint=True +debug_display_env=27
  6. Real Execution

    # calibration
    (sam_hq): spot$ python source/exposure/ige/helpers/real/calib/real_camera_static_calib.py --store-calib-color-image
    
    (sam_hq) spot$ python source/exposure/ige/helpers/real/calib/real_camera_static_calib.py
    
    
    # generate the first segmentation mask with DINO + SAM
    (simulation) spot$ python source/exposure/ige/helpers/real/real_image_server.py
    
    (sam_hq) spot$ python source/exposure/ige/helpers/real/real_image_mask_gen_client.py --fetch-live-frame-stream --segment-frame-stream --visualize --save-sam-mask
    
    
    # live RL run
    (simulation) spot$ python source/exposure/ige/helpers/real/real_image_server.py
    
    (simulation) spot$ bash $spot_path/source/exposure/ige/ige_task_runner.sh task=RealKinovaTreePlineClearer test=True num_envs=1  checkpoint=runs/<check_point>/nn/RealKinovaTreePlineClearer.pth headless=True  +debug_display_env=27 +real=True
    
    
    # optional saved file check
    (sam_hq) spot$ python source/exposure/ige/helpers/real/debug/real_vision_debug_pc_inspector.py
    
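The train/test invocations in step 5 can also be composed programmatically; a sketch (compose_run is a hypothetical helper that echoes the command rather than executing it, so it can be dry-run checked — extras such as +debug_display_env can be appended manually):

```shell
# Illustrative: build the ige_task_runner.sh command line for train or test mode
# without executing it. Core flags follow step 5; the checkpoint name is a placeholder.
compose_run() {
  task="$1"; mode="$2"; ckpt="$3"
  cmd="bash \$spot_path/source/exposure/ige/ige_task_runner.sh task=$task"
  if [ "$mode" = "test" ]; then
    cmd="$cmd test=True num_envs=512 checkpoint=runs/$ckpt/nn/$task.pth headless=True"
  else
    cmd="$cmd num_envs=6144 headless=True"
  fi
  echo "$cmd"
}
compose_run RealKinovaTreePlineClearer train
```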

Real Executions ROS & API

During sim-to-real transfer, we stand up an external web service (running on a machine connected to the robot) that operates Kinova-ROS and interfaces with the RL pipeline.

Ensure ROS 1 (for the Kinova Jaco2) is installed as described in https://github.com/Kinovarobotics/kinova-ros

Some modules end up as part of external builds, e.g., as part of the kinova-ros package to run the Jaco arm.

  Note: Requires additional path settings in the IDE to be interpretable.
  In PyCharm: Settings >> Project Structure >> Source Folders >>
     1. Add the local ROS libraries as a content root, e.g., "/opt/ros/noetic/share"
     2. Add kinova_msgs as a content root from the local installation of kinova-ros, e.g., "catkin_ws/src/kinova-ros/kinova_msgs"

To run,

 export spot_path="/home/<path>/<to>/dcmwap"
 export kinova_ros_path="/home/<path>/<to>/catkin_ws/src/kinova-ros"
 bash $spot_path/external/kinova_spot_installer.sh
 
 cd "${kinova_ros_path}/../../"
 # in different terminals
 roslaunch kinova_bringup kinova_robot.launch kinova_robotType:=j2n6s300

Real Hardware API (to connect to Jaco)

rosrun spot_control tactile_rl_txn_control.py j2n6s300 0

Service commands are listed below. To test, open the fetch_kinova_metrics URL below in a browser; you should get the current Kinova DOF positions, velocities, end-effector metrics, etc., as JSON.

http://ip:3738/fetch_kinova_metrics
http://ip:3738/shutdown
http://ip:3738/set_dof_pos
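
For scripted checks, the endpoints can be addressed like this (a sketch; ROBOT_IP and svc_url are placeholders/hypothetical, and the actual request is commented out because it requires the robot-side service to be running):

```shell
# Illustrative: compose the service URLs; port 3738 as listed above.
svc_url() { echo "http://$1:3738/$2"; }
ROBOT_IP="${ROBOT_IP:-192.168.1.10}"   # placeholder address
svc_url "$ROBOT_IP" fetch_kinova_metrics
# curl -s "$(svc_url "$ROBOT_IP" fetch_kinova_metrics)" | python3 -m json.tool
```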

Citation

@article{jacob2026deformable,
  title={Deformable Cluster Manipulation via Whole-Arm Policy Learning},
  author={Jacob, Jayadeep and Zhang, Wenzheng and Warren, Houston and Borges, Paulo and Bandyopadhyay, Tirthankar and Ramos, Fabio},
  journal={IEEE Robotics and Automation Letters},
  year={2026},
  publisher={IEEE}
}

Copyrights

While most of the work in this repository is original, some components are adapted from, or inspired by, the following external GitHub sources:

https://github.com/MFreidank/pysgmcmc/tree/pytorch
https://github.com/EugenHotaj/pytorch-generative/blob/master/pytorch_generative/models/kde.py
https://github.com/ThomasLENNE/L-system
https://github.com/NVIDIA-Omniverse/IsaacGymEnvs
https://github.com/facebookresearch/pytorch3d/tree/main
https://github.com/NVlabs/storm/tree/main
https://github.com/facebookresearch/differentiable-robot-model

⚠️ Note: This repository is not actively maintained, but pull requests are welcome.
