Dataset Overview
OpenRoboCare is a multi-task, multi-modal dataset capturing real-world caregiving routines. It includes demonstrations from 21 occupational therapists performing 15 daily caregiving tasks using two hospital manikins (Type 0: female, 37 lbs; Type 1: male, 150 lbs). Each demonstration is recorded through five synchronized sensing modalities to capture motion, contact, and visual attention, providing a comprehensive foundation for learning safe, adaptive robot caregiving.
Dataset Modalities
The dataset provides five synchronized sensing modalities capturing complementary aspects of the caregiving process:
RGB-D Cameras
Three Intel RealSense D435i RGB-D cameras capture visual and geometric information of caregiver motion, interactions with the manikin, and assistive devices. Two cameras face the hospital bed from different angles, and one faces the wheelchair from behind the bed. All streams are temporally aligned at 15 Hz.
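A minimal sketch of iterating over one camera's color/depth frames is shown below. The per-camera directory layout, file names, and extensions are assumptions for illustration only; the actual file organization is described on the Dataset Organization page.

```python
# Sketch: pair color and depth frames from one RealSense stream (assumed layout).
from pathlib import Path

import cv2  # OpenCV for image I/O

FPS = 15.0  # all RGB-D streams are temporally aligned at 15 Hz

def load_rgbd_stream(cam_dir: str):
    """Yield (timestamp_s, color, depth) tuples for one camera directory."""
    cam = Path(cam_dir)
    color_files = sorted((cam / "color").glob("*.png"))  # assumed layout
    depth_files = sorted((cam / "depth").glob("*.png"))  # assumed layout
    for i, (cf, df) in enumerate(zip(color_files, depth_files)):
        color = cv2.imread(str(cf), cv2.IMREAD_COLOR)
        depth = cv2.imread(str(df), cv2.IMREAD_UNCHANGED)  # e.g. 16-bit depth
        yield i / FPS, color, depth

# Usage (hypothetical path): iterate over frames from one bed-facing camera.
# for t, color, depth in load_rgbd_stream("demo_001/cam0"):
#     ...
```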
Tactile Skin
A custom piezo-resistive tactile skin with 44 pressure sensors per manikin (88 total) measures physical contact forces across the arms, legs, torso, and back at 60 Hz. Each sensor is built from Velostat and copper-fabric layers and read through a voltage-divider circuit by an Arduino Uno, providing stable responses over 0.05–3 N/cm². Every sensor is calibrated against an ATI force/torque sensor and tared before each task to ensure consistent readings.
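As an illustration of this pipeline, the sketch below converts raw ADC readings into tared pressure values. The divider resistance, supply voltage, and calibration coefficients are placeholders, not the dataset's values; the per-sensor calibration against the ATI force/torque sensor supplies the actual mapping.

```python
# Sketch: raw tactile ADC counts -> pressure (N/cm^2), under assumed wiring.
import numpy as np

V_SUPPLY = 5.0      # Arduino Uno supply voltage (assumed)
R_FIXED = 10_000.0  # fixed resistor in the voltage divider, ohms (assumed)
ADC_MAX = 1023      # 10-bit Arduino ADC

def adc_to_resistance(adc_counts: np.ndarray) -> np.ndarray:
    """Back out the Velostat resistance from the divider reading."""
    v_out = adc_counts / ADC_MAX * V_SUPPLY
    v_out = np.clip(v_out, 1e-3, V_SUPPLY - 1e-3)  # avoid divide-by-zero
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def resistance_to_pressure(r: np.ndarray, a=2.5e3, b=-1.1) -> np.ndarray:
    """Hypothetical power-law calibration fit: pressure = a * r**b."""
    return a * np.power(r, b)

def tare(pressure: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Subtract the pre-task no-contact baseline, as done before each task."""
    return np.maximum(pressure - baseline, 0.0)
```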
Pose Tracking
A 12‑camera OptiTrack PrimeX 13 system tracks caregiver and manikin motion. Caregivers wear marker gloves and hats to track hand and head motion, while rigid-body markers capture the movement of manikin segments. Because caregiving tasks involve heavy occlusion, we manually annotated 2D keypoints and triangulated them to recover 3D pose. We trained a YOLO keypoint-detection model on a subset of the human annotations and used it to label the remainder of the dataset. Both the 3D poses and the 2D keypoint annotations are provided.
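For reference, a minimal triangulation sketch using OpenCV's DLT-based `cv2.triangulatePoints` is given below; the 3x4 projection matrices `P1` and `P2` are assumed to come from camera calibration, and the function name is illustrative rather than part of the dataset tooling.

```python
# Sketch: triangulate one 3D keypoint from 2D annotations in two calibrated views.
import numpy as np
import cv2

def triangulate_keypoint(p1_2d, p2_2d, P1, P2):
    """p1_2d, p2_2d: (x, y) pixel keypoints; P1, P2: 3x4 projection matrices."""
    pts1 = np.asarray(p1_2d, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(p2_2d, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4x1 homogeneous point
    return (X_h[:3] / X_h[3]).ravel()                # Euclidean 3D point
```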
Eye‑Gaze Tracking
Pupil Labs eye‑tracking glasses record first‑person video and gaze data at 120 Hz. Gaze points (2D and 3D vectors) are synchronized with RGB‑D streams and motion data using timestamp alignment, revealing where therapists direct visual attention during key actions such as repositioning or dressing.
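A minimal sketch of this kind of timestamp alignment between streams of different rates is shown below; it assumes both streams expose timestamps on a common clock, as provided by the dataset's synchronization.

```python
# Sketch: match each 15 Hz RGB-D frame to its nearest 120 Hz gaze sample.
import numpy as np

def align_gaze_to_frames(frame_ts: np.ndarray, gaze_ts: np.ndarray) -> np.ndarray:
    """Return, for each frame timestamp, the index of the closest gaze sample."""
    idx = np.searchsorted(gaze_ts, frame_ts)   # gaze_ts must be sorted
    idx = np.clip(idx, 1, len(gaze_ts) - 1)
    prev, nxt = gaze_ts[idx - 1], gaze_ts[idx]
    return np.where(frame_ts - prev <= nxt - frame_ts, idx - 1, idx)
```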
Task & Action Annotations
Task- and action-level annotations are manually labeled by trained experts using video playback. Each entry includes a task name, start/end timestamps, and an action description, forming a hierarchical structure for temporal segmentation and analysis.
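As an illustration of this hierarchy, the sketch below models tasks and actions as nested segments. The field names follow the description above, while the exact on-disk format is documented on the Dataset Organization page; the example values are purely illustrative.

```python
# Sketch: hierarchical task/action segments for temporal analysis.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ActionSegment:
    description: str
    start_s: float
    end_s: float

@dataclass
class TaskSegment:
    task_name: str
    start_s: float
    end_s: float
    actions: List[ActionSegment] = field(default_factory=list)

# Illustrative example: a dressing task split into two annotated actions.
task = TaskSegment("Dress T-shirt (Bed)", 0.0, 92.5, [
    ActionSegment("thread left arm through sleeve", 4.2, 31.0),
    ActionSegment("pull shirt over head", 31.0, 58.7),
])
```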
Caregiving Tasks
The dataset includes the following 15 tasks (canonical order from the paper and website):
| # | Task Name | 
|---|---|
| 1 | Bath | 
| 2 | Toilet | 
| 3 | Dress T-shirt (Bed) | 
| 4 | Undress T-shirt (Bed) | 
| 5 | Dress Vest (Bed) | 
| 6 | Undress Vest (Bed) | 
| 7 | Dress Shorts (Bed) | 
| 8 | Undress Shorts (Bed) | 
| 9 | Transfer (Bed→Wheelchair) | 
| 10 | Groom | 
| 11 | Dress T-shirt (Wheelchair) | 
| 12 | Undress T-shirt (Wheelchair) | 
| 13 | Dress Vest (Wheelchair) | 
| 14 | Undress Vest (Wheelchair) | 
| 15 | Transfer (Wheelchair→Bed) | 
Manikin Types
The dataset includes demonstrations performed on two different hospital manikins to capture variations in caregiving techniques:
- Type 0 Manikin (Simple Susie): Female manikin, 37.26 lbs, 5'5" - representing lighter patients requiring full assistance
- Type 1 Manikin (Rescue Randy): Male manikin, 150 lbs, 6'1" - representing standard adult patients requiring full assistance
 
Both manikins feature anthropomorphic dimensions and joints commonly used in OT clinical training. This variation enables research into how caregiving strategies adapt based on patient size, weight, and physical characteristics in full-assistance scenarios.
Potential Use Cases
OpenRoboCare supports diverse research applications across computer vision and robotics. The rich multimodal data and challenging real-world scenarios provide opportunities for advancing algorithms and gaining fundamental insights into physical caregiving interactions.
Robotics Applications
- Learning from Human Demonstration – Train robots to replicate expert caregiving techniques using multimodal demonstrations.
- Contact-Aware Motion Planning – Learn safe force limits and gentle manipulation strategies for physical human-robot interaction.
- Primitive Skill Learning – Extract reusable motion primitives for physical robot-assisted caregiving tasks like dressing, transferring, and positioning.
- Attention Modeling – Study expert attention allocation during physical interactions.
- Physical HRI Safety – Analyze force patterns and safety-critical moments in expert-patient interactions.
- Caregiving Technique Analysis – Study expert strategies for body mechanics and assistive device usage.
- Personalization & Adaptation – Investigate how caregiving approaches vary and how robots might adapt accordingly.
 
Computer Vision Applications
- Occlusion-Robust Pose Estimation – Develop models that maintain accuracy under heavy occlusion conditions typical in caregiving scenarios.
- Video Understanding & Annotation – Study temporal action recognition and automated annotation of complex human activities.
- Multimodal Fusion – Integrate RGB-D, tactile, and motion data to improve perception robustness.
 
Explore the Dataset
Browse the dataset interactively to understand its structure and content:
Interactive Dataset Viewer
Getting Started
- Explore the interactive viewer above to understand the dataset structure
- Review the Dataset Organization page for detailed file formats
- Visit the Download page for access instructions
 
Citation
@inproceedings{Liang2025OpenRoboCare,
  title={{OpenRoboCare: A Multimodal Multi-Task Expert Demonstration Dataset for Robot Caregiving}},
  author={Xiaoyu Liang and Ziang Liu and Kelvin Lin and Edward Gu and Ruolin Ye and Tam Nguyen and Cynthia Hsu and Zhanxin Wu and Xiaoman Yang and Christy Sum Yu Cheung and Harold Soh and Katherine Dimitropoulou and Tapomayukh Bhattacharjee},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2025},
  url={https://emprise.cs.cornell.edu/robo-care/}
}
Contact
For questions or collaboration inquiries, please visit the project website.