OpenRoboCare Dataset

Multi-Modal Expert Demonstration Dataset for Robot-Assisted Caregiving

Dataset Overview

OpenRoboCare is a multi-task, multi-modal dataset capturing real-world caregiving routines. It includes demonstrations from 21 occupational therapists performing 15 daily caregiving tasks on two hospital manikins (Type 0: female, 37 lbs; Type 1: male, 150 lbs). Each demonstration is recorded through five synchronized sensing modalities that capture motion, contact, and visual attention, providing a comprehensive foundation for learning safe, adaptive robot caregiving.

Key Statistics

21 Occupational Therapists
15 Caregiving Tasks
2 Types of Manikins
315 Total Sessions
19.8 Hours of Data
31,185 Total Samples

Dataset Modalities

The dataset provides five synchronized sensing modalities capturing complementary aspects of the caregiving process:

RGB-D Cameras

Three Intel RealSense D435i RGB-D cameras capture visual and geometric information of caregiver motion, interactions with the manikin, and assistive devices. Two cameras face the hospital bed from different angles, and one faces the wheelchair from behind the bed. All streams are temporally aligned at 15 Hz.
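As a starting point, the sketch below shows one way to iterate over synchronized color and depth frames for a single camera. The directory layout, file naming, and depth scale here are assumptions for illustration only; see the Dataset Organization page for the actual file formats.

from pathlib import Path

import cv2          # pip install opencv-python
import numpy as np

DEPTH_SCALE_M = 0.001  # assumption: depth stored as 16-bit millimeters

def iter_rgbd_frames(session_dir, camera="cam0"):
    """Yield (timestamp, rgb, depth_m) tuples for one camera stream."""
    rgb_dir = Path(session_dir) / camera / "rgb"
    depth_dir = Path(session_dir) / camera / "depth"
    for rgb_path in sorted(rgb_dir.glob("*.png")):
        timestamp = float(rgb_path.stem)  # assumption: frames named by timestamp
        rgb = cv2.imread(str(rgb_path), cv2.IMREAD_COLOR)
        depth_raw = cv2.imread(str(depth_dir / rgb_path.name), cv2.IMREAD_UNCHANGED)
        yield timestamp, rgb, depth_raw.astype(np.float32) * DEPTH_SCALE_M

# Example: count frames from one bed-facing camera of a session.
# n_frames = sum(1 for _ in iter_rgbd_frames("session_001", "cam0"))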

Tactile Skin

A custom piezo-resistive tactile skin with 44 pressure sensors per manikin (88 total) measures physical contact forces across the arms, legs, torso, and back at 60 Hz. The sensors are built from Velostat and copper-fabric layers in a voltage-divider circuit read by an Arduino Uno, providing stable responses over the 0.05–3 N/cm² range. Each sensor is calibrated against an ATI force/torque sensor and tared before each task to ensure consistent readings.
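To illustrate the taring step, here is a minimal sketch that removes each sensor's resting baseline from a raw pressure stream. The (T, 44) array shape and 60 Hz rate follow the description above; the one-second baseline window, units, and synthetic data are assumptions for illustration.

import numpy as np

SAMPLE_RATE_HZ = 60   # tactile skin sampling rate
N_SENSORS = 44        # pressure sensors per manikin

def tare(pressure, baseline_seconds=1.0):
    """Subtract each sensor's resting baseline, estimated from the first second."""
    n_baseline = int(baseline_seconds * SAMPLE_RATE_HZ)
    offsets = pressure[:n_baseline].mean(axis=0)   # (44,) per-sensor offsets
    return pressure - offsets

# Example with synthetic readings in N/cm² (assumed units): 10 s of data.
raw = np.random.uniform(0.0, 0.2, size=(10 * SAMPLE_RATE_HZ, N_SENSORS))
print(tare(raw).shape)   # (600, 44)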

Pose Tracking

A 12-camera OptiTrack PrimeX 13 system tracks caregiver and manikin motion. Caregivers wear marker gloves and hats to track hand and head motion, while rigid-body markers capture the movement of manikin segments. Because caregiving tasks involve heavy occlusion, we manually annotated 2D keypoints and triangulated them into 3D poses. We trained a YOLO keypoint-detection model on a subset of the human annotations and used it to label the remainder of the dataset. Both the 3D poses and the 2D keypoint annotations are provided.
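For reference, the sketch below triangulates a single 3D keypoint from 2D annotations in multiple calibrated views using the standard direct linear transform. It is a generic illustration of the triangulation step, not the exact pipeline used to produce the released poses, and it assumes a 3x4 projection matrix is available for each camera.

import numpy as np

def triangulate_point(proj_mats, points_2d):
    """DLT triangulation: stack two rows per view and take the SVD null vector."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    x_h = vt[-1]                      # homogeneous solution
    return x_h[:3] / x_h[3]           # Euclidean 3D point

# Example: two synthetic cameras observing the point (0, 0, 2).
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])                    # reference camera
P1 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])    # translated camera
pt = np.array([0.0, 0.0, 2.0, 1.0])
uv0 = (P0 @ pt)[:2] / (P0 @ pt)[2]
uv1 = (P1 @ pt)[:2] / (P1 @ pt)[2]
print(triangulate_point([P0, P1], [uv0, uv1]))   # ≈ [0. 0. 2.]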

Eye‑Gaze Tracking

Pupil Labs eye-tracking glasses record first-person video and gaze data at 120 Hz. Gaze data (2D gaze points and 3D gaze vectors) are synchronized with the RGB-D streams and motion data via timestamp alignment, revealing where therapists direct visual attention during key actions such as repositioning or dressing.
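The alignment itself can be as simple as nearest-timestamp matching. The sketch below pairs each 15 Hz RGB-D frame with the closest 120 Hz gaze sample, assuming both timestamp arrays are in seconds on a shared clock; the actual field names and clock conventions in the released files may differ.

import numpy as np

def nearest_gaze_indices(frame_ts, gaze_ts):
    """For each frame timestamp, return the index of the closest gaze sample."""
    idx = np.searchsorted(gaze_ts, frame_ts)                  # insertion points
    idx = np.clip(idx, 1, len(gaze_ts) - 1)
    left_closer = (frame_ts - gaze_ts[idx - 1]) < (gaze_ts[idx] - frame_ts)
    return np.where(left_closer, idx - 1, idx)

# Example: 2 s of data at the nominal rates.
frame_ts = np.arange(0, 2, 1 / 15)      # 15 Hz RGB-D frames
gaze_ts = np.arange(0, 2, 1 / 120)      # 120 Hz gaze samples
matches = nearest_gaze_indices(frame_ts, gaze_ts)
print(matches[:5])   # gaze sample indices paired with the first five frames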

Task & Action Annotations

Task-level and action-level annotations are manually labeled by trained experts using video playback. Each entry includes the task name, start/end timestamps, and an action description, forming a hierarchical structure for temporal segmentation and analysis.
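A minimal sketch of this hierarchy is shown below, with hypothetical field names; the released annotation files may use a different schema.

from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class ActionSegment:
    description: str   # free-text action description
    start: float       # start timestamp (seconds)
    end: float         # end timestamp (seconds)

@dataclass
class TaskAnnotation:
    task_name: str     # e.g. "Dress T-shirt (Bed)"
    start: float
    end: float
    actions: list[ActionSegment] = field(default_factory=list)

    def action_at(self, t: float) -> ActionSegment | None:
        """Return the action segment covering timestamp t, if any."""
        return next((a for a in self.actions if a.start <= t < a.end), None)

# Example: one task with two annotated actions, queried at t = 12.0 s.
task = TaskAnnotation("Dress T-shirt (Bed)", 0.0, 30.0, [
    ActionSegment("lift right arm", 0.0, 10.0),
    ActionSegment("pull sleeve over right arm", 10.0, 30.0),
])
print(task.action_at(12.0).description)   # pull sleeve over right arm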

Caregiving Tasks

The dataset includes the following 15 tasks (canonical order from the paper and website):

  1. Bath
  2. Toilet
  3. Dress T-shirt (Bed)
  4. Undress T-shirt (Bed)
  5. Dress Vest (Bed)
  6. Undress Vest (Bed)
  7. Dress Shorts (Bed)
  8. Undress Shorts (Bed)
  9. Transfer (Bed→Wheelchair)
  10. Groom
  11. Dress T-shirt (Wheelchair)
  12. Undress T-shirt (Wheelchair)
  13. Dress Vest (Wheelchair)
  14. Undress Vest (Wheelchair)
  15. Transfer (Wheelchair→Bed)

Manikin Types

The dataset includes demonstrations performed on two different hospital manikins to capture variations in caregiving technique:

Type 0: female manikin, 37 lbs
Type 1: male manikin, 150 lbs

Both manikins feature the anthropomorphic dimensions and joints commonly used in OT clinical training. This variation enables research into how caregiving strategies adapt to patient size, weight, and physical characteristics in full-assistance scenarios.

Potential Use Cases

OpenRoboCare supports diverse research applications across computer vision and robotics. The rich multimodal data and challenging real-world scenarios provide opportunities for advancing algorithms and gaining fundamental insights into physical caregiving interactions.

Explore the Dataset

Browse the dataset interactively to understand its structure and content:

Interactive Dataset Viewer

Launch Dataset Viewer →

Getting Started

  1. Explore the interactive viewer above to understand the dataset structure
  2. Review the Dataset Organization page for detailed file formats
  3. Visit the Download page for access instructions

Citation

@inproceedings{Liang2025OpenRoboCare,
  title={{OpenRoboCare: A Multimodal Multi-Task Expert Demonstration Dataset for Robot Caregiving}},
  author={Xiaoyu Liang and Ziang Liu and Kelvin Lin and Edward Gu and Ruolin Ye and Tam Nguyen and Cynthia Hsu and Zhanxin Wu and Xiaoman Yang and Christy Sum Yu Cheung and Harold Soh and Katherine Dimitropoulou and Tapomayukh Bhattacharjee},
  booktitle={IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2025},
  url={https://emprise.cs.cornell.edu/robo-care/}
}

Contact

For questions or collaboration inquiries, please visit the project website.