Robot-assisted feeding requires reliable bite acquisition, a challenging task due to the complex interactions between utensils and food with diverse physical properties. These interactions are further complicated by the temporal variability of food properties—for example, steak becomes firm as it cools even during a meal. To address this, we propose SAVOR, a novel approach for learning skill affordances for bite acquisition—how suitable a manipulation skill (e.g., skewering, scooping) is for a given utensil-food interaction. In our formulation, skill affordances arise from the combination of tool affordances (what a utensil can do) and food affordances (what the food allows). Tool affordances are learned offline through calibration, where different utensils interact with a variety of foods to model their functional capabilities. Food affordances are characterized by physical properties such as softness, moisture, and viscosity, initially inferred through commonsense reasoning using a visually-conditioned language model and then dynamically refined through online multi-modal visuo-haptic perception using SAVOR-Net during interaction. Our method integrates these offline and online estimates to predict skill affordances in real time, enabling the robot to select the most appropriate skill for each food item. Evaluated on 20 single-item foods and 10 in-the-wild meals, our approach improves bite acquisition success by 13% over state-of-the-art (SOTA) category-based methods (e.g., skewering for fruits). These results highlight the importance of modeling interaction-driven skill affordances for generalizable and effective robot-assisted bite acquisition.
Bite acquisition, the process of picking up a food item from a plate or bowl, is a critical step in robot-assisted feeding, but it is highly challenging: (i) Food items exhibit diverse and temporally variable physical properties (e.g., rice becomes firm as it cools down, and tofu is fragile and can break without careful manipulation). (ii) Physical interaction with food items varies significantly across different utensils.
Key Insight
To address these challenges, a robot must reason about three types of affordances. First, food affordances describe what the food allows, such as whether the food can be skewered or scooped. Second, tool affordances characterize what a utensil can do, given its functionality. Third, skill affordances arise from reasoning jointly over food and tool affordances, and capture whether a manipulation skill is appropriate given the food’s physical properties and the tool’s capabilities.
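To make this decomposition concrete, the joint reasoning above can be sketched as a scoring rule: a skill affordance combines a calibrated tool affordance (how well a utensil supports a skill) with a food affordance (how much the food's physical properties permit that skill). The names, property encodings, and numeric values below are illustrative assumptions, not the paper's learned models.

```python
# Minimal sketch of skill affordances as the combination of tool and food
# affordances. All names and constants here are hypothetical placeholders.
from dataclasses import dataclass


@dataclass
class FoodState:
    softness: float   # 0 (firm) .. 1 (soft)
    moisture: float   # 0 (dry)  .. 1 (wet)
    viscosity: float  # 0 (thin) .. 1 (thick)


# Offline-calibrated tool affordances: per (utensil, skill), how well the
# utensil supports the skill (illustrative constants, not calibrated values).
TOOL_AFFORDANCE = {
    ("fork", "skewer"): 0.9,
    ("fork", "scoop"): 0.2,
    ("spoon", "skewer"): 0.1,
    ("spoon", "scoop"): 0.9,
}


def food_affordance(skill: str, food: FoodState) -> float:
    """How much the food's physical properties permit the skill."""
    if skill == "skewer":
        # Firm, low-viscosity items skewer well.
        return (1.0 - food.softness) * (1.0 - food.viscosity)
    if skill == "scoop":
        # Soft or viscous items scoop well.
        return max(food.softness, food.viscosity)
    return 0.0


def skill_affordance(utensil: str, skill: str, food: FoodState) -> float:
    """Combine what the tool can do with what the food allows."""
    return TOOL_AFFORDANCE.get((utensil, skill), 0.0) * food_affordance(skill, food)


def best_skill(candidates, food: FoodState):
    """Pick the (utensil, skill) pair with the highest skill affordance."""
    return max(candidates, key=lambda us: skill_affordance(us[0], us[1], food))
```

Under this toy rule, a firm item (low softness) routes to fork-skewering, while a soft item routes to spoon-scooping, mirroring the intuition that skill choice depends jointly on the tool and on the food's current physical state.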
Before deployment, we perform an offline tool calibration to estimate tool affordances. During deployment, we first use a visually-conditioned language model to estimate food physical properties and then refine these estimates through online visuo-haptic perception.
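The online refinement step can be sketched as a simple recursive update: start from the language model's commonsense prior over food properties and blend in each visuo-haptic observation as contact occurs. This is a minimal illustrative update rule, not the SAVOR-Net architecture; the property names and the blending weight `alpha` are assumptions for the example.

```python
# Hypothetical sketch: refine a prior estimate of food physical properties
# with online observations via a recursive weighted average.

def refine(prior: dict, observation: dict, alpha: float = 0.5) -> dict:
    """Blend a prior property estimate with a new visuo-haptic observation.

    alpha weights the new observation; properties absent from the
    observation fall back to the prior and remain unchanged.
    """
    return {k: (1.0 - alpha) * prior[k] + alpha * observation.get(k, prior[k])
            for k in prior}


# Prior from a visually-conditioned language model ("the steak looks tender").
prior = {"softness": 0.3, "moisture": 0.4, "viscosity": 0.1}

# After first contact, haptic feedback suggests the steak is firmer than
# the visual prior implied (e.g., it has cooled during the meal).
observation = {"softness": 0.1}
estimate = refine(prior, observation)
```

The key point the sketch captures is that properties observed during interaction (here, softness) move toward the haptic evidence, while unobserved properties keep their visual prior, so the skill-affordance prediction can adapt in real time as food properties drift.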