
Computer Science > Robotics

arXiv:2506.02353 (cs)
[Submitted on 3 Jun 2025 (v1), last revised 1 Sep 2025 (this version, v2)]

Title: SAVOR: Skill Affordance Learning from Visuo-Haptic Perception for Robot-Assisted Bite Acquisition


Authors: Zhanxin Wu, Bo Ai, Tom Silver, Tapomayukh Bhattacharjee
Abstract: Robot-assisted feeding requires reliable bite acquisition, a challenging task due to the complex interactions between utensils and food with diverse physical properties. These interactions are further complicated by the temporal variability of food properties; for example, steak becomes firmer as it cools, even over the course of a single meal. To address this, we propose SAVOR, a novel approach for learning skill affordances for bite acquisition: how suitable a manipulation skill (e.g., skewering, scooping) is for a given utensil-food interaction. In our formulation, skill affordances arise from the combination of tool affordances (what a utensil can do) and food affordances (what the food allows). Tool affordances are learned offline through calibration, where different utensils interact with a variety of foods to model their functional capabilities. Food affordances are characterized by physical properties such as softness, moisture, and viscosity, initially inferred through commonsense reasoning with a visually conditioned language model and then dynamically refined online through multi-modal visuo-haptic perception using SAVOR-Net during interaction. Our method integrates these offline and online estimates to predict skill affordances in real time, enabling the robot to select the most appropriate skill for each food item. Evaluated on 20 single-item foods and 10 in-the-wild meals, our approach improves the bite acquisition success rate by 13% over state-of-the-art (SOTA) category-based methods (e.g., using a skewer for fruits). These results highlight the importance of modeling interaction-driven skill affordances for generalizable and effective robot-assisted bite acquisition. Website: https://emprise.cs.cornell.edu/savor/
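The decision structure the abstract describes, combining an offline-calibrated tool affordance with an online-refined food-property estimate to score candidate skills, can be illustrated with a toy sketch. This is not the authors' implementation: the hand-written scoring heuristics, the property fields, and names such as tool_affordance and refine are hypothetical stand-ins for SAVOR's learned components (the calibration data, the vision-language prior, and SAVOR-Net's visuo-haptic updates).

# A minimal, hypothetical sketch of SAVOR's skill-selection idea, not the
# authors' implementation: a skill affordance is scored by combining a
# tool affordance (learned offline via calibration in the paper, reduced
# here to hand-written heuristics) with a food-property estimate that is
# refined online from haptic feedback. All names and numbers are made up.

from dataclasses import dataclass

SKILLS = ["skewer", "scoop", "twirl"]

@dataclass
class FoodProperties:
    softness: float   # 0 = very firm, 1 = very soft
    moisture: float   # 0 = dry, 1 = wet
    viscosity: float  # 0 = loose, 1 = sticky/stringy

def tool_affordance(skill: str, food: FoodProperties) -> float:
    """Toy stand-in for the offline-calibrated tool model: how well a
    skill works as a function of the current food-property estimate."""
    if skill == "skewer":
        return 1.0 - 0.5 * food.softness   # firm items hold on a fork
    if skill == "scoop":
        return 0.4 + 0.5 * food.moisture   # soft/wet items scoop well
    if skill == "twirl":
        return food.viscosity              # noodle-like items twirl well
    return 0.0

def select_skill(food: FoodProperties) -> str:
    """Pick the skill with the highest predicted affordance score."""
    return max(SKILLS, key=lambda s: tool_affordance(s, food))

def refine(food: FoodProperties, sensed_softness: float, gain: float = 0.5) -> None:
    """Toy stand-in for SAVOR-Net's online visuo-haptic update: nudge the
    property estimate toward what the last interaction actually felt like."""
    food.softness += gain * (sensed_softness - food.softness)

if __name__ == "__main__":
    # Prior from commonsense reasoning (the paper uses a visually
    # conditioned language model); here: a warm, tender steak.
    steak = FoodProperties(softness=0.9, moisture=0.6, viscosity=0.1)
    print(select_skill(steak))          # -> "scoop" under this toy prior
    refine(steak, sensed_softness=0.1)  # haptics: steak cooled and firmed
    print(select_skill(steak))          # -> "skewer" after the online update

In the paper these scores come from learned models and the update draws on multiple sensing modalities; the sketch only shows the loop the abstract describes: a commonsense prior in, interaction feedback, and re-scored skills.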
Comments: Conference on Robot Learning, Oral
Subjects: Robotics (cs.RO)
Cite as: arXiv:2506.02353 [cs.RO]
  (or arXiv:2506.02353v2 [cs.RO] for this version)
  https://doi.org/10.48550/arXiv.2506.02353
arXiv-issued DOI via DataCite

Submission history

From: Zhanxin Wu
[v1] Tue, 3 Jun 2025 01:14:45 UTC (10,292 KB)
[v2] Mon, 1 Sep 2025 14:56:48 UTC (10,920 KB)
