Computer Science > Robotics

arXiv:2509.14510 (cs)
[Submitted on 18 Sep 2025]

Title: Object Recognition and Force Estimation with the GelSight Baby Fin Ray


Authors: Sandra Q. Liu, Yuxiang Ma, Edward H. Adelson
Abstract: Recent advances in soft robotic hands and tactile sensing have enabled both to perform an increasing number of complex tasks with the aid of machine learning. In particular, our previous work presented the GelSight Baby Fin Ray, which integrates a camera into a soft, compliant Fin Ray structure. Camera-based tactile sensing gives the GelSight Baby Fin Ray the ability to capture rich contact information such as forces, object geometries, and textures. Moreover, our previous work showed that the GelSight Baby Fin Ray can dig through clutter and classify in-shell nuts. To further examine the potential of the GelSight Baby Fin Ray, we leverage learning to distinguish nut-in-shell textures and to perform force and position estimation. We implement ablation studies with popular neural network structures, including ResNet50, GoogLeNet, and 3- and 5-layer convolutional neural network (CNN) structures. We conclude that machine learning is a promising technique for extracting useful information from high-resolution tactile images and empowering soft robots to better understand and interact with their environments.
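The abstract describes feeding high-resolution tactile images into small CNNs with both classification (nut-in-shell texture) and regression (force and position) outputs. As a minimal, framework-free sketch of that idea — where the image size, layer widths, number of classes, and random weights are all illustrative assumptions, not the authors' architecture — a tiny CNN forward pass with two heads might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w, b):
    """Valid cross-correlation: x is (C_in, H, W), w is (C_out, C_in, kH, kW)."""
    c_out, c_in, kh, kw = w.shape
    _, h, wd = x.shape
    out = np.zeros((c_out, h - kh + 1, wd - kw + 1))
    for o in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i + kh, j:j + kw] * w[o]) + b[o]
    return out

def relu(x):
    return np.maximum(x, 0)

def maxpool2(x):
    """2x2 max pooling, truncating odd edges."""
    c, h, w = x.shape
    x = x[:, :h // 2 * 2, :w // 2 * 2]
    return x.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

# Stand-in for an RGB tactile image from the camera-based sensor.
x = rng.standard_normal((3, 16, 16))

# Two conv blocks (conv -> ReLU -> pool) with small random weights.
w1, b1 = rng.standard_normal((4, 3, 3, 3)) * 0.1, np.zeros(4)
w2, b2 = rng.standard_normal((8, 4, 3, 3)) * 0.1, np.zeros(8)
h1 = maxpool2(relu(conv2d(x, w1, b1)))    # (4, 7, 7)
h2 = maxpool2(relu(conv2d(h1, w2, b2)))   # (8, 2, 2)
feat = h2.ravel()                          # 32-dim shared feature

# Two heads on the shared feature: texture classification and
# (normal force, contact x, contact y) regression.
w_cls = rng.standard_normal((5, feat.size)) * 0.1  # 5 hypothetical nut classes
w_reg = rng.standard_normal((3, feat.size)) * 0.1

logits = w_cls @ feat
probs = np.exp(logits - logits.max())
probs /= probs.sum()
force_xy = w_reg @ feat

print("class probs:", np.round(probs, 3))
print("force/position estimate:", np.round(force_xy, 3))
```

In practice one would train such heads with cross-entropy and mean-squared-error losses respectively; the paper's ablations swap this shared backbone for deeper architectures such as ResNet50 and GoogLeNet.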
Comments: Presented at CoRL 2023 as part of the workshop, "Learning for Soft Robots: Hard Challenges for Soft Systems" (website: https://sites.google.com/view/corl-2023-soft-robots-ws)
Subjects: Robotics (cs.RO)
Cite as: arXiv:2509.14510 [cs.RO]
  (or arXiv:2509.14510v1 [cs.RO] for this version)
  https://doi.org/10.48550/arXiv.2509.14510

Submission history

From: Sandra Liu [view email]
[v1] Thu, 18 Sep 2025 00:52:56 UTC (4,880 KB)