Computer Science > Robotics

arXiv:2506.01941v1 (cs)
[Submitted on 2 Jun 2025]

Title: FreeTacMan: Robot-free Visuo-Tactile Data Collection System for Contact-rich Manipulation

Authors: Longyan Wu, Checheng Yu, Jieji Ren, Li Chen, Ran Huang, Guoying Gu, Hongyang Li
Abstract: Enabling robots to perform contact-rich manipulation remains a pivotal challenge in robot learning, one substantially hindered by the data collection gap: inefficient collection pipelines and limited sensor setups. While prior work has explored handheld paradigms, their rod-based mechanical structures remain rigid and unintuitive, providing limited tactile feedback and posing challenges for human operators. Motivated by the dexterity and force feedback of human motion, we propose FreeTacMan, a human-centric and robot-free data collection system for accurate and efficient robot manipulation. Concretely, we design a wearable data collection device with dual visuo-tactile grippers, which can be worn on human fingers for intuitive and natural control. A high-precision optical tracking system captures end-effector poses while visual and tactile feedback are synchronized. FreeTacMan achieves multiple improvements in data collection performance over prior work and, with the help of visuo-tactile information, enables effective policy learning for contact-rich manipulation tasks. We will release the work to facilitate reproducibility and accelerate research in visuo-tactile manipulation.
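The abstract highlights synchronized capture of visual feedback, tactile feedback, and optically tracked end-effector poses. As a rough illustration only, the sketch below shows one common way such streams can be aligned by timestamp; the stream names, sample rates, and 10 ms tolerance are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: align timestamped camera, tactile, and pose streams
# into synchronized training frames. Names, rates, and the tolerance are
# assumptions; the abstract does not specify FreeTacMan's actual pipeline.
from bisect import bisect_left
from typing import Any, List, Tuple

Stamped = Tuple[float, Any]  # (timestamp in seconds, payload)

def nearest(stream: List[Stamped], t: float) -> Stamped:
    """Return the sample in `stream` whose timestamp is closest to t.
    Assumes `stream` is non-empty and sorted by timestamp."""
    keys = [s[0] for s in stream]
    i = bisect_left(keys, t)
    candidates = stream[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))

def synchronize(camera: List[Stamped],
                tactile: List[Stamped],
                poses: List[Stamped],
                tol: float = 0.01) -> List[dict]:
    """Build frames keyed to the camera clock; drop frames whose nearest
    tactile/pose samples are farther than `tol` seconds away."""
    frames = []
    for t, img in camera:
        tt, tac = nearest(tactile, t)
        tp, pose = nearest(poses, t)
        if abs(tt - t) <= tol and abs(tp - t) <= tol:
            frames.append({"t": t, "image": img, "tactile": tac, "pose": pose})
    return frames

if __name__ == "__main__":
    # Synthetic 30 Hz camera, 100 Hz tactile, 120 Hz tracker streams.
    cam = [(i / 30, f"img{i}") for i in range(30)]
    tac = [(i / 100, f"tac{i}") for i in range(100)]
    pos = [(i / 120, f"pose{i}") for i in range(120)]
    print(len(synchronize(cam, tac, pos)), "aligned frames")
```

Nearest-neighbor matching against the slowest (camera) clock is one simple design choice; a real system would more likely rely on hardware triggers or interpolation for the high-rate tactile and tracking streams.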
Subjects: Robotics (cs.RO)
Cite as: arXiv:2506.01941 [cs.RO]
  (or arXiv:2506.01941v1 [cs.RO] for this version)
  https://doi.org/10.48550/arXiv.2506.01941
arXiv-issued DOI via DataCite

Submission history

From: Longyan Wu
[v1] Mon, 2 Jun 2025 17:55:23 UTC (10,451 KB)