
Computer Science > Robotics

arXiv:2507.11069v1 (cs)
[Submitted on 15 Jul 2025 (this version), latest version 26 Aug 2025 (v3)]

Title: TRAN-D: 2D Gaussian Splatting-based Sparse-view Transparent Object Depth Reconstruction via Physics Simulation for Scene Update


Authors: Jeongyun Kim, Seunghoon Jeong, Giseop Kim, Myung-Hwan Jeon, Eunji Jun, Ayoung Kim
Abstract: Understanding the 3D geometry of transparent objects from RGB images is challenging due to their inherent physical properties, such as reflection and refraction. To address these difficulties, especially in scenarios with sparse views and dynamic environments, we introduce TRAN-D, a novel 2D Gaussian Splatting-based depth reconstruction method for transparent objects. Our key insight lies in separating transparent objects from the background, enabling focused optimization of the Gaussians corresponding to each object. We mitigate artifacts with an object-aware loss that places Gaussians in obscured regions, ensuring coverage of invisible surfaces while reducing overfitting. Furthermore, we incorporate a physics-based simulation that refines the reconstruction in just a few seconds, effectively handling object removal and the chain-reaction movement of remaining objects without the need for rescanning. TRAN-D is evaluated on both synthetic and real-world sequences, and it consistently demonstrates robust improvements over existing GS-based state-of-the-art methods. Compared with the baselines, TRAN-D reduces the mean absolute error by over 39% on the synthetic TRansPose sequences. Furthermore, despite being updated using only one image, TRAN-D reaches a δ < 2.5 cm accuracy of 48.46%, over 1.5 times that of the baselines, which use six images. Code and more results are available at https://jeongyun0609.github.io/TRAN-D/.
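To make the object-aware loss idea concrete, below is a minimal sketch in PyTorch. The function name, the mask source, and the background weight lambda_bg are illustrative assumptions, not the authors' implementation; the paper's actual loss additionally places Gaussians in obscured regions to cover invisible surfaces.

import torch

def object_aware_loss(rendered: torch.Tensor,
                      target: torch.Tensor,
                      object_mask: torch.Tensor,
                      lambda_bg: float = 0.1) -> torch.Tensor:
    # Per-pixel L1 photometric error; rendered/target are (3, H, W)
    # images, object_mask is (H, W) with 1 on segmented transparent objects.
    l1 = (rendered - target).abs().mean(dim=0)
    # Average the error separately over object and background pixels, then
    # down-weight the background term so optimization concentrates the
    # Gaussians on the transparent object rather than the scene behind it.
    obj = (l1 * object_mask).sum() / object_mask.sum().clamp(min=1)
    bg = (l1 * (1 - object_mask)).sum() / (1 - object_mask).sum().clamp(min=1)
    return obj + lambda_bg * bg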
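The reported numbers can be read as standard depth-evaluation metrics. A sketch follows, assuming per-pixel metric depth maps and a validity mask derived from the ground truth; the exact masking and averaging conventions are assumptions, and the δ threshold is in meters (2.5 cm → 0.025).

import numpy as np

def delta_accuracy(pred_depth: np.ndarray, gt_depth: np.ndarray,
                   delta: float = 0.025) -> float:
    # Fraction of valid pixels whose absolute depth error is below delta.
    valid = gt_depth > 0                  # pixels with ground-truth depth
    err = np.abs(pred_depth[valid] - gt_depth[valid])
    return float((err < delta).mean())

def mean_absolute_error(pred_depth: np.ndarray, gt_depth: np.ndarray) -> float:
    # The MAE behind the "over 39% reduction" comparison.
    valid = gt_depth > 0
    return float(np.abs(pred_depth[valid] - gt_depth[valid]).mean())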
Subjects: Robotics (cs.RO); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2507.11069 [cs.RO]
  (or arXiv:2507.11069v1 [cs.RO] for this version)
  https://doi.org/10.48550/arXiv.2507.11069
arXiv-issued DOI via DataCite

Submission history

From: Jeongyun Kim
[v1] Tue, 15 Jul 2025 08:02:37 UTC (31,608 KB)
[v2] Wed, 16 Jul 2025 12:02:03 UTC (31,608 KB)
[v3] Tue, 26 Aug 2025 04:10:34 UTC (27,209 KB)