Computer Science > Graphics

arXiv:2504.04000 (cs)
[Submitted on 5 Apr 2025]

Title: View2CAD: Reconstructing View-Centric CAD Models from Single RGB-D Scans


Authors:James Noeckel, Benjamin Jones, Adriana Schulz, Brian Curless
Abstract: Parametric CAD models, represented as Boundary Representations (B-reps), are foundational to modern design and manufacturing workflows, offering the precision and topological breakdown required for downstream tasks such as analysis, editing, and fabrication. However, B-Reps are often inaccessible due to conversion to more standardized, less expressive geometry formats. Existing methods to recover B-Reps from measured data require complete, noise-free 3D data, which are laborious to obtain. We alleviate this difficulty by enabling the precise reconstruction of CAD shapes from a single RGB-D image. We propose a method that addresses the challenge of reconstructing only the observed geometry from a single view. To allow for these partial observations, and to avoid hallucinating incorrect geometry, we introduce a novel view-centric B-rep (VB-Rep) representation, which incorporates structures to handle visibility limits and encode geometric uncertainty. We combine panoptic image segmentation with iterative geometric optimization to refine and improve the reconstruction process. Our results demonstrate high-quality reconstruction on synthetic and real RGB-D data, showing that our method can bridge the reality gap.
Subjects: Graphics (cs.GR); Computer Vision and Pattern Recognition (cs.CV)
ACM classes: I.3.5
Cite as: arXiv:2504.04000 [cs.GR]
  (or arXiv:2504.04000v1 [cs.GR] for this version)
  https://doi.org/10.48550/arXiv.2504.04000
arXiv-issued DOI via DataCite

Submission history

From: Benjamin Jones [view email]
[v1] Sat, 5 Apr 2025 00:10:50 UTC (25,608 KB)

