
Computer Science > Graphics

arXiv:2504.02747 (cs)
[Submitted on 3 Apr 2025]

Title: GEOPARD: Geometric Pretraining for Articulation Prediction in 3D Shapes

Authors: Pradyumn Goyal, Dmitry Petrov, Sheldon Andrews, Yizhak Ben-Shabat, Hsueh-Ti Derek Liu, Evangelos Kalogerakis
Abstract: We present GEOPARD, a transformer-based architecture for predicting articulation from a single static snapshot of a 3D shape. The key idea of our method is a pretraining strategy that allows our transformer to learn plausible candidate articulations for 3D shapes based on a geometric-driven search without manual articulation annotation. The search automatically discovers physically valid part motions that do not cause detachments or collisions with other shape parts. Our experiments indicate that this geometric pretraining strategy, along with carefully designed choices in our transformer architecture, yields state-of-the-art results in articulation inference on the PartNet-Mobility dataset.
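
The geometric search described above can be pictured concretely. The following Python sketch is a minimal illustration, not the authors' implementation: the point-cloud proximity proxy, the thresholds, and the function name are all assumptions made for exposition. It sweeps a candidate part about a hypothesized rotation axis and accepts the motion only if the part never interpenetrates the remaining parts (no collision) and never loses contact with them (no detachment):

    # Minimal sketch of a geometric validity test for a candidate articulation.
    # NOT the paper's code: the point-cloud proxy, thresholds, and names are
    # illustrative assumptions. Requires numpy and scipy.
    import numpy as np
    from scipy.spatial import cKDTree
    from scipy.spatial.transform import Rotation

    def is_valid_articulation(part_pts, rest_pts, axis, pivot, angles,
                              contact_eps=0.02, collision_eps=0.005):
        """Sweep `part_pts` about the axis (`axis` through `pivot`) by each
        angle in `angles`; reject the motion if the part ever penetrates the
        static parts (collision) or loses contact with them (detachment)."""
        tree = cKDTree(rest_pts)               # nearest-neighbor index over the static parts
        axis = axis / np.linalg.norm(axis)
        for angle in angles:
            rot = Rotation.from_rotvec(angle * axis)
            moved = rot.apply(part_pts - pivot) + pivot
            d_min = tree.query(moved)[0].min()  # closest approach to the rest of the shape
            if d_min < collision_eps:           # too close: treat as interpenetration
                return False
            if d_min > contact_eps:             # no point in contact: part detached
                return False
        return True

A search over candidate axes and pivots that keeps only motions passing such a test would, as the abstract describes, discover physically valid part articulations without any manual annotation.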
Subjects: Graphics (cs.GR)
Cite as: arXiv:2504.02747 [cs.GR]
  (or arXiv:2504.02747v1 [cs.GR] for this version)
  https://doi.org/10.48550/arXiv.2504.02747
arXiv-issued DOI via DataCite

Submission history

From: Pradyumn Goyal
[v1] Thu, 3 Apr 2025 16:35:17 UTC (14,584 KB)