Computer Science > Graphics

arXiv:2504.01483 (cs)
[Submitted on 2 Apr 2025 (v1), last revised 9 Jun 2025 (this version, v3)]

Title: GarmageNet: A Multimodal Generative Framework for Sewing Pattern Design and Generic Garment Modeling


Authors: Siran Li, Chen Liu, Ruiyang Liu, Zhendong Wang, Gaofeng He, Yong-Lu Li, Xiaogang Jin, Huamin Wang
Abstract: Realistic digital garment modeling remains a labor-intensive task due to the intricate process of translating 2D sewing patterns into high-fidelity, simulation-ready 3D garments. We introduce GarmageNet, a unified generative framework that automates the creation of 2D sewing patterns, the construction of sewing relationships, and the synthesis of 3D garment initializations compatible with physics-based simulation. Central to our approach is Garmage, a novel garment representation that encodes each panel as a structured geometry image, effectively bridging the semantic and geometric gap between 2D structural patterns and 3D garment shapes. GarmageNet employs a latent diffusion transformer to synthesize panel-wise geometry images and integrates GarmageJigsaw, a neural module for predicting point-to-point sewing connections along panel contours. To support training and evaluation, we build GarmageSet, a large-scale dataset comprising over 10,000 professionally designed garments with detailed structural and style annotations. Our method demonstrates versatility and efficacy across multiple application scenarios, including scalable garment generation from multi-modal design concepts (text prompts, sketches, photographs), automatic modeling from raw flat sewing patterns, pattern recovery from unstructured point clouds, and progressive garment editing using conventional instructions, laying the foundation for fully automated, production-ready pipelines in digital fashion. Project page: https://style3d.github.io/garmagenet.
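To make the Garmage representation concrete: a geometry image stores a surface as a regular 2D grid whose pixels hold 3D positions, so a panel's draped shape can be handled with image-based architectures. The sketch below is illustrative only; the panel shape, resolution, and lifting function are assumptions, not the paper's actual pipeline.

```python
import numpy as np

# Illustrative sketch of the "geometry image" idea from the abstract:
# each sewing panel is resampled onto a regular UV grid, and every grid
# cell stores the 3D position of the corresponding surface point.
# The toy panel below is a placeholder, not the paper's data.

def make_geometry_image(uv_to_3d, res=32):
    """Sample a panel's 3D surface over a regular (res x res) UV grid."""
    u = np.linspace(0.0, 1.0, res)
    v = np.linspace(0.0, 1.0, res)
    uu, vv = np.meshgrid(u, v, indexing="ij")
    # Stack per-pixel 3D coordinates into an image-like (res, res, 3) array.
    return np.stack(uv_to_3d(uu, vv), axis=-1)

def toy_panel(uu, vv):
    # A gently curved rectangle (z bulges in the middle), standing in
    # for a simulated garment panel.
    z = 0.1 * np.sin(np.pi * uu) * np.sin(np.pi * vv)
    return uu, vv, z

gim = make_geometry_image(toy_panel, res=32)
print(gim.shape)  # (32, 32, 3)
```

Because the result is a fixed-size array, panels of very different outlines become uniform tensors, which is what lets a latent diffusion transformer synthesize them "panel-wise" like images.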
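The "point-to-point sewing connections" that GarmageJigsaw predicts are, as data, a correspondence between boundary points of two panels. The paper uses a neural module for this; the snippet below substitutes a simple nearest-neighbour match between two toy contours purely to illustrate what such a correspondence looks like.

```python
import numpy as np

# Hypothetical stand-in for neural sewing prediction: pair each boundary
# point of one panel contour with its nearest point on another contour.
# The straight edges below are toy data, not garment panels.

def match_contours(contour_a, contour_b):
    """Return, for each point of contour_a, the index of the nearest
    point on contour_b. Shapes: (N, 2) and (M, 2)."""
    d = np.linalg.norm(contour_a[:, None, :] - contour_b[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Two nearly coincident panel edges that should be sewn together.
edge_a = np.stack([np.linspace(0, 1, 5), np.zeros(5)], axis=1)
edge_b = np.stack([np.linspace(0, 1, 5), np.full(5, 0.01)], axis=1)
print(match_contours(edge_a, edge_b))  # [0 1 2 3 4]
```

A learned module is needed in practice because real seams join contours of different lengths, orientations, and sampling densities, where naive nearest-neighbour matching breaks down.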
Subjects: Graphics (cs.GR); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2504.01483 [cs.GR]
  (or arXiv:2504.01483v3 [cs.GR] for this version)
  https://doi.org/10.48550/arXiv.2504.01483
arXiv-issued DOI via DataCite

Submission history

From: Ruiyang Liu
[v1] Wed, 2 Apr 2025 08:37:32 UTC (22,716 KB)
[v2] Thu, 5 Jun 2025 08:21:51 UTC (40,692 KB)
[v3] Mon, 9 Jun 2025 11:06:19 UTC (46,393 KB)


