
Computer Science > Graphics

arXiv:2509.21541 (cs)
[Submitted on 25 Sep 2025]

Title: ControlHair: Physically-based Video Diffusion for Controllable Dynamic Hair Rendering


Authors: Weikai Lin, Haoxiang Li, Yuhao Zhu
Abstract: Hair simulation and rendering are challenging due to complex strand dynamics, diverse material properties, and intricate light-hair interactions. Recent video diffusion models can generate high-quality videos, but they lack fine-grained control over hair dynamics. We present ControlHair, a hybrid framework that integrates a physics simulator with conditional video diffusion to enable controllable dynamic hair rendering. ControlHair adopts a three-stage pipeline: it first encodes physics parameters (e.g., hair stiffness, wind) into per-frame geometry using a simulator, then extracts per-frame control signals, and finally feeds the control signals into a video diffusion model to generate videos with the desired hair dynamics. This cascaded design decouples physics reasoning from video generation, supports diverse physics, and makes training the video diffusion model easy. Trained on a curated 10K video dataset, ControlHair outperforms text- and pose-conditioned baselines, delivering precisely controlled hair dynamics. We further demonstrate three use cases of ControlHair: dynamic hairstyle try-on, bullet-time effects, and cinemagraphs. ControlHair introduces the first physics-informed video diffusion framework for controllable dynamics. We provide a teaser video and experimental results on our website.
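The three-stage pipeline in the abstract (physics parameters → per-frame geometry → per-frame control signals → conditioned diffusion) can be sketched as follows. This is a hypothetical illustration only: every function and parameter name here is invented, the paper does not publish this API, and the simulator and diffusion model are stand-in callables rather than the authors' implementations.

```python
# Hypothetical sketch of ControlHair's cascaded three-stage design (all names
# invented for illustration; not the authors' actual code or API).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class PhysicsParams:
    stiffness: float  # hair strand stiffness
    wind: float       # wind strength

def simulate_geometry(params: PhysicsParams, num_frames: int) -> List[dict]:
    """Stage 1 (stand-in): a physics simulator encodes parameters into
    per-frame geometry. A real simulator would integrate strand dynamics;
    here we record a toy displacement driven by wind and damped by stiffness."""
    return [{"frame": t, "displacement": params.wind * t / (1.0 + params.stiffness)}
            for t in range(num_frames)]

def extract_control_signals(geometry: List[dict]) -> List[dict]:
    """Stage 2 (stand-in): derive a per-frame control signal from geometry,
    e.g. the motion between consecutive frames."""
    signals, prev = [], 0.0
    for g in geometry:
        signals.append({"frame": g["frame"], "motion": g["displacement"] - prev})
        prev = g["displacement"]
    return signals

def render_video(signals: List[dict], diffusion_model: Callable) -> List[str]:
    """Stage 3 (stand-in): a conditional video diffusion model consumes the
    control signals; modeled here as a plain callable per frame."""
    return [diffusion_model(s) for s in signals]

# Usage with a dummy "model" standing in for the diffusion backbone:
params = PhysicsParams(stiffness=2.0, wind=1.5)
geometry = simulate_geometry(params, num_frames=4)
signals = extract_control_signals(geometry)
frames = render_video(signals, diffusion_model=lambda s: f"frame_{s['frame']}")
```

The point of the cascade, as the abstract notes, is that physics reasoning (Stages 1-2) is fully decoupled from generation (Stage 3), so the diffusion model only ever sees control signals and never needs to learn the physics itself.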
Comments: 9 pages. Project website: https://ctrlhair-arxiv.netlify.app/
Subjects: Graphics (cs.GR); Computer Vision and Pattern Recognition (cs.CV)
ACM classes: I.3; I.2; I.4
Cite as: arXiv:2509.21541 [cs.GR]
  (or arXiv:2509.21541v1 [cs.GR] for this version)
  https://doi.org/10.48550/arXiv.2509.21541
arXiv-issued DOI via DataCite (pending registration)

Submission history

From: Weikai Lin
[v1] Thu, 25 Sep 2025 20:29:05 UTC (5,466 KB)

