
arXiv:2504.02361v2 (cs)
[Submitted on 3 Apr 2025 (v1), revised 4 Apr 2025 (this version, v2), latest version 14 Jul 2025 (v3)]

Title: MG-Gen: Single Image to Motion Graphics Generation with Layer Decomposition


Authors: Takahiro Shirakawa, Tomoyuki Suzuki, Daichi Haraguchi
Abstract: General image-to-video generation methods often produce suboptimal animations that fail to meet the requirements of motion graphics: they lack active text motion and exhibit object distortion. Meanwhile, code-based animation generation methods typically require layer-structured vector data, which is rarely available for motion graphics generation. To address these challenges, we propose MG-Gen, a novel framework that reconstructs vector-format data from a single raster image, extending code-based methods to generate motion graphics from a raster image within the general image-to-video setting. MG-Gen first decomposes the input image into layer-wise elements, reconstructs them as HTML data, and then generates executable JavaScript code for the reconstructed HTML. We experimentally confirm that MG-Gen generates motion graphics while preserving text readability and consistency with the input. These results indicate that combining layer decomposition with animation code generation is an effective strategy for motion graphics generation.
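The pipeline the abstract describes (layer decomposition → HTML reconstruction → generated animation code) can be illustrated with a minimal sketch. The layer format and function names below are assumptions made for illustration, not the paper's actual implementation: each decomposed layer becomes an absolutely positioned HTML element, and the generated animation code is plain JavaScript using the Web Animations API.

```javascript
// Hypothetical sketch of MG-Gen-style output. Layers from the
// decomposition step are represented as simple records.
const layers = [
  { id: "bg",    type: "image", src: "bg.png", x: 0,  y: 0,  w: 640, h: 360 },
  { id: "title", type: "text",  text: "SALE",  x: 40, y: 50, w: 200, h: 60 },
];

// Reconstruct the layers as an HTML fragment: one absolutely
// positioned element per layer, preserving position and size.
function layersToHtml(layers) {
  return layers
    .map((l) => {
      const style =
        `position:absolute;left:${l.x}px;top:${l.y}px;` +
        `width:${l.w}px;height:${l.h}px;`;
      return l.type === "text"
        ? `<div id="${l.id}" style="${style}">${l.text}</div>`
        : `<img id="${l.id}" src="${l.src}" style="${style}">`;
    })
    .join("\n");
}

// Generate executable animation code for one layer: here, a slide-in
// with fade, expressed via the Web Animations API.
function animationCodeFor(layerId) {
  return `document.getElementById("${layerId}").animate(
  [{ transform: "translateX(-100px)", opacity: 0 },
   { transform: "translateX(0)",      opacity: 1 }],
  { duration: 600, easing: "ease-out", fill: "forwards" }
);`;
}

const html = layersToHtml(layers);
const js = animationCodeFor("title");
console.log(html);
console.log(js);
```

Running the reconstructed HTML together with the generated script in a browser would animate the text layer while leaving the background layer static, which mirrors the "active text motion without object distortion" goal stated in the abstract.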
Subjects: Graphics (cs.GR); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2504.02361 [cs.GR]
  (or arXiv:2504.02361v2 [cs.GR] for this version)
  https://doi.org/10.48550/arXiv.2504.02361
arXiv-issued DOI via DataCite

Submission history

From: Takahiro Shirakawa
[v1] Thu, 3 Apr 2025 07:52:12 UTC (11,372 KB)
[v2] Fri, 4 Apr 2025 01:21:39 UTC (11,372 KB)
[v3] Mon, 14 Jul 2025 05:22:55 UTC (10,040 KB)