
Computer Science > Graphics

arXiv:2504.05740 (cs)
[Submitted on 8 Apr 2025 (v1), last revised 2 Sep 2025 (this version, v2)]

Title: Micro-splatting: Multistage Isotropy-informed Covariance Regularization Optimization for High-Fidelity 3D Gaussian Splatting


Authors: Jee Won Lee, Hansol Lim, Sooyeun Yang, Jongseong Brad Choi
Abstract: High-fidelity 3D Gaussian Splatting methods excel at capturing fine textures but often overlook model compactness, resulting in massive splat counts, bloated memory, long training, and complex post-processing. We present Micro-Splatting: Two-Stage Adaptive Growth and Refinement, a unified, in-training pipeline that preserves visual detail while drastically reducing model complexity without any post-processing or auxiliary neural modules. In Stage I (Growth), we introduce a trace-based covariance regularization to maintain near-isotropic Gaussians, mitigating low-pass filtering in high-frequency regions and improving spherical-harmonic color fitting. We then apply gradient-guided adaptive densification that subdivides splats only in visually complex regions, leaving smooth areas sparse. In Stage II (Refinement), we prune low-impact splats using a simple opacity-scale importance score and merge redundant neighbors via lightweight spatial and feature thresholds, producing a lean yet detail-rich model. On four object-centric benchmarks, Micro-Splatting reduces splat count and model size by up to 60% and shortens training by 20%, while matching or surpassing state-of-the-art PSNR, SSIM, and LPIPS in real-time rendering. These results demonstrate that Micro-Splatting delivers both compactness and high fidelity in a single, efficient, end-to-end framework.
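The abstract names two key ingredients — a trace-based covariance regularizer that keeps Gaussians near-isotropic, and an opacity-scale importance score for pruning — without giving their exact formulas. The sketch below is one plausible reading, not the paper's actual implementation: `isotropy_regularizer` penalizes the deviation of each splat's covariance eigenvalues (the squared per-axis scales) from their trace-derived mean, and `importance_score` weights opacity by a volume proxy. All function names and the specific penalty form are assumptions for illustration.

```python
import numpy as np

def isotropy_regularizer(scales):
    """Trace-based covariance regularizer (sketch).

    scales: (N, 3) per-axis standard deviations of N Gaussian splats.
    Penalizes deviation of each diagonal covariance's eigenvalues
    (scales**2) from their mean tr(Sigma)/3, pushing splats toward
    isotropy so high-frequency detail is not low-pass filtered.
    """
    var = scales ** 2                       # eigenvalues of a diagonal covariance
    trace = var.sum(axis=1, keepdims=True)  # tr(Sigma) per splat
    mean_eig = trace / 3.0                  # isotropic target eigenvalue
    # normalized squared deviation from isotropy, averaged over all splats
    return float(np.mean(((var - mean_eig) ** 2) / (mean_eig ** 2 + 1e-12)))

def importance_score(opacity, scales):
    """Opacity-scale importance for pruning (sketch).

    opacity: (N,) splat opacities; scales: (N, 3) per-axis scales.
    Scores each splat by opacity times a volume proxy; splats with
    scores below a threshold would be pruned in Stage II.
    """
    return opacity * scales.prod(axis=1)
```

A perfectly isotropic splat (equal scales on all axes) scores zero under the regularizer, while elongated splats accumulate a positive penalty that the optimizer can trade off against the photometric loss.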
Comments: This work has been submitted to a journal for potential publication
Subjects: Graphics (cs.GR); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2504.05740 [cs.GR]
  (or arXiv:2504.05740v2 [cs.GR] for this version)
  https://doi.org/10.48550/arXiv.2504.05740
arXiv-issued DOI via DataCite

Submission history

From: Hansol Lim [view email]
[v1] Tue, 8 Apr 2025 07:15:58 UTC (1,244 KB)
[v2] Tue, 2 Sep 2025 10:05:44 UTC (27,059 KB)