Computer Science > Machine Learning

arXiv:2506.17016 (cs)
[Submitted on 20 Jun 2025]

Title: The Hidden Cost of an Image: Quantifying the Energy Consumption of AI Image Generation

Authors:Giulia Bertazzini, Chiara Albisani, Daniele Baracchi, Dasara Shullani, Roberto Verdecchia
Abstract: With the growing adoption of AI image generation, in conjunction with the ever-increasing environmental resources demanded by AI, we are urged to answer a fundamental question: what is the environmental impact hidden behind each image we generate? In this research, we present a comprehensive empirical experiment designed to assess the energy consumption of AI image generation. Our experiment compares 17 state-of-the-art image generation models, considering multiple factors that could affect their energy consumption, such as model quantization, image resolution, and prompt length. Additionally, we use established image quality metrics to study potential trade-offs between energy consumption and generated image quality. Results show that image generation models vary drastically in the energy they consume, with up to a 46x difference. Image resolution affects energy consumption inconsistently, ranging from a 1.3x to a 4.7x increase when doubling resolution. U-Net-based models tend to consume less energy than Transformer-based ones. Model quantization, instead, deteriorates the energy efficiency of most models, while prompt length and content have no statistically significant impact. Improving image quality does not always come at the cost of higher energy consumption: some of the models producing the highest-quality images are also among the most energy-efficient.
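The core quantity the study reports — energy consumed per generated image — is typically obtained by sampling the accelerator's power draw during inference and integrating it over time. The sketch below illustrates that idea only; it is not the authors' measurement tooling, and the power samples are invented for illustration:

```python
def energy_joules(power_samples_w, interval_s):
    """Integrate evenly spaced power samples (watts) over time
    using the trapezoidal rule; returns energy in joules."""
    if len(power_samples_w) < 2:
        return 0.0
    total = 0.0
    for p0, p1 in zip(power_samples_w, power_samples_w[1:]):
        total += (p0 + p1) / 2.0 * interval_s
    return total

# Hypothetical GPU power readings (W), polled once per second
# across a single image-generation run.
samples = [220.0, 310.0, 305.0, 298.0, 120.0]
joules = energy_joules(samples, interval_s=1.0)
print(f"{joules:.1f} J ({joules / 3600.0:.4f} Wh) for this image")
```

In practice the samples would come from a hardware counter (e.g. NVIDIA's NVML power-usage query) polled in a background thread while the model generates; comparing models or settings then reduces to comparing these integrated totals.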
Subjects: Machine Learning (cs.LG); Multimedia (cs.MM)
Cite as: arXiv:2506.17016 [cs.LG]
  (or arXiv:2506.17016v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2506.17016
arXiv-issued DOI via DataCite

Submission history

From: Dasara Shullani [view email]
[v1] Fri, 20 Jun 2025 14:13:52 UTC (7,638 KB)