
arXiv:1911.09428v1 (eess)
[Submitted on 21 Nov 2019]

Title: Single Image Super Resolution based on a Modified U-net with Mixed Gradient Loss


Authors: Zhengyang Lu, Ying Chen
Abstract: Single image super-resolution (SISR) is the task of inferring a high-resolution image from a single low-resolution image. Recent research on super-resolution has achieved great progress owing to the development of deep convolutional neural networks in the field of computer vision. Existing super-resolution reconstruction methods perform well under the Mean Square Error (MSE) criterion, but most of them fail to reconstruct images with sharp edges. To solve this problem, a mixed gradient error, composed of MSE and a weighted mean gradient error, is proposed in this work and applied as the loss function of a modified U-net. The modified U-net removes all batch normalization layers and one of the convolution layers in each block, which reduces the number of parameters and therefore accelerates reconstruction. Compared with existing image super-resolution algorithms, the proposed method achieves better performance with lower time consumption. Experiments demonstrate that the modified U-net architecture with mixed gradient loss yields strong results on three image datasets: SET14, BSD300, and ICDAR2003. Code is available online.
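
To make the loss concrete, below is a minimal PyTorch sketch of a mixed gradient loss in the spirit of the abstract: pixel-wise MSE plus a weighted MSE between gradient-magnitude maps of the prediction and the ground truth. The Sobel-filter gradients, the function names, and the weight lambda_g are illustrative assumptions, not necessarily the paper's exact formulation.

import torch
import torch.nn.functional as F

def gradient_magnitude(img: torch.Tensor) -> torch.Tensor:
    # Per-channel gradient magnitude via Sobel filters (an assumed choice).
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], dtype=img.dtype, device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # vertical Sobel kernel is the transpose of the horizontal one
    c = img.shape[1]
    gx = F.conv2d(img, kx.repeat(c, 1, 1, 1), padding=1, groups=c)
    gy = F.conv2d(img, ky.repeat(c, 1, 1, 1), padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)  # eps keeps sqrt differentiable at zero

def mixed_gradient_loss(pred: torch.Tensor, target: torch.Tensor,
                        lambda_g: float = 0.1) -> torch.Tensor:
    # Pixel-wise MSE plus lambda_g times the mean gradient error
    # (MSE between gradient-magnitude maps).
    # lambda_g = 0.1 is a hypothetical weight; the paper's value may differ.
    mse = F.mse_loss(pred, target)
    mge = F.mse_loss(gradient_magnitude(pred), gradient_magnitude(target))
    return mse + lambda_g * mge

In a training loop this would be used as loss = mixed_gradient_loss(sr, hr), where sr and hr are (N, C, H, W) tensors holding the super-resolved output and the high-resolution ground truth.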
Subjects: Image and Video Processing (eess.IV); Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:1911.09428 [eess.IV]
  (or arXiv:1911.09428v1 [eess.IV] for this version)
  https://doi.org/10.48550/arXiv.1911.09428
arXiv-issued DOI via DataCite

Submission history

From: Zhengyang Lu
[v1] Thu, 21 Nov 2019 12:02:38 UTC (5,386 KB)