
Computer Science > Computer Vision and Pattern Recognition

arXiv:2407.03653 (cs)
[Submitted on 4 Jul 2024 (v1), last revised 16 May 2025 (this version, v5)]

Title: reBEN: Refined BigEarthNet Dataset for Remote Sensing Image Analysis


Authors: Kai Norman Clasen, Leonard Hackel, Tom Burgert, Gencer Sumbul, Begüm Demir, Volker Markl
Abstract: This paper presents refined BigEarthNet (reBEN), a large-scale, multi-modal remote sensing dataset constructed to support deep learning (DL) studies for remote sensing image analysis. The reBEN dataset consists of 549,488 pairs of Sentinel-1 and Sentinel-2 image patches. To construct reBEN, we initially consider the Sentinel-1 and Sentinel-2 tiles used to construct the BigEarthNet dataset and then divide them into patches of size 1200 m x 1200 m. We apply atmospheric correction to the Sentinel-2 patches using the latest version of the sen2cor tool, resulting in higher-quality patches compared to those present in BigEarthNet. Each patch is then associated with a pixel-level reference map and scene-level multi-labels, making reBEN suitable for both pixel- and scene-based learning tasks. The labels are derived from the most recent CORINE Land Cover (CLC) map of 2018, using the same 19-class nomenclature as BigEarthNet. The use of the most recent CLC map overcomes the label noise present in BigEarthNet. Furthermore, we introduce a new geography-based split assignment algorithm that significantly reduces the spatial correlation among the train, validation, and test sets compared to those in BigEarthNet, increasing the reliability of DL model evaluation. To minimize DL model training time, we provide software tools that convert the reBEN dataset into a DL-optimized data format. In our experiments, we show the potential of reBEN for multi-modal multi-label image classification problems by considering several state-of-the-art DL models. The pre-trained model weights, associated code, and complete dataset are available at https://bigearth.net.
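The geography-based split idea described in the abstract can be sketched in a few lines: patches that fall in the same coarse spatial grid cell are deterministically assigned to the same set, so spatially adjacent patches never straddle the train/validation/test boundary. This is a minimal illustrative sketch only; the cell size, split ratios, and hashing scheme below are assumptions for illustration, not the algorithm used by the reBEN authors (see their pipeline repository for the actual implementation).

```python
import hashlib

def assign_split(lon, lat, cell_deg=0.5, ratios=(0.6, 0.2, 0.2)):
    """Map a patch center (lon, lat) to 'train', 'validation', or 'test'.

    Hypothetical sketch: snap the patch to a coarse grid cell, then hash
    the cell so every patch in that cell lands in the same split.
    """
    # Snap the patch center to a coarse grid cell (cell_deg is an assumption).
    cell = (int(lon // cell_deg), int(lat // cell_deg))
    # Hash the cell deterministically into [0, 1).
    digest = hashlib.sha256(repr(cell).encode()).hexdigest()
    u = int(digest[:8], 16) / 0xFFFFFFFF
    if u < ratios[0]:
        return "train"
    if u < ratios[0] + ratios[1]:
        return "validation"
    return "test"

# Nearby patches fall in the same 0.5-degree cell and thus the same split,
# which is what reduces spatial correlation between the sets.
print(assign_split(13.40, 52.52) == assign_split(13.41, 52.53))
```

Because the assignment is a pure function of the grid cell, the split is reproducible across runs, and enlarging `cell_deg` widens the minimum spatial gap between sets at the cost of coarser ratio control.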
Comments: Accepted at IEEE International Geoscience and Remote Sensing Symposium (IGARSS) 2025. Our code is available at https://github.com/rsim-tu-berlin/bigearthnet-pipeline
Subjects: Computer Vision and Pattern Recognition (cs.CV); Image and Video Processing (eess.IV)
Cite as: arXiv:2407.03653 [cs.CV]
  (or arXiv:2407.03653v5 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2407.03653
arXiv-issued DOI via DataCite

Submission history

From: Kai Norman Clasen
[v1] Thu, 4 Jul 2024 05:48:28 UTC (391 KB)
[v2] Mon, 29 Jul 2024 12:53:20 UTC (391 KB)
[v3] Thu, 16 Jan 2025 08:55:49 UTC (406 KB)
[v4] Wed, 16 Apr 2025 13:44:46 UTC (406 KB)
[v5] Fri, 16 May 2025 15:49:37 UTC (408 KB)

