
Computer Science > Computer Vision and Pattern Recognition

arXiv:2505.00630v1 (cs)
[Submitted on 1 May 2025 (this version), latest version 3 May 2025 (v2)]

Title: Vision Mamba in Remote Sensing: A Comprehensive Survey of Techniques, Applications and Outlook


Authors: Muyi Bao, Shuchang Lyu, Zhaoyang Xu, Huiyu Zhou, Jinchang Ren, Shiming Xiang, Xiangtai Li, Guangliang Cheng
Abstract: Deep learning has profoundly transformed remote sensing, yet prevailing architectures such as Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) remain constrained by critical trade-offs: CNNs suffer from limited receptive fields, while ViTs grapple with quadratic computational complexity, hindering their scalability to high-resolution remote sensing data. State Space Models (SSMs), particularly the recently proposed Mamba architecture, have emerged as a paradigm-shifting solution, combining linear computational scaling with global context modeling. This survey presents a comprehensive review of Mamba-based methodologies in remote sensing, systematically analyzing approximately 120 studies to construct a holistic taxonomy of innovations and applications. Our contributions are structured across five dimensions: (i) foundational principles of vision Mamba architectures; (ii) micro-architectural advancements such as adaptive scan strategies and hybrid SSM formulations; (iii) macro-architectural integrations, including CNN-Transformer-Mamba hybrids and frequency-domain adaptations; (iv) rigorous benchmarking against state-of-the-art methods across multiple application tasks, including object detection, semantic segmentation, and change detection; and (v) critical analysis of unresolved challenges with actionable future directions. By bridging the gap between SSM theory and remote sensing practice, this survey establishes Mamba as a transformative framework for remote sensing analysis. To our knowledge, this paper is the first systematic review of Mamba architectures in remote sensing. Our work provides a structured foundation for advancing research in remote sensing systems through SSM-based methods. We curate an open-source repository (https://github.com/BaoBao0926/Awesome-Mamba-in-Remote-Sensing) to foster community-driven advancements.
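The linear-scaling claim in the abstract is the core computational argument for SSMs over attention. As a rough illustration only (not the implementation of any surveyed method), a discretized state space layer can be evaluated with a single sequential scan whose cost grows linearly in sequence length L, in contrast to the O(L^2) pairwise interactions of full self-attention. All names and parameter values below are illustrative:

```python
import numpy as np

def ssm_scan(u, A_bar, B_bar, C):
    """Sequential scan of a discretized state space model with diagonal A:
        h[t] = A_bar * h[t-1] + B_bar * u[t],   y[t] = C . h[t]
    Cost is O(L * N) for sequence length L and state size N, versus
    O(L^2) for the pairwise score matrix of full self-attention.
    """
    N = len(C)
    h = np.zeros(N)              # hidden state, carried across the sequence
    y = np.empty(len(u))
    for t in range(len(u)):
        h = A_bar * h + B_bar * u[t]   # elementwise update (diagonal A), O(N)
        y[t] = C @ h                   # readout, O(N)
    return y

# toy run: length-8 scalar input, 4-dimensional hidden state
rng = np.random.default_rng(0)
u = rng.standard_normal(8)
A_bar = np.full(4, 0.9)        # stable diagonal transition (|a| < 1)
B_bar = rng.standard_normal(4)
C = rng.standard_normal(4)
out = ssm_scan(u, A_bar, B_bar, C)
print(out.shape)               # (8,)
```

Mamba's actual selective scan additionally makes A_bar and B_bar input-dependent and uses a parallel (hardware-aware) scan rather than this Python loop, but the linear dependence on L is the same.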
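The "adaptive scan strategies" named in dimension (ii) address a basic mismatch: SSMs are 1-D sequence models, so a 2-D feature map must be serialized before scanning. A common baseline, sketched here with illustrative names (a generic four-directional cross-scan, not any specific surveyed method), flattens the map along four orderings so each scan gives the model a different causal view of the image:

```python
import numpy as np

def cross_scan(x):
    """Serialize an H x W feature map into four 1-D scan orders:
    row-major, reversed row-major, column-major, reversed column-major.
    Returns an array of shape (4, H*W); each row would feed one SSM scan.
    """
    row_major = x.reshape(-1)        # left-to-right, top-to-bottom
    col_major = x.T.reshape(-1)      # top-to-bottom, left-to-right
    return np.stack([
        row_major,
        row_major[::-1],             # reversed row-major
        col_major,
        col_major[::-1],             # reversed column-major
    ])

grid = np.arange(6).reshape(2, 3)    # [[0, 1, 2], [3, 4, 5]]
scans = cross_scan(grid)
print(scans.shape)                   # (4, 6)
```

Many of the scan-strategy innovations the survey catalogs amount to choosing better or input-adaptive orderings than these four fixed ones.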
Subjects: Computer Vision and Pattern Recognition (cs.CV)
Cite as: arXiv:2505.00630 [cs.CV]
  (or arXiv:2505.00630v1 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2505.00630
arXiv-issued DOI via DataCite

Submission history

From: Muyi Bao [view email]
[v1] Thu, 1 May 2025 16:07:51 UTC (2,688 KB)
[v2] Sat, 3 May 2025 09:38:12 UTC (2,785 KB)