arXiv:2506.17977 (cs)
[Submitted on 22 Jun 2025]

Title: SliceGX: Layer-wise GNN Explanation with Model-slicing


Authors:Tingting Zhu, Tingyang Chen, Yinghui Wu, Arijit Khan, Xiangyu Ke
Abstract: Ensuring the trustworthiness of graph neural networks (GNNs) as black-box models requires effective explanation methods. Existing GNN explanations typically apply input perturbations to identify subgraphs that are responsible for the final output of a GNN. However, such approaches lack finer-grained, layer-wise analysis of how intermediate representations contribute to the final result, a capability that is crucial for model diagnosis and architecture optimization. This paper introduces SliceGX, a novel GNN explanation approach that generates explanations at specific GNN layers in a progressive manner. Given a GNN M, a set of selected intermediate layers, and a target layer, SliceGX automatically segments M into layer blocks ("model slices") and discovers high-quality explanatory subgraphs in each layer block that clarify the output of M at the targeted layer. Although finding such layer-wise explanations is computationally challenging, we develop efficient algorithms and optimization techniques that incrementally generate and maintain these subgraphs with provable approximation guarantees. Additionally, SliceGX offers a SPARQL-like query interface, providing declarative access and search capabilities for the generated explanations. Through experiments on large real-world graphs and representative GNN architectures, we verify the effectiveness and efficiency of SliceGX, and illustrate its practical utility in supporting model debugging.
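The "model slice" segmentation described in the abstract can be sketched in plain Python. This is a minimal illustrative sketch, not the authors' actual API: it assumes a GNN can be modeled as a sequence of layer functions, and shows how selecting intermediate layers partitions the model into contiguous blocks whose intermediate outputs can then be explained independently.

```python
# Hypothetical sketch of "model slicing" (illustrative names, not the
# paper's implementation): a GNN is treated as an ordered list of layer
# functions; selected layer indices cut it into contiguous layer blocks.

def slice_model(layers, selected):
    """Segment `layers` (a list of callables) at the selected layer indices.

    Block i spans the layers up to and including the i-th selected index,
    so composing all blocks reproduces the model up to the last selected layer.
    """
    blocks, start = [], 0
    for end in sorted(selected):
        blocks.append(layers[start:end + 1])
        start = end + 1
    return blocks

def run_block(block, h):
    """Apply one layer block to an intermediate representation."""
    for layer in block:
        h = layer(h)
    return h

# Toy 3-layer "model" acting on a scalar feature; slice at layers 0 and 2.
layers = [lambda h: h + 1, lambda h: h * 2, lambda h: h - 3]
blocks = slice_model(layers, selected=[0, 2])

h, outputs = 1, []
for block in blocks:
    h = run_block(block, h)
    outputs.append(h)  # intermediate output after each block
# outputs[0] is the representation after layer 0; outputs[-1] is the final output
```

In SliceGX, each such intermediate output would be the anchor for a layer-wise explanatory subgraph, rather than only explaining the final prediction.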
Subjects: Machine Learning (cs.LG); Databases (cs.DB)
Cite as: arXiv:2506.17977 [cs.LG]
  (or arXiv:2506.17977v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2506.17977
arXiv-issued DOI via DataCite

Submission history

From: Tingyang Chen
[v1] Sun, 22 Jun 2025 10:28:46 UTC (1,340 KB)
