
Computer Science > Machine Learning

arXiv:2506.05814 (cs)
[Submitted on 6 Jun 2025]

Title: Positional Encoding meets Persistent Homology on Graphs


Authors: Yogesh Verma, Amauri H. Souza, Vikas Garg
Abstract: The local inductive bias of message-passing graph neural networks (GNNs) hampers their ability to exploit key structural information (e.g., connectivity and cycles). Positional encoding (PE) and Persistent Homology (PH) have emerged as two promising approaches to mitigate this issue. PE schemes endow GNNs with location-aware features, while PH methods enhance GNNs with multiresolution topological features. However, a rigorous theoretical characterization of the relative merits and shortcomings of PE and PH has remained elusive. We bridge this gap by establishing that neither paradigm is more expressive than the other, providing novel constructions where one approach fails but the other succeeds. Our insights inform the design of a novel learnable method, PiPE (Persistence-informed Positional Encoding), which is provably more expressive than both PH and PE. PiPE demonstrates strong performance across a variety of tasks (e.g., molecule property prediction, graph classification, and out-of-distribution generalization), thereby advancing the frontiers of graph representation learning. Code is available at https://github.com/Aalto-QuML/PIPE.
Comments: Accepted at ICML 2025
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Emerging Technologies (cs.ET); Social and Information Networks (cs.SI)
Cite as: arXiv:2506.05814 [cs.LG]
  (or arXiv:2506.05814v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2506.05814
arXiv-issued DOI via DataCite

Submission history

From: Yogesh Verma
[v1] Fri, 6 Jun 2025 07:22:17 UTC (1,476 KB)
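
To make the contrast in the abstract concrete, below is a minimal, self-contained sketch of the two ingredients it discusses, computed on a toy graph with networkx and numpy. This is not the authors' PiPE implementation (that is at the GitHub link in the abstract); the function names, the Laplacian-eigenvector encoding, and the degree-based sublevel filtration are illustrative choices, not details taken from the paper.

```python
# Illustrative sketch only: one common positional-encoding scheme (Laplacian
# eigenvectors) and a 0-dimensional persistent-homology computation over a
# degree-based filtration. Both are stand-ins for the PE/PH families the
# abstract refers to, not the paper's method.
import networkx as nx
import numpy as np

def laplacian_pe(G, k=2):
    """Per-node positional encoding: the k nontrivial Laplacian eigenvectors."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    _, eigvecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]               # drop the trivial constant eigenvector

def degree_sublevel_ph0(G):
    """0-dim persistence pairs of a degree-based sublevel filtration (union-find)."""
    f = dict(G.degree())                     # filtration value of a node = its degree
    parent = {v: v for v in G}
    birth = {v: f[v] for v in G}

    def find(v):                             # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    pairs = []
    # An edge appears at the max of its endpoints' filtration values.
    for u, v in sorted(G.edges, key=lambda e: max(f[e[0]], f[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                         # edge closes a cycle: no 0-dim event
        t = max(f[u], f[v])
        young, old = (ru, rv) if birth[ru] >= birth[rv] else (rv, ru)
        pairs.append((birth[young], t))      # elder rule: younger component dies
        parent[young] = old
    # Components that never die are essential classes (infinite persistence).
    pairs.extend((birth[v], float("inf")) for v in G if find(v) == v)
    return pairs

G = nx.lollipop_graph(4, 3)                  # a 4-clique with a 3-node tail
print(laplacian_pe(G, k=2).shape)            # (7, 2): two PE features per node
print(degree_sublevel_ph0(G))                # birth/death pairs; one essential pair
```

On this toy graph, the positional encoding assigns clique and tail nodes distinct location-aware features, while the persistence pairs summarize how degree-ordered components merge; per the abstract, PiPE is a learnable method combining such signals that is provably more expressive than either view alone.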