
High Energy Physics - Experiment

arXiv:2510.07594 (hep-ex)
[Submitted on 8 Oct 2025]

Title: Locality-Sensitive Hashing-Based Efficient Point Transformer for Charged Particle Reconstruction


Authors: Shitij Govil, Jack P. Rodgers, Yuan-Tang Chou, Siqi Miao, Amit Saha, Advaith Anand, Kilian Lieret, Gage DeZoort, Mia Liu, Javier Duarte, Pan Li, Shih-Chieh Hsu
Abstract: Charged particle track reconstruction is a foundational task in collider experiments and the main computational bottleneck in particle reconstruction. Graph neural networks (GNNs) have shown strong performance for this problem, but costly graph construction, irregular computations, and random memory access patterns substantially limit their throughput. The recently proposed Hashing-based Efficient Point Transformer (HEPT) offers a theoretically guaranteed near-linear complexity for large point cloud processing via locality-sensitive hashing (LSH) in attention computations; however, its evaluations have largely focused on embedding quality, and the object condensation pipeline on which HEPT relies requires a post-hoc clustering step (e.g., DBScan) that can dominate runtime. In this work, we make two contributions. First, we present a unified, fair evaluation of physics tracking performance for HEPT and a representative GNN-based pipeline under the same dataset and metrics. Second, we introduce HEPTv2 by extending HEPT with a lightweight decoder that eliminates the clustering stage and directly predicts track assignments. This modification preserves HEPT's regular, hardware-friendly computations while enabling ultra-fast end-to-end inference. On the TrackML dataset, optimized HEPTv2 achieves approximately 28 ms per event on an A100 while maintaining competitive tracking efficiency. These results position HEPTv2 as a practical, scalable alternative to GNN-based pipelines for fast tracking.
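The key efficiency idea summarized in the abstract — using locality-sensitive hashing so that attention is only computed among nearby points instead of over all pairs — can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's HEPT kernel: it uses simple random-hyperplane LSH and a single shared Q/K/V, whereas the actual model's hash construction and attention details are given in the HEPT paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_bucket_attention(x, n_hashes=8):
    """Toy LSH-bucketed attention over a point cloud x of shape (N, d).

    1. Hash each point with random hyperplanes: its bucket id is the
       sign pattern of n_hashes random projections.
    2. Run softmax attention only within each bucket, so cost scales
       with bucket sizes rather than N^2.
    """
    n, d = x.shape
    planes = rng.standard_normal((d, n_hashes))
    # Sign pattern -> integer bucket id in [0, 2**n_hashes).
    codes = (x @ planes > 0) @ (1 << np.arange(n_hashes))
    out = np.zeros_like(x)
    for b in np.unique(codes):
        idx = np.where(codes == b)[0]
        q = k = v = x[idx]  # shared Q/K/V for simplicity
        scores = (q @ k.T) / np.sqrt(d)
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        out[idx] = w @ v
    return out

# Stand-in for detector hits: 1000 points with 16 features each.
hits = rng.standard_normal((1000, 16)).astype(np.float32)
feats = lsh_bucket_attention(hits)
print(feats.shape)  # (1000, 16)
```

Nearby points (which tend to land in the same bucket) attend to each other, while distant pairs are never compared — the source of the near-linear complexity the abstract refers to. The second contribution, replacing post-hoc DBScan clustering with a decoder that predicts track assignments directly, removes the remaining non-regular step from inference.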
Comments: Accepted to NeurIPS 2025 Machine Learning and the Physical Sciences Workshop
Subjects: High Energy Physics - Experiment (hep-ex); Machine Learning (cs.LG)
Cite as: arXiv:2510.07594 [hep-ex]
  (or arXiv:2510.07594v1 [hep-ex] for this version)
  https://doi.org/10.48550/arXiv.2510.07594
arXiv-issued DOI via DataCite

Submission history

From: Siqi Miao [view email]
[v1] Wed, 8 Oct 2025 22:36:26 UTC (87 KB)