Electrical Engineering and Systems Science > Image and Video Processing

arXiv:2502.03360 (eess)
[Submitted on 5 Feb 2025]

Title: A Beam's Eye View to Fluence Maps 3D Network for Ultra Fast VMAT Radiotherapy Planning

Authors: Simon Arberet, Florin C. Ghesu, Riqiang Gao, Martin Kraus, Jonathan Sackett, Esa Kuusela, Ali Kamen
Abstract: Volumetric Modulated Arc Therapy (VMAT) revolutionizes cancer treatment by precisely delivering radiation while sparing healthy tissues. Fluence map generation, a crucial step in VMAT planning, traditionally involves complex, iterative, and thus time-consuming processes; the resulting fluence maps are subsequently used for leaf sequencing. The deep-learning approach presented in this article aims to expedite this step by predicting fluence maps directly from patient data. We developed a 3D network trained in a supervised way using a combination of L1 and L2 losses on RT plans generated by Eclipse and on plans from the REQUITE dataset, taking the RT dose map as input and the fluence maps computed from the corresponding RT plans as target. Our network jointly predicts the 180 fluence maps corresponding to the 180 control points (CPs) of single-arc VMAT plans. To aid the network, we pre-process the input dose by projecting the 3D dose map onto the beam's eye view (BEV) of each of the 180 CPs, in the same coordinate system as the fluence maps. We generated over 2000 VMAT plans using Eclipse to scale up the dataset. Additionally, we evaluated various network architectures and analyzed the impact of increasing the dataset size. Performance is measured on a validation dataset in the 2D fluence-map domain using image metrics (PSNR, SSIM) and in the 3D dose domain using dose-volume histograms (DVHs). Network inference, excluding data loading and processing, takes less than 20 ms. Using our proposed 3D network architecture together with the Eclipse-augmented dataset improves fluence-map reconstruction by approximately 8 dB in PSNR compared to a U-Net architecture trained on the original REQUITE dataset. The resulting DVHs are very close to those of the input target dose.
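
The abstract names two concrete computational ingredients: projecting the 3D dose map onto the beam's eye view of each control point, and training with a combined L1 and L2 loss on the fluence maps. The sketch below illustrates both under simplifying assumptions (a parallel-beam approximation with rotation about a single gantry axis, a hypothetical alpha weighting between the two losses, and NumPy/SciPy/PyTorch as tooling); it is not the authors' implementation.

    # Minimal sketch of the two ingredients named in the abstract; all names,
    # tensor shapes, and the alpha weighting are assumptions, not the paper's code.
    import numpy as np
    import torch
    import torch.nn.functional as F
    from scipy.ndimage import rotate

    def bev_projections(dose, n_cp=180):
        """Project a 3D dose volume (z, y, x) onto the beam's eye view of each of
        n_cp control points, approximated here as parallel-beam projections obtained
        by rotating about the z (gantry) axis and integrating along the beam axis."""
        angles = np.linspace(0.0, 360.0, n_cp, endpoint=False)
        projs = []
        for a in angles:
            # Rotate in the (y, x) plane, then sum along the rotated y axis.
            rot = rotate(dose, angle=a, axes=(1, 2), reshape=False, order=1)
            projs.append(rot.sum(axis=1))
        return np.stack(projs, axis=0)  # shape: (n_cp, z, x)

    def combined_l1_l2_loss(pred, target, alpha=0.5):
        """Weighted sum of L1 and L2 losses between predicted and target fluence
        maps of shape (batch, 180, H, W); the 0.5 weighting is an assumption."""
        return alpha * F.l1_loss(pred, target) + (1.0 - alpha) * F.mse_loss(pred, target)

    # Hypothetical usage: a 3D network maps the stacked BEV projections to the
    # 180 fluence maps, one per control point of a single-arc VMAT plan.
    # bev = torch.from_numpy(bev_projections(dose_volume)).unsqueeze(0).unsqueeze(0)
    # loss = combined_l1_l2_loss(model(bev.float()), target_fluence)
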
Subjects: Image and Video Processing (eess.IV); Artificial Intelligence (cs.AI); Medical Physics (physics.med-ph)
Cite as: arXiv:2502.03360 [eess.IV]
  (or arXiv:2502.03360v1 [eess.IV] for this version)
  https://doi.org/10.48550/arXiv.2502.03360
arXiv-issued DOI via DataCite

Submission history

From: Simon Arberet
[v1] Wed, 5 Feb 2025 16:56:17 UTC (1,809 KB)