Physics > Computational Physics

arXiv:1911.13029v2 (physics)
[Submitted on 29 Nov 2019 (v1), last revised 2 Dec 2019 (this version, v2)]

Title: Progressive-Growing of Generative Adversarial Networks for Metasurface Optimization


Authors: Fufang Wen, Jiaqi Jiang, Jonathan A. Fan
Abstract: Generative adversarial networks, which can generate metasurfaces based on a training set of high performance device layouts, have the potential to significantly reduce the computational cost of the metasurface design process. However, basic GAN architectures are unable to fully capture the detailed features of topologically complex metasurfaces, and generated devices therefore require additional computationally expensive design refinement. In this Letter, we show that GANs can better learn spatially fine features from high-resolution training data by progressively growing their network architecture and training set. Our results indicate that with this training methodology, the best generated devices have performances that compare well with the best devices produced by gradient-based topology optimization, thereby eliminating the need for additional design refinement. We envision that this network training method can generalize to other physical systems where device performance is strongly correlated with fine geometric structuring.
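
The abstract describes the training scheme only at a high level. As a rough illustration of what progressive growing typically looks like in practice, the following is a minimal sketch assuming a PyTorch setup: a GAN is first trained at a coarse resolution, then layers are repeatedly added to the generator and discriminator while the training images are supplied at correspondingly higher resolution. The layer widths, resolutions, and function names are hypothetical, the smooth fade-in of newly added layers used in standard progressive GAN training is omitted, and this is not the authors' implementation.

# Minimal sketch, assuming a PyTorch setup; all names and sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM = 64   # size of the generator's input noise vector (assumed)
BASE_RES = 16     # starting resolution of the device layouts (assumed)
CHANNELS = 32     # feature width, kept constant across stages for brevity

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        # Project the latent vector to a BASE_RES x BASE_RES feature map.
        self.stem = nn.Sequential(
            nn.Linear(LATENT_DIM, CHANNELS * BASE_RES * BASE_RES),
            nn.LeakyReLU(0.2),
        )
        self.blocks = nn.ModuleList()             # one block per growth step
        self.to_img = nn.Conv2d(CHANNELS, 1, 1)   # features -> 1-channel layout

    def grow(self):
        # Append an upsample + conv block, doubling the output resolution.
        self.blocks.append(nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(CHANNELS, CHANNELS, 3, padding=1),
            nn.LeakyReLU(0.2),
        ))

    def forward(self, z):
        x = self.stem(z).view(-1, CHANNELS, BASE_RES, BASE_RES)
        for block in self.blocks:
            x = block(x)
        return torch.tanh(self.to_img(x))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.from_img = nn.Conv2d(1, CHANNELS, 1)
        self.blocks = nn.ModuleList()
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(CHANNELS * BASE_RES * BASE_RES, 1),
        )

    def grow(self):
        # Mirror the generator: each new block halves the input resolution.
        self.blocks.insert(0, nn.Sequential(
            nn.Conv2d(CHANNELS, CHANNELS, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.AvgPool2d(2),
        ))

    def forward(self, img):
        x = F.leaky_relu(self.from_img(img), 0.2)
        for block in self.blocks:
            x = block(x)
        return self.head(x)

def progressive_training(train_images, stages=(16, 32, 64, 128)):
    # `train_images`: full-resolution layouts, shape (N, 1, 128, 128) (assumed).
    G, D = Generator(), Discriminator()
    for res in stages:
        if res > BASE_RES:
            G.grow()
            D.grow()
        # Downsample the high-resolution training set to the current stage.
        data = F.interpolate(train_images, size=(res, res),
                             mode="bilinear", align_corners=False)
        # ...run a standard GAN training loop on `data` with G and D at this
        # resolution before growing to the next stage.
    return G

In this sketch, each call to grow() doubles the working resolution, so the generator that finishes the final stage emits full-resolution layouts directly; only the training loop itself (losses, optimizers, fade-in schedule) is left out.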
Subjects: Computational Physics (physics.comp-ph); Machine Learning (cs.LG); Image and Video Processing (eess.IV); Applied Physics (physics.app-ph)
Cite as: arXiv:1911.13029 [physics.comp-ph]
  (or arXiv:1911.13029v2 [physics.comp-ph] for this version)
  https://doi.org/10.48550/arXiv.1911.13029
arXiv-issued DOI via DataCite

Submission history

From: Jiaqi Jiang
[v1] Fri, 29 Nov 2019 10:05:54 UTC (4,459 KB)
[v2] Mon, 2 Dec 2019 07:35:41 UTC (4,325 KB)