Quantitative Finance > Computational Finance

arXiv:2506.02796 (q-fin)
[Submitted on 3 Jun 2025]

Title: Deep Learning Enhanced Multivariate GARCH

Authors: Haoyuan Wang, Chen Liu, Minh-Ngoc Tran, Chao Wang
Abstract: This paper introduces a novel multivariate volatility modeling framework, named Long Short-Term Memory enhanced BEKK (LSTM-BEKK), that integrates deep learning into multivariate GARCH processes. By combining the flexibility of recurrent neural networks with the econometric structure of BEKK models, our approach is designed to better capture nonlinear, dynamic, and high-dimensional dependence structures in financial return data. The proposed model addresses key limitations of traditional multivariate GARCH-based methods, particularly in capturing persistent volatility clustering and asymmetric co-movement across assets. Leveraging the data-driven nature of LSTMs, the framework adapts effectively to time-varying market conditions, offering improved robustness and forecasting performance. Empirical results across multiple equity markets confirm that the LSTM-BEKK model achieves superior performance in terms of out-of-sample portfolio risk forecasting, while maintaining the interpretability of the BEKK model. These findings highlight the potential of hybrid econometric-deep learning models in advancing financial risk management and multivariate volatility forecasting.
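The abstract does not spell out the exact LSTM-BEKK specification. As a rough illustration of the idea it describes (a recurrent network feeding into a BEKK-type covariance recursion), the following minimal PyTorch sketch lets an LSTM produce a time-varying lower-triangular intercept matrix C_t while keeping static BEKK-style matrices A and B. All names (LSTMBEKKSketch, to_chol, the choice of which term the LSTM drives) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class LSTMBEKKSketch(nn.Module):
    """Hypothetical sketch of an LSTM-driven BEKK(1,1)-style recursion:
    H_t = C_t C_t' + A eps_{t-1} eps_{t-1}' A' + B H_{t-1} B'."""

    def __init__(self, n_assets: int, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_assets, hidden_size=hidden_size, batch_first=True)
        # Map the LSTM state at time t to the n(n+1)/2 free entries of a
        # lower-triangular intercept matrix C_t.
        self.to_chol = nn.Linear(hidden_size, n_assets * (n_assets + 1) // 2)
        # Static BEKK-style parameter matrices (left unconstrained for brevity;
        # a full implementation would enforce stationarity conditions).
        self.A = nn.Parameter(0.2 * torch.eye(n_assets))
        self.B = nn.Parameter(0.9 * torch.eye(n_assets))

    def forward(self, returns: torch.Tensor) -> torch.Tensor:
        # returns: (T, n) de-meaned returns -> (T, n, n) conditional covariances
        T, n = returns.shape
        lstm_out, _ = self.lstm(returns.unsqueeze(0))      # (1, T, hidden)
        chol_entries = self.to_chol(lstm_out.squeeze(0))   # (T, n(n+1)/2)
        tril_idx = torch.tril_indices(n, n)
        H_prev = returns.T @ returns / T                   # start from the sample covariance
        covariances = []
        for t in range(T):
            C_t = torch.zeros(n, n)
            C_t[tril_idx[0], tril_idx[1]] = chol_entries[t]
            eps = returns[t - 1:t].T if t > 0 else torch.zeros(n, 1)  # lagged shock
            H_t = (C_t @ C_t.T
                   + self.A @ (eps @ eps.T) @ self.A.T
                   + self.B @ H_prev @ self.B.T)
            covariances.append(H_t)
            H_prev = H_t
        return torch.stack(covariances)                    # (T, n, n)

In a setup like this, the LSTM weights and the BEKK matrices would typically be estimated jointly by minimizing the Gaussian negative log-likelihood of the returns given the sequence of H_t.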
Subjects: Computational Finance (q-fin.CP); Artificial Intelligence (cs.AI); Econometrics (econ.EM)
Cite as: arXiv:2506.02796 [q-fin.CP]
  (or arXiv:2506.02796v1 [q-fin.CP] for this version)
  https://doi.org/10.48550/arXiv.2506.02796
arXiv-issued DOI via DataCite

Submission history

From: Chen Liu
[v1] Tue, 3 Jun 2025 12:22:57 UTC (993 KB)