

arXiv:1911.00936 (stat)
[Submitted on 3 Nov 2019]

Title: Enhancing VAEs for Collaborative Filtering: Flexible Priors & Gating Mechanisms


Authors: Daeryong Kim, Bongwon Suh
Abstract: Neural-network-based models for collaborative filtering (CF) have recently begun to gain attention. One branch of research uses deep generative models to capture user preferences, where variational autoencoders (VAEs) were shown to produce state-of-the-art results. However, the current VAE for CF has some potentially problematic characteristics. The first is the overly simplistic prior that VAEs use for learning the latent representations of user preferences. The other is the model's inability to learn deeper representations with more than one hidden layer per network. Our goal is to incorporate appropriate techniques to mitigate these problems and further improve recommendation performance. Our work is the first to apply flexible priors to collaborative filtering; we show that the simple priors of original VAEs may be too restrictive to fully model user preferences, and that a more flexible prior yields significant gains. We experiment with the VampPrior, originally proposed for image generation, to examine the effect of flexible priors in CF. We also show that VampPriors coupled with gating mechanisms outperform state-of-the-art results, including the Variational Autoencoder for Collaborative Filtering, by meaningful margins on two popular benchmark datasets (MovieLens and Netflix).
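The two ingredients named in the abstract can be sketched concretely. Below is a minimal, framework-free NumPy illustration; the linear encoder, toy dimensions, and all names are illustrative assumptions, not the paper's actual architecture. The VampPrior is a uniform mixture of the variational posterior evaluated at K learnable pseudo-inputs, and the gating mechanism is a GLU-style layer in which a sigmoid gate modulates a linear path.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_gaussian(z, mu, logvar):
    # log N(z; mu, diag(exp(logvar))), summed over latent dimensions.
    return -0.5 * np.sum(logvar + np.log(2 * np.pi)
                         + (z - mu) ** 2 / np.exp(logvar), axis=-1)

def encoder(x, W_mu, W_logvar):
    # Hypothetical linear encoder q(z|x); the paper uses deeper networks.
    return x @ W_mu, x @ W_logvar

def vamp_prior_logpdf(z, pseudo_inputs, W_mu, W_logvar):
    # VampPrior: p(z) = (1/K) * sum_k q(z | u_k), where the u_k are
    # learnable pseudo-inputs passed through the same encoder.
    mu, logvar = encoder(pseudo_inputs, W_mu, W_logvar)  # (K, D)
    comps = log_gaussian(z[None, :], mu, logvar)         # (K,)
    return np.logaddexp.reduce(comps) - np.log(len(pseudo_inputs))

def gated_layer(h, W, V, b, c):
    # GLU-style gating: an elementwise sigmoid gate modulates the
    # linear path, which helps when stacking more hidden layers.
    return (h @ W + b) * (1.0 / (1.0 + np.exp(-(h @ V + c))))

# Toy dimensions: 6 items, 2-dim latent space, K = 3 pseudo-inputs.
n_items, d, K = 6, 2, 3
W_mu = rng.normal(size=(n_items, d))
W_logvar = rng.normal(size=(n_items, d)) * 0.1
pseudo = rng.random((K, n_items))  # learnable in the real model
z = rng.normal(size=d)
print(vamp_prior_logpdf(z, pseudo, W_mu, W_logvar))  # scalar log-density

W, V = rng.normal(size=(d, d)), rng.normal(size=(d, d))
b, c = np.zeros(d), np.zeros(d)
print(gated_layer(z, W, V, b, c))  # gated 2-dim hidden activation
```

In training, the prior enters the ELBO through the KL term, so the standard-normal log-density is simply replaced by `vamp_prior_logpdf`, and the pseudo-inputs are optimized jointly with the network weights.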
Subjects: Machine Learning (stat.ML); Information Retrieval (cs.IR); Machine Learning (cs.LG)
Cite as: arXiv:1911.00936 [stat.ML]
  (or arXiv:1911.00936v1 [stat.ML] for this version)
  https://doi.org/10.48550/arXiv.1911.00936
Journal reference: In Thirteenth ACM Conference on Recommender Systems (RecSys '19), September 16-20, 2019, Copenhagen, Denmark. ACM, New York, NY, USA, 5 pages
Related DOI: https://doi.org/10.1145/3298689.3347015

Submission history

From: Daeryong Kim
[v1] Sun, 3 Nov 2019 17:42:57 UTC (287 KB)