Computer Science > Information Retrieval

arXiv:2412.02588 (cs)
[Submitted on 3 Dec 2024]

Title: Explainable CTR Prediction via LLM Reasoning


Authors: Xiaohan Yu, Li Zhang, Chong Chen
Abstract: Recommender systems have become integral to modern user experiences but lack transparency in their decision-making processes. Existing explainable recommendation methods are hindered by their reliance on a post-hoc paradigm, in which explanation generators are trained independently of the underlying recommender models. This paradigm requires substantial human effort for data construction and raises concerns about explanation reliability. In this paper, we present ExpCTR, a novel framework that integrates large language model (LLM)-based explanation generation directly into the CTR prediction process. Inspired by recent advances in reinforcement learning, we employ two carefully designed reward mechanisms: LC alignment, which ensures explanations reflect user intentions, and IC alignment, which maintains consistency with traditional ID-based CTR models. Our approach incorporates an efficient training paradigm built on LoRA and a three-stage iterative process. ExpCTR circumvents the need for extensive explanation datasets while fostering synergy between CTR prediction and explanation generation. Experimental results demonstrate that ExpCTR significantly enhances both recommendation accuracy and interpretability across three real-world datasets.
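The abstract describes the two reward signals only at a high level. As a rough illustration, the following is a minimal Python sketch of how an LC-alignment term (explanation/user-intention similarity) and an IC-alignment term (agreement with a frozen ID-based CTR model's prediction) might be combined into a single scalar reward for reinforcement-learning fine-tuning. Every name here (the embedding inputs, `p_llm`, `p_id_model`, the weight `alpha`) is a hypothetical stand-in, not the paper's actual formulation.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def lc_alignment_reward(expl_emb: np.ndarray, intent_emb: np.ndarray) -> float:
    # LC alignment (assumed form): reward explanations whose embedding lies
    # close to an embedding of the user's inferred intention.
    return cosine(expl_emb, intent_emb)

def ic_alignment_reward(p_llm: float, p_id_model: float) -> float:
    # IC alignment (assumed form): penalize disagreement between the
    # explanation-conditioned click probability and the prediction of a
    # frozen, traditional ID-based CTR model.
    return -abs(p_llm - p_id_model)

def total_reward(expl_emb, intent_emb, p_llm, p_id_model, alpha=0.5):
    # Hypothetical weighted combination; alpha trades off the two terms.
    return alpha * lc_alignment_reward(expl_emb, intent_emb) + \
           (1 - alpha) * ic_alignment_reward(p_llm, p_id_model)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    expl_emb, intent_emb = rng.normal(size=16), rng.normal(size=16)
    print(total_reward(expl_emb, intent_emb, p_llm=0.72, p_id_model=0.65))
```

A scalar reward of this shape could then drive a standard policy-gradient update of LoRA adapter weights on the explanation-generating LLM while the base model and the ID-based CTR model stay fixed; the three-stage iterative schedule mentioned in the abstract is not reproduced here.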
Comments: WSDM 2025
Subjects: Information Retrieval (cs.IR); Artificial Intelligence (cs.AI)
Cite as: arXiv:2412.02588 [cs.IR]
  (or arXiv:2412.02588v1 [cs.IR] for this version)
  https://doi.org/10.48550/arXiv.2412.02588
arXiv-issued DOI via DataCite

Submission history

From: Xiaohan Yu
[v1] Tue, 3 Dec 2024 17:17:27 UTC (1,405 KB)