arXiv:2510.19245 (cs)
Computer Science > Computers and Society

[Submitted on 22 Oct 2025]

Title: See, Think, Act: Online Shopper Behavior Simulation with VLM Agents


Authors:Yimeng Zhang, Jiri Gesi, Ran Xue, Tian Wang, Ziyi Wang, Yuxuan Lu, Sinong Zhan, Huimin Zeng, Qingjun Cui, Yufan Guo, Jing Huang, Mubarak Shah, Dakuo Wang
Abstract: Large language models (LLMs) have recently demonstrated strong potential in simulating online shopper behavior. Prior work has improved action prediction by applying supervised fine-tuning (SFT) on action traces with LLM-generated rationales, and by leveraging reinforcement learning (RL) to further enhance reasoning capabilities. Despite these advances, current approaches rely on text-based inputs and overlook the essential role of visual perception in shaping human decision-making during web GUI interactions. In this paper, we investigate the integration of visual information, specifically webpage screenshots, into behavior simulation via vision-language models (VLMs), leveraging the OPeRA dataset. By grounding agent decision-making in both textual and visual modalities, we aim to narrow the gap between synthetic agents and real-world users, thereby enabling more cognitively aligned simulations of online shopping behavior. Specifically, we employ SFT for joint action prediction and rationale generation, conditioning on the full interaction context, which comprises action history, past HTML observations, and the current webpage screenshot. To further enhance reasoning capabilities, we integrate RL with a hierarchical reward structure, scaled by a difficulty-aware factor that prioritizes challenging decision points. Empirically, our studies show that incorporating visual grounding yields substantial gains: the combination of text and image inputs improves exact match accuracy by more than 6% over text-only inputs. These results indicate that multi-modal grounding not only boosts predictive accuracy but also enhances simulation fidelity in visually complex environments, capturing nuances of human attention and decision-making that text-only agents often miss. Finally, we revisit the design space of behavior simulation frameworks, identify key methodological limitations, and propose future research directions toward building efficient and effective human behavior simulators.
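The hierarchical, difficulty-scaled reward described in the abstract can be illustrated with a minimal sketch. The two-level decomposition (action type first, then target and argument), the specific weights, and the per-example difficulty score in [0, 1] below are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a hierarchical, difficulty-aware reward for action prediction.
# Assumptions (not from the paper): a two-level reward (action kind, then
# target/argument match) and a per-example difficulty score in [0, 1] that scales
# the reward so harder decision points contribute more to the RL objective.

from dataclasses import dataclass


@dataclass
class Action:
    kind: str        # e.g. "click", "type_text", "scroll"
    target: str      # identifier of the element the action operates on
    value: str = ""  # free-form argument, e.g. text typed into a search box


def hierarchical_reward(pred: Action, gold: Action, difficulty: float,
                        w_kind: float = 0.4, w_target: float = 0.4,
                        w_value: float = 0.2, alpha: float = 1.0) -> float:
    """Return a reward in [0, 1 + alpha], scaled by a difficulty-aware factor."""
    # Level 1: did the agent choose the right kind of action?
    r = w_kind * float(pred.kind == gold.kind)
    # Level 2: credit the target/argument only if the action kind already matches.
    if pred.kind == gold.kind:
        r += w_target * float(pred.target == gold.target)
        r += w_value * float(pred.value == gold.value)
    # Difficulty-aware scaling: decision points with difficulty near 1 are
    # up-weighted so the policy focuses on the challenging steps.
    return r * (1.0 + alpha * difficulty)


if __name__ == "__main__":
    gold = Action(kind="click", target="add-to-cart-button")
    pred = Action(kind="click", target="add-to-cart-button")
    print(hierarchical_reward(pred, gold, difficulty=0.8))  # prints 1.8
```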
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC); Machine Learning (cs.LG); Multimedia (cs.MM)
Cite as: arXiv:2510.19245 [cs.CY]
  (or arXiv:2510.19245v1 [cs.CY] for this version)
  https://doi.org/10.48550/arXiv.2510.19245
arXiv-issued DOI via DataCite

Submission history

From: Yimeng Zhang
[v1] Wed, 22 Oct 2025 05:07:14 UTC (1,489 KB)