Computer Science > Computers and Society

arXiv:2506.09160 (cs)
[Submitted on 10 Jun 2025 (v1), last revised 24 Jun 2025 (this version, v3)]

Title: Understanding Human-AI Trust in Education

Authors: Griffin Pitts, Sanaz Motamedi
Abstract: As AI chatbots become increasingly integrated in education, students are turning to these systems for guidance, feedback, and information. However, the anthropomorphic characteristics of these chatbots create ambiguity regarding whether students develop trust toward them as they would a human peer or instructor, based in interpersonal trust, or as they would any other piece of technology, based in technology trust. This ambiguity presents theoretical challenges, as interpersonal trust models may inappropriately ascribe human intentionality and morality to AI, while technology trust models were developed for non-social technologies, leaving their applicability to anthropomorphic systems unclear. To address this gap, we investigate how human-like and system-like trusting beliefs comparatively influence students' perceived enjoyment, trusting intention, behavioral intention to use, and perceived usefulness of an AI chatbot - factors associated with students' engagement and learning outcomes. Through partial least squares structural equation modeling, we found that human-like and system-like trust significantly influenced student perceptions, with varied effects. Human-like trust more strongly predicted trusting intention, while system-like trust better predicted behavioral intention and perceived usefulness. Both had similar effects on perceived enjoyment. Given the partial explanatory power of each type of trust, we propose that students develop a distinct form of trust with AI chatbots (human-AI trust) that differs from human-human and human-technology models of trust. Our findings highlight the need for new theoretical frameworks specific to human-AI trust and offer practical insights for fostering appropriately calibrated trust, which is critical for the effective adoption and pedagogical impact of AI in education.
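The structural model implied by the abstract has two exogenous constructs (human-like and system-like trusting beliefs) predicting four outcomes (perceived enjoyment, trusting intention, behavioral intention to use, perceived usefulness). As a rough illustration of what estimating such path coefficients looks like computationally, here is a minimal, self-contained Python sketch. It substitutes a composite-score OLS approximation for the authors' actual PLS-SEM estimation (which is typically run in dedicated tools such as SmartPLS or R's seminr), and all data, indicator counts, and effect sizes below are synthetic assumptions, not the paper's results.

```python
# Simplified composite-score approximation of the abstract's structural model.
# NOT the authors' PLS-SEM procedure: each latent construct is proxied by the
# mean of standardized synthetic indicators, and each path is estimated by OLS.
# Construct names follow the abstract; everything else is fabricated.
import numpy as np

rng = np.random.default_rng(0)
n = 300  # hypothetical sample size

def composite(n_items: int, driver: np.ndarray) -> np.ndarray:
    """Mean of standardized indicators loading on a common driver (synthetic)."""
    items = driver[:, None] + rng.normal(scale=0.8, size=(len(driver), n_items))
    items = (items - items.mean(axis=0)) / items.std(axis=0)
    return items.mean(axis=1)

# Exogenous constructs: human-like vs. system-like trusting beliefs.
hl_signal = rng.normal(size=n)
sl_signal = rng.normal(size=n)
human_like = composite(4, hl_signal)
system_like = composite(4, sl_signal)

X = np.column_stack([np.ones(n), human_like, system_like])

# Endogenous constructs from the abstract; the synthetic weights merely give
# the regression a pattern to recover and do not reflect the paper's findings.
outcomes = {
    "trusting_intention":   composite(3, 0.6 * hl_signal + 0.2 * sl_signal),
    "behavioral_intention": composite(3, 0.2 * hl_signal + 0.6 * sl_signal),
    "perceived_usefulness": composite(3, 0.2 * hl_signal + 0.6 * sl_signal),
    "perceived_enjoyment":  composite(3, 0.4 * hl_signal + 0.4 * sl_signal),
}

for name, y in outcomes.items():
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS path estimates
    print(f"{name:22s} human-like={beta[1]:+.2f}  system-like={beta[2]:+.2f}")
```

Real PLS-SEM differs from this sketch in that indicator weights are estimated iteratively rather than fixed at unit weights, and significance is usually assessed by bootstrapping; the sketch only conveys the shape of the two-predictor, four-outcome path structure.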
Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI); Emerging Technologies (cs.ET); Human-Computer Interaction (cs.HC)
Cite as: arXiv:2506.09160 [cs.CY]
  (or arXiv:2506.09160v3 [cs.CY] for this version)
  https://doi.org/10.48550/arXiv.2506.09160
arXiv-issued DOI via DataCite

Submission history

From: Griffin Pitts
[v1] Tue, 10 Jun 2025 18:15:40 UTC (1,011 KB)
[v2] Thu, 12 Jun 2025 07:06:57 UTC (1,014 KB)
[v3] Tue, 24 Jun 2025 05:15:49 UTC (1,032 KB)