Computer Science > Human-Computer Interaction
[Submitted on 14 Sep 2025]
Title: Detecting AI-Assisted Cheating in Online Exams through Behavior Analytics
Abstract: AI-assisted cheating has emerged as a significant threat in the context of online exams. Advanced browser extensions now enable large language models (LLMs) to answer questions presented in online exams within seconds, thereby compromising the security of these assessments. In this study, the behaviors of students (N = 52) on an online exam platform during a proctored, face-to-face exam were analyzed using clustering methods, with the aim of identifying groups of students exhibiting suspicious behavior potentially associated with cheating. Additionally, students in different clusters were compared in terms of their exam scores. Suspicious exam behaviors in this study were defined as selecting text within the question area, right-clicking, and losing focus on the exam page. The total frequency of these behaviors performed by each student during the exam was extracted, and k-Means clustering was employed for the analysis. The findings revealed that students were classified into six clusters based on their suspicious behaviors. It was found that students in four of the six clusters, representing approximately 33% of the total sample, exhibited suspicious behaviors at varying levels. When the exam scores of these students were compared, it was observed that those who engaged in suspicious behaviors scored, on average, 30-40 points higher than those who did not. Although further research is necessary to validate these findings, this preliminary study provides significant insights into the detection of AI-assisted cheating in online exams using behavior analytics.
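The clustering approach described in the abstract can be sketched as follows. This is a minimal, illustrative reconstruction, not the authors' code: the per-student behavior counts are synthetic, the 35/17 split is an arbitrary assumption, and the `kmeans` helper is a bare-bones stand-in for a library implementation such as scikit-learn's `KMeans`. Each student is represented by a three-element vector of total frequencies of the three tracked behaviors (text selection in the question area, right-clicks, and focus losses), and the vectors are partitioned into k = 6 clusters.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Minimal k-Means via Lloyd's algorithm: returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct data points.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each student to the nearest centroid (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Hypothetical per-student totals of the three tracked behaviors:
# [text selections in question area, right-clicks, focus losses].
# The two groups below are synthetic stand-ins, not the study's data.
rng = np.random.default_rng(42)
low_activity = rng.poisson([0.5, 0.2, 1.0], size=(35, 3))
high_activity = rng.poisson([8.0, 4.0, 12.0], size=(17, 3))
X = np.vstack([low_activity, high_activity]).astype(float)  # N = 52 students

labels, centroids = kmeans(X, k=6)
print("cluster sizes:", np.bincount(labels, minlength=6))
```

Clusters with centroids near the origin correspond to students showing few or no suspicious behaviors; comparing mean exam scores across the resulting clusters is then a straightforward group-by on `labels`.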