
Computer Science > Human-Computer Interaction

arXiv:2510.17575 (cs)
[Submitted on 20 Oct 2025 (v1), last revised 21 Oct 2025 (this version, v2)]

Title: DeTAILS: Deep Thematic Analysis with Iterative LLM Support


Authors: Ansh Sharma, Karen Cochrane, James R. Wallace
Abstract: Thematic analysis is widely used in qualitative research but can be difficult to scale because of its iterative, interpretive demands. We introduce DeTAILS, a toolkit that integrates large language model (LLM) assistance into a workflow inspired by Braun and Clarke's thematic analysis framework. DeTAILS supports researchers in generating and refining codes, reviewing clusters, and synthesizing themes through interactive feedback loops designed to preserve analytic agency. We evaluated the system with 18 qualitative researchers analyzing Reddit data. Quantitative results showed strong alignment between LLM-supported outputs and participants' refinements, alongside reduced workload and high perceived usefulness. Qualitatively, participants reported that DeTAILS accelerated analysis, prompted reflexive engagement with AI outputs, and fostered trust through transparency and control. We contribute: (1) an interactive human-LLM workflow for large-scale qualitative analysis, (2) empirical evidence of its feasibility and researcher experience, and (3) design implications for trustworthy AI-assisted qualitative research.
Subjects: Human-Computer Interaction (cs.HC)
Cite as: arXiv:2510.17575 [cs.HC]
  (or arXiv:2510.17575v2 [cs.HC] for this version)
  https://doi.org/10.48550/arXiv.2510.17575
arXiv-issued DOI via DataCite

Submission history

From: James Wallace [view email]
[v1] Mon, 20 Oct 2025 14:22:57 UTC (3,356 KB)
[v2] Tue, 21 Oct 2025 13:26:59 UTC (3,356 KB)