
Computer Science > Computers and Society

arXiv:2501.02531v2 (cs)
[Submitted on 5 Jan 2025 (v1), revised 25 Aug 2025 (this version, v2), latest version 27 Aug 2025 (v3)]

Title: Towards New Benchmark for AI Alignment & Sentiment Analysis in Socially Important Issues: A Comparative Study of Human and LLMs in the Context of AGI


Authors: Ljubisa Bojic, Dylan Seychell, Milan Cabarkapa
Abstract: As general-purpose artificial intelligence systems become increasingly integrated into society and are used for information seeking, content generation, problem solving, textual analysis, coding, and running processes, it is crucial to assess their long-term impact on humans. This research explores the sentiment of large language models (LLMs) and humans toward artificial general intelligence (AGI) using a Likert-scale survey. Seven LLMs, including GPT-4 and Bard, were analyzed and compared with sentiment data from three independent human sample populations. Temporal variations in sentiment were also evaluated over three consecutive days. The results show a diversity in sentiment scores among LLMs, ranging from 3.32 to 4.12 out of 5. GPT-4 recorded the most positive sentiment toward AGI, while Bard leaned toward a neutral sentiment. In contrast, the human samples showed a lower average sentiment of 2.97. The analysis outlines potential conflicts of interest and biases in the sentiment formation of LLMs, and indicates that LLMs could subtly influence societal perceptions. To address the need for regulatory oversight and culturally grounded assessments of AI systems, we introduce the Societal AI Alignment and Sentiment Benchmark (SAAS-AI), which leverages multidimensional prompts and empirically validated societal value frameworks to evaluate language model outputs across temporal, model, and multilingual axes. This benchmark is designed to guide policymakers and AI agencies, including within frameworks such as the EU AI Act, by providing robust, actionable insights into AI alignment with human values, public sentiment, and ethical norms at both national and international levels. Future research should further refine the operationalization of the SAAS-AI benchmark and systematically evaluate its effectiveness through comprehensive empirical testing.
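The comparison reported above reduces to averaging 1-5 Likert responses per group. As a minimal illustrative sketch (not the authors' code or data — all scores and group names below are hypothetical), the computation looks like this:

```python
# Hedged sketch: mean Likert-scale sentiment per respondent group,
# mirroring the LLM-vs-human comparison described in the abstract.
# All responses and labels here are made-up placeholders.

def mean_sentiment(responses):
    """Average a list of 1-5 Likert responses."""
    return sum(responses) / len(responses)

# Hypothetical 1-5 Likert responses toward AGI
llm_scores = {
    "model_a": [4, 5, 4, 4],   # leans positive
    "model_b": [3, 3, 4, 3],   # near neutral
}
human_scores = [3, 2, 4, 3, 3]

llm_means = {name: mean_sentiment(s) for name, s in llm_scores.items()}
print(llm_means)                      # per-model averages
print(mean_sentiment(human_scores))   # human-sample average
```

The paper's reported figures (3.32-4.12 for LLMs vs. 2.97 for humans) would come from applying the same averaging to the actual survey responses.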
Comments: 20 pages, 1 figure
Subjects: Computers and Society (cs.CY); Computation and Language (cs.CL)
Cite as: arXiv:2501.02531 [cs.CY]
  (or arXiv:2501.02531v2 [cs.CY] for this version)
  https://doi.org/10.48550/arXiv.2501.02531
arXiv-issued DOI via DataCite

Submission history

From: Ljubisa Bojic
[v1] Sun, 5 Jan 2025 13:18:13 UTC (423 KB)
[v2] Mon, 25 Aug 2025 15:23:08 UTC (500 KB)
[v3] Wed, 27 Aug 2025 13:49:46 UTC (718 KB)


