
Computer Science > Computers and Society

arXiv:2501.02531v1 (cs)
[Submitted on 5 Jan 2025 (this version), latest version 27 Aug 2025 (v3)]

Title: Towards New Benchmark for AI Alignment & Sentiment Analysis in Socially Important Issues: A Comparative Study of Human and LLMs in the Context of AGI

Authors: Ljubisa Bojic, Dylan Seychell, Milan Cabarkapa
Abstract: With the expansion of neural networks such as large language models, humanity is moving towards superintelligence at an exponential pace. As various AI systems are increasingly integrated into the fabric of societies, by recommending values, devising creative solutions, and making decisions, it becomes critical to assess how these systems impact humans in the long run. This research aims to contribute towards establishing a benchmark for evaluating the sentiment of various large language models on socially important issues. The methodology adopted was a Likert-scale survey. Seven LLMs, including GPT-4 and Bard, were analyzed and compared against sentiment data from three independent human sample populations. Temporal variations in sentiment were also evaluated over three consecutive days. The results highlighted a diversity in sentiment scores among the LLMs, ranging from 3.32 to 4.12 out of 5. GPT-4 recorded the most positive sentiment towards AGI, whereas Bard leaned towards neutrality. The human samples, by contrast, showed a lower average sentiment of 2.97. The temporal comparison revealed differences in sentiment evolution among the LLMs over the three days, ranging from 1.03% to 8.21%. The study's analysis outlines potential conflicts of interest and possibilities of bias in the LLMs' sentiment formation. The results indicate that LLMs, akin to human cognitive processes, could develop unique sentiments and subtly influence societies' perceptions of the various opinions formed within them.
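A minimal illustrative sketch (not the authors' code; the model names and Likert responses below are hypothetical) of how figures like those in the abstract could be derived: per-model mean scores on a 1-5 Likert scale, and the relative variation between daily means across three days.

# Hypothetical 1-5 Likert responses per model per day; real data would
# come from the survey described in the paper.
from statistics import mean

responses = {
    "model_a": {"day1": [4, 5, 4, 3], "day2": [4, 4, 4, 3], "day3": [5, 4, 4, 4]},
    "model_b": {"day1": [3, 3, 4, 3], "day2": [3, 4, 3, 3], "day3": [3, 3, 3, 4]},
}

for model, days in responses.items():
    daily_means = {day: mean(scores) for day, scores in days.items()}
    overall = mean(daily_means.values())
    # One plausible reading of the reported 1.03%-8.21% temporal variation:
    # relative change between the lowest and highest daily mean, in percent.
    low, high = min(daily_means.values()), max(daily_means.values())
    variation = (high - low) / low * 100
    print(f"{model}: overall={overall:.2f}/5, daily variation={variation:.2f}%")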
Comments: 20 pages, 1 figure
Subjects: Computers and Society (cs.CY); Computation and Language (cs.CL)
Cite as: arXiv:2501.02531 [cs.CY]
  (or arXiv:2501.02531v1 [cs.CY] for this version)
  https://doi.org/10.48550/arXiv.2501.02531
arXiv-issued DOI via DataCite

Submission history

From: Ljubisa Bojic
[v1] Sun, 5 Jan 2025 13:18:13 UTC (423 KB)
[v2] Mon, 25 Aug 2025 15:23:08 UTC (500 KB)
[v3] Wed, 27 Aug 2025 13:49:46 UTC (718 KB)