arXiv:2310.12059 (cs)
[Submitted on 18 Oct 2023 (v1), last revised 13 May 2025 (this version, v5)]

Title: Evaluating the Symbol Binding Ability of Large Language Models for Multiple-Choice Questions in Vietnamese General Education


Authors: Duc-Vu Nguyen, Quoc-Nam Nguyen
Abstract: In this paper, we evaluate the ability of large language models (LLMs) to perform multiple-choice symbol binding (MCSB) for multiple-choice question answering (MCQA) tasks in zero-shot, one-shot, and few-shot settings. We focus on Vietnamese, which has fewer challenging MCQA datasets than English. The two existing datasets, ViMMRC 1.0 and ViMMRC 2.0, focus on literature. Recent research in Vietnamese natural language processing (NLP) has used the Vietnamese National High School Graduation Examination (VNHSGE) from 2019 to 2023 to evaluate ChatGPT, but these studies have mainly examined how ChatGPT solves the VNHSGE step by step. We aim to create a novel, high-quality dataset by providing structured guidelines for typing LaTeX formulas for mathematics, physics, chemistry, and biology. Because it is typed in a strict LaTeX style, this dataset can be used to evaluate the MCSB ability of both LLMs and smaller language models (LMs). We focus on predicting the character (A, B, C, or D) that is the most likely answer to a question, given the context of the question. Our evaluation of six well-known LLMs, namely BLOOMZ-7.1B-MT, LLaMA-2-7B, LLaMA-2-70B, GPT-3, GPT-3.5, and GPT-4.0, on the ViMMRC 1.0 and ViMMRC 2.0 benchmarks and our proposed dataset shows promising results for the MCSB ability of LLMs on Vietnamese. The dataset is available for research purposes only.
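The MCSB setup described in the abstract — predicting which answer symbol (A, B, C, or D) the model scores highest — can be sketched as follows. This is a minimal illustration, not the authors' implementation: `letter_logprobs` is a hypothetical stand-in for the log-probabilities an LLM assigns to each answer symbol after the question prompt, and the scores below are made up.

```python
# Minimal sketch of multiple-choice symbol binding (MCSB) evaluation:
# pick the answer letter the model scores highest, then measure accuracy
# against the answer key.

LETTERS = ["A", "B", "C", "D"]

def predict_answer(letter_logprobs: dict) -> str:
    """Return the answer symbol with the highest model score."""
    return max(LETTERS, key=lambda s: letter_logprobs.get(s, float("-inf")))

def mcsb_accuracy(predictions: list, gold: list) -> float:
    """Fraction of questions where the predicted symbol matches the key."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

# Toy scores for three questions (hypothetical values, not real model output).
scores = [
    {"A": -2.3, "B": -0.4, "C": -1.9, "D": -3.0},  # model prefers B
    {"A": -0.1, "B": -2.0, "C": -2.5, "D": -2.8},  # model prefers A
    {"A": -1.5, "B": -1.6, "C": -0.2, "D": -2.2},  # model prefers C
]
preds = [predict_answer(s) for s in scores]
print(preds)                                   # ['B', 'A', 'C']
print(mcsb_accuracy(preds, ["B", "A", "D"]))   # 2 of 3 correct
```

In practice the per-letter scores would come from the LLM's output logits at the position following the prompt; this sketch only shows the selection and scoring step.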
Comments: Accepted at SoICT 2023
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2310.12059 [cs.CL]
  (or arXiv:2310.12059v5 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2310.12059
arXiv-issued DOI via DataCite

Submission history

From: Quoc-Nam Nguyen [view email]
[v1] Wed, 18 Oct 2023 15:48:07 UTC (253 KB)
[v2] Mon, 23 Oct 2023 03:33:18 UTC (253 KB)
[v3] Thu, 16 Nov 2023 14:04:15 UTC (459 KB)
[v4] Tue, 29 Apr 2025 05:18:29 UTC (308 KB)
[v5] Tue, 13 May 2025 04:23:12 UTC (308 KB)