
arXiv:2510.19002 (cs)
[Submitted on 21 Oct 2025]

Title: Impartial Selection with Predictions


Authors: Javier Cembrano, Felix Fischer, Max Klimm
Abstract: We study the selection of agents based on mutual nominations, a theoretical problem with many applications from committee selection to AI alignment. As agents both select and are selected, they may be incentivized to misrepresent their true opinion about the eligibility of others to influence their own chances of selection. Impartial mechanisms circumvent this issue by guaranteeing that the selection of an agent is independent of the nominations cast by that agent. Previous research has established strong bounds on the performance of impartial mechanisms, measured by their ability to approximate the number of nominations for the most highly nominated agents. We study to what extent the performance of impartial mechanisms can be improved if they are given a prediction of a set of agents receiving a maximum number of nominations. Specifically, we provide bounds on the consistency and robustness of such mechanisms, where consistency measures the performance of the mechanisms when the prediction is accurate and robustness its performance when the prediction is inaccurate. For the general setting where up to $k$ agents are to be selected and agents nominate any number of other agents, we give a mechanism with consistency $1-O\big(\frac{1}{k}\big)$ and robustness $1-\frac{1}{e}-O\big(\frac{1}{k}\big)$. For the special case of selecting a single agent based on a single nomination per agent, we prove that $1$-consistency can be achieved while guaranteeing $\frac{1}{2}$-robustness. A close comparison with previous results shows that (asymptotically) optimal consistency can be achieved with little to no sacrifice in terms of robustness.
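The impartiality property described in the abstract can be illustrated with a minimal sketch of the classical two-partition idea (a standard textbook construction, not the prediction-augmented mechanism this paper proposes; all names here are illustrative): agents are split into two sides at random, and a winner is chosen from side A using only nominations cast by side B, so no agent's own ballot can affect its own chance of selection.

```python
import random

def partition_select(nominations, seed=0):
    """Impartially select one agent (illustrative sketch, not the paper's
    mechanism): randomly split agents into sides A and B, then pick the
    side-A agent receiving the most nominations from side-B agents.

    nominations: dict mapping each agent to the set of agents it nominates.
    """
    rng = random.Random(seed)
    agents = list(nominations)
    side_a = [i for i in agents if rng.random() < 0.5]
    side_b = [i for i in agents if i not in side_a]
    if not side_a:
        return None  # degenerate split: no candidates, select nobody
    # Score each candidate in A by nominations received from B only;
    # ballots cast by side-A agents (including the candidates) are ignored.
    score = {i: sum(i in nominations[j] for j in side_b) for i in side_a}
    return max(score, key=score.get)
```

Impartiality here is easy to check: an agent in A has its ballot ignored entirely, and an agent in B is never a candidate, so changing one's own nominations never changes whether one is selected. The cost, as the paper's performance measure makes precise, is that the most-nominated agent may land on the wrong side of the split.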
Subjects: Computer Science and Game Theory (cs.GT); Machine Learning (cs.LG); Theoretical Economics (econ.TH); Optimization and Control (math.OC)
Cite as: arXiv:2510.19002 [cs.GT]
  (or arXiv:2510.19002v1 [cs.GT] for this version)
  https://doi.org/10.48550/arXiv.2510.19002
arXiv-issued DOI via DataCite

Submission history

From: Javier Cembrano [view email]
[v1] Tue, 21 Oct 2025 18:27:08 UTC (39 KB)

