
Computer Science > Software Engineering

arXiv:2402.05979v1 (cs)
[Submitted on 7 Feb 2024]

Title: On the Standardization of Behavioral Use Clauses and Their Adoption for Responsible Licensing of AI


Authors: Daniel McDuff, Tim Korjakow, Scott Cambo, Jesse Josua Benjamin, Jenny Lee, Yacine Jernite, Carlos Muñoz Ferrandis, Aaron Gokaslan, Alek Tarkowski, Joseph Lindley, A. Feder Cooper, Danish Contractor
Abstract: Growing concerns over negligent or malicious uses of AI have increased the appetite for tools that help manage the risks of the technology. In 2018, licenses with behavioral-use clauses (commonly referred to as Responsible AI Licenses) were proposed to give developers a framework for releasing AI assets while specifying restrictions on their use to mitigate negative applications. As of the end of 2023, on the order of 40,000 software and model repositories have adopted responsible AI licenses. Notable models licensed with behavioral-use clauses include BLOOM and LLaMA2 (language), Stable Diffusion (image), and GRID (robotics). This paper explores why and how these licenses have been adopted, and why and how they have been adapted to fit particular use cases. We use a mixed-methods approach combining qualitative interviews, clustering of license clauses, and quantitative analysis of license adoption. Based on this evidence we take the position that responsible AI licenses need standardization to avoid confusing users or diluting their impact. At the same time, customization of behavioral restrictions is also appropriate in some contexts (e.g., medical domains). We advocate for "standardized customization" that can meet users' needs and can be supported via tooling.
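
The methodology mentions clustering of license clauses. As a rough illustration only (this is not the paper's code or pipeline), the sketch below groups a few toy clause snippets by theme using TF-IDF features and k-means via scikit-learn; the clause texts, the feature representation, and the cluster count are all assumptions made for the example.

```python
# Hypothetical sketch, not the paper's pipeline: group behavioral-use
# clause texts into rough themes using TF-IDF features and k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy clause snippets standing in for clauses extracted from real licenses.
clauses = [
    "The licensee shall not use the model for medical diagnosis.",
    "You may not use the software to provide medical advice.",
    "The model must not be used for surveillance of individuals.",
    "Use for biometric surveillance or tracking is prohibited.",
    "The software shall not be used to generate disinformation.",
    "You may not use outputs to spread misleading information.",
]

# Represent each clause as a TF-IDF vector over unigrams and bigrams.
vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
features = vectorizer.fit_transform(clauses)

# Cluster into three themes (k chosen by eye for this toy example).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# Print clauses grouped by cluster label.
for label, clause in sorted(zip(labels, clauses)):
    print(label, clause)
```

In practice the clauses would be extracted from real license texts and the number of clusters selected by inspection or a criterion such as silhouette score, rather than fixed in advance as here.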
Subjects: Software Engineering (cs.SE); Artificial Intelligence (cs.AI)
Cite as: arXiv:2402.05979 [cs.SE]
  (or arXiv:2402.05979v1 [cs.SE] for this version)
  https://doi.org/10.48550/arXiv.2402.05979
arXiv-issued DOI via DataCite

Submission history

From: Daniel McDuff
[v1] Wed, 7 Feb 2024 22:29:42 UTC (635 KB)