
Computer Science > Artificial Intelligence

arXiv:2002.03531 (cs)
[Submitted on 10 Feb 2020 (v1), last revised 2 Sep 2025 (this version, v2)]

Title: A Novel Kuhnian Ontology for Epistemic Classification of STM Scholarly Articles


Authors: Khalid M. Saqr
Abstract: Despite rapid gains in scale, research evaluation still relies on opaque, lagging proxies. To serve the scientific community, we pursue transparency: reproducible, auditable epistemic classification useful for funding and policy. Here we formalize KGX3 as a scenario-based model for mapping Kuhnian stages from research papers, prove determinism of the classification pipeline, and define the epistemic manifold that yields paradigm maps. We report validation across recent corpora, operational complexity at global scale, and governance that preserves interpretability while protecting core IP. The system delivers early, actionable signals of drift, crisis, and shift unavailable to citation metrics or citations-anchored NLP. KGX3 is the latest iteration of a deterministic epistemic engine developed since 2019, originating as Soph.io (2020), advanced as iKuhn (2024), and field-tested through Preprint Watch in 2025.
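The abstract's claim that the classification pipeline is deterministic can be illustrated with a minimal sketch. This is not the authors' implementation: the stage names, keyword lists, and scoring rule below are illustrative assumptions. The point is only that fixed rules plus a fixed tie-break make the output a pure function of the input text, which is what determinism of such a pipeline requires.

```python
# Hypothetical keyword table: maps each assumed Kuhnian stage to cue words.
STAGE_KEYWORDS = {
    "normal-science": ["benchmark", "extend", "replicate", "incremental"],
    "drift":          ["anomaly", "discrepancy", "unexpected", "deviation"],
    "crisis":         ["contradiction", "fails", "breakdown", "irreconcilable"],
    "shift":          ["new paradigm", "reconceptualize", "overturn"],
}

def classify_stage(text: str) -> str:
    """Score each stage by keyword counts; break ties alphabetically,
    so the same input always yields the same label."""
    tokens = text.lower()
    scores = {
        stage: sum(tokens.count(kw) for kw in kws)
        for stage, kws in STAGE_KEYWORDS.items()
    }
    # Sort key: highest score first, then alphabetical stage name.
    return min(scores, key=lambda s: (-scores[s], s))

abstract = "We replicate the benchmark and extend it with incremental tests."
print(classify_stage(abstract))  # -> normal-science
```

Because there is no randomness, no learned state, and a total ordering on ties, rerunning the function on the same corpus reproduces the same paradigm map, making the classification auditable in the sense the abstract describes.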
Subjects: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
MSC classes: 68T50, 68T30, 68Q70, 91D30, 62P25
ACM classes: H.3.1; I.2.4; I.2.7
Cite as: arXiv:2002.03531 [cs.AI]
  (or arXiv:2002.03531v2 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2002.03531
arXiv-issued DOI via DataCite

Submission history

From: Khalid Saqr [view email]
[v1] Mon, 10 Feb 2020 04:00:07 UTC (1,300 KB)
[v2] Tue, 2 Sep 2025 13:46:02 UTC (26 KB)
Full-text links:

Access Paper:
  • View PDF
  • HTML (experimental)
  • TeX Source

View license

