
Computer Science > Computation and Language

arXiv:2306.09519v2 (cs)
[Submitted on 15 Jun 2023 (v1), last revised 4 Jul 2025 (this version, v2)]

Title: Relation-Aware Network with Attention-Based Loss for Few-Shot Knowledge Graph Completion

Authors: Qiao Qiao, Yuepei Li, Kang Zhou, Qi Li
Abstract: The few-shot knowledge graph completion (FKGC) task aims to predict unseen facts of a relation given only a few reference entity pairs. Current approaches randomly select one negative sample for each reference entity pair to minimize a margin-based ranking loss, which easily leads to a zero-loss problem when the negative sample lies far from the positive sample and thus falls outside the margin. Moreover, an entity should have different representations in different contexts. To tackle these issues, we propose a novel Relation-Aware Network with Attention-Based Loss (RANA) framework. Specifically, to better utilize the plentiful negative samples and alleviate the zero-loss issue, we strategically select relevant negative samples and design an attention-based loss function to further differentiate the importance of each negative sample. The intuition is that negative samples more similar to the positive sample contribute more to the model. Further, we design a dynamic relation-aware entity encoder to learn context-dependent entity representations. Experiments demonstrate that RANA outperforms state-of-the-art models on two benchmark datasets.
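
The abstract describes the attention-based loss only in prose, so a small illustration may help. Below is a minimal, hypothetical PyTorch sketch of a ranking loss over multiple negative samples in which each negative's margin violation is weighted by a softmax over the negative scores, so negatives scoring closer to the positive (the harder ones) contribute more and a single far-away negative can no longer zero out the loss. The function name, tensor shapes, score convention, and the particular softmax weighting are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def attention_weighted_ranking_loss(pos_score, neg_scores, margin=1.0):
    """Hedged sketch of an attention-based loss over many negatives.

    pos_score:  (batch,)          score of each positive entity pair
    neg_scores: (batch, num_neg)  scores of the sampled negative pairs
    Convention (assumed): higher score = more plausible triple.
    """
    # Per-negative margin violation; a plain margin loss with one random
    # negative is zero whenever that negative falls outside the margin.
    violations = F.relu(margin - pos_score.unsqueeze(1) + neg_scores)  # (batch, num_neg)

    # Attention weights over negatives: negatives scoring closer to the
    # positive (harder negatives) receive larger weight.
    attn = torch.softmax(neg_scores, dim=1)                            # (batch, num_neg)

    # Weighted sum keeps a gradient as long as any negative violates the margin.
    return (attn * violations).sum(dim=1).mean()

# Toy usage with random scores: 4 queries, 16 negatives each.
pos = torch.randn(4)
negs = torch.randn(4, 16)
print(attention_weighted_ranking_loss(pos, negs))
```

The design point mirrors the abstract: spreading the loss across many strategically chosen negatives and weighting them by their similarity to the positive lets hard negatives dominate the gradient instead of being discarded by the margin.
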
Comments: PAKDD 2023 conference paper
Subjects: Computation and Language (cs.CL); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:2306.09519 [cs.CL]
  (or arXiv:2306.09519v2 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2306.09519
arXiv-issued DOI via DataCite

Submission history

From: Qiao Qiao
[v1] Thu, 15 Jun 2023 21:41:43 UTC (796 KB)
[v2] Fri, 4 Jul 2025 22:52:34 UTC (797 KB)
Full-text links:

Access Paper:

  • View PDF
  • HTML (experimental)
  • TeX Source
  • View license