Computer Science > Networking and Internet Architecture

arXiv:2403.16986 (cs)
[Submitted on 25 Mar 2024 (v1), last revised 9 Apr 2025 (this version, v3)]

Title: Dynamic Relative Representations for Goal-Oriented Semantic Communications


Authors: Simone Fiorellino, Claudio Battiloro, Emilio Calvanese Strinati, Paolo Di Lorenzo
Abstract: In future 6G wireless networks, semantic and effectiveness aspects of communications will play a fundamental role, incorporating meaning and relevance into transmissions. However, obstacles arise when devices employ diverse languages, logic, or internal representations, leading to semantic mismatches that might jeopardize understanding. In latent space communication, this challenge manifests as misalignment within high-dimensional representations where deep neural networks encode data. This paper presents a novel framework for goal-oriented semantic communication, leveraging relative representations to mitigate semantic mismatches via latent space alignment. We propose a dynamic optimization strategy that adapts relative representations, communication parameters, and computation resources for energy-efficient, low-latency, goal-oriented semantic communications. Numerical results demonstrate our methodology's effectiveness in mitigating mismatches among devices, while optimizing energy consumption, delay, and effectiveness.
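Only the abstract is available here, so the following is a minimal sketch of the relative-representation idea the paper builds on, not the authors' code: each device re-encodes its latent vectors as cosine similarities with a shared set of anchor samples, which makes the resulting representation invariant to rotations of the device-specific latent space and therefore usable for latent-space alignment across mismatched encoders. All names and the toy setup are illustrative assumptions.

import numpy as np

def relative_representation(z, anchors):
    # z: (n, d) latent embeddings from one device's encoder.
    # anchors: (k, d) embeddings of a shared anchor set, produced by the SAME encoder.
    # Returns an (n, k) matrix whose (i, j) entry is cos(z_i, anchor_j).
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    a_n = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return z_n @ a_n.T

# Toy check: two latent spaces that differ by an orthogonal transform
# (a stand-in for semantic mismatch) give identical relative representations.
rng = np.random.default_rng(0)
d, k, n = 16, 8, 4
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random rotation of the latent space

data = rng.standard_normal((n, d))                 # latent codes in device A's space
anchor_data = rng.standard_normal((k, d))          # anchor codes in device A's space

r_a = relative_representation(data, anchor_data)           # device A
r_b = relative_representation(data @ Q, anchor_data @ Q)   # device B (rotated space)
print(np.allclose(r_a, r_b))                       # True: the two spaces are aligned

How the paper dynamically adapts these representations together with communication and computation resources is not specified in the abstract, so no sketch of that optimization is attempted here.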
Subjects: Networking and Internet Architecture (cs.NI); Information Theory (cs.IT); Machine Learning (cs.LG)
Cite as: arXiv:2403.16986 [cs.NI]
  (or arXiv:2403.16986v3 [cs.NI] for this version)
  https://doi.org/10.48550/arXiv.2403.16986
arXiv-issued DOI via DataCite
Related DOI: https://doi.org/10.23919/EUSIPCO63174.2024.10715102
DOI(s) linking to related resources

Submission history

From: Simone Fiorellino [view email]
[v1] Mon, 25 Mar 2024 17:48:06 UTC (398 KB)
[v2] Sun, 30 Jun 2024 10:03:09 UTC (400 KB)
[v3] Wed, 9 Apr 2025 09:41:40 UTC (400 KB)
Full-text links:

Access Paper:

  • View Chinese PDF
  • View PDF
  • HTML (experimental)
  • TeX Source
view license

References & Citations

  • NASA ADS
  • Google Scholar
  • Semantic Scholar