Computer Science > Information Theory

arXiv:2510.04095 (cs)
[Submitted on 5 Oct 2025]

Title: Volume-Based Lower Bounds to the Capacity of the Gaussian Channel Under Pointwise Additive Input Constraints


Authors: Neri Merhav, Shlomo Shamai (Shitz)
Abstract: We present a family of relatively simple and unified lower bounds on the capacity of the Gaussian channel under a set of pointwise additive input constraints. Specifically, the admissible channel input vectors $\mathbf{x} = (x_1, \ldots, x_n)$ must satisfy $k$ additive cost constraints of the form $\sum_{i=1}^n \phi_j(x_i) \le n \Gamma_j$, $j = 1,2,\ldots,k$, which are enforced pointwise for every $\mathbf{x}$, rather than merely in expectation. More generally, we also consider cost functions that depend on a sliding window of fixed length $m$, namely, $\sum_{i=m}^n \phi_j(x_i, x_{i-1}, \ldots, x_{i-m+1}) \le n \Gamma_j$, $j = 1,2,\ldots,k$, a formulation that naturally accommodates correlation constraints as well as a broad range of other constraints of practical relevance. We propose two classes of lower bounds, derived by two methodologies that both rely on the exact evaluation of the volume exponent associated with the set of input vectors satisfying the given constraints. This evaluation exploits extensions of the method of types to continuous alphabets, the saddle-point method of integration, and basic tools from large deviations theory. The first class of bounds is obtained via the entropy power inequality (EPI), and therefore applies exclusively to continuous-valued inputs. The second class, by contrast, is more general, and it applies to discrete input alphabets as well. It is based on a direct manipulation of mutual information, and it yields stronger and tighter bounds, though at the cost of greater technical complexity. Numerical examples illustrating both types of bounds are provided, and several extensions and refinements are also discussed.
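As an illustration of the pointwise sliding-window constraints described in the abstract, the following sketch checks feasibility of a single input vector. This is not code from the paper; the cost functions, thresholds, and window length below are hypothetical choices made purely for illustration.

```python
# Hypothetical sketch: check whether a given input vector x satisfies k
# pointwise additive cost constraints of the sliding-window form
#     sum_{i=m}^{n} phi_j(x_i, x_{i-1}, ..., x_{i-m+1}) <= n * Gamma_j,
# for j = 1, ..., k.  "Pointwise" means the check must pass for this
# particular x, not merely on average over a codebook.

def satisfies_pointwise(x, phi_list, Gamma, m=1):
    """True iff every constraint holds for this particular vector x."""
    n = len(x)
    for phi, gamma in zip(phi_list, Gamma):
        # Window w = (x_{i-m+1}, ..., x_i), summed over i = m, ..., n
        # (1-based), i.e. i = m-1, ..., n-1 in 0-based indexing.
        total = sum(phi(x[i - m + 1 : i + 1]) for i in range(m - 1, n))
        if total > n * gamma:
            return False
    return True

# Illustrative constraints: a power constraint phi_1(x_i) = x_i^2 with
# Gamma_1 = 1 (window m = 1), and a correlation constraint
# phi_2(x_i, x_{i-1}) = x_i * x_{i-1} with Gamma_2 = 0 (window m = 2).
power = lambda w: w[-1] ** 2
corr = lambda w: w[0] * w[1]

x = [0.5, -0.3, 0.8, 0.1]
print(satisfies_pointwise(x, [power], [1.0], m=1))  # True: sum of squares 0.99 <= 4
print(satisfies_pointwise(x, [corr], [0.0], m=2))   # True: correlation sum -0.31 <= 0
```

The window length `m = 1` recovers the plain additive case $\sum_{i=1}^n \phi_j(x_i) \le n \Gamma_j$; larger `m` accommodates correlation-type constraints as described in the abstract.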
Comments: 37 pages, 4 figures, submitted for publication
Subjects: Information Theory (cs.IT)
Cite as: arXiv:2510.04095 [cs.IT]
  (or arXiv:2510.04095v1 [cs.IT] for this version)
  https://doi.org/10.48550/arXiv.2510.04095
arXiv-issued DOI via DataCite

Submission history

From: Neri Merhav
[v1] Sun, 5 Oct 2025 08:34:08 UTC (79 KB)