
Mathematics > Statistics Theory

arXiv:2504.21787 (math)
[Submitted on 30 Apr 2025 (v1), last revised 22 May 2025 (this version, v2)]

Title: Estimation of discrete distributions in relative entropy, and the deviations of the missing mass


Authors: Jaouad Mourtada
Abstract: We study the problem of estimating a distribution over a finite alphabet from an i.i.d. sample, with accuracy measured in relative entropy (Kullback-Leibler divergence). While optimal expected risk bounds are known, high-probability guarantees remain less well-understood. First, we analyze the classical Laplace (add-one) estimator, obtaining matching upper and lower bounds on its performance and showing its optimality among confidence-independent estimators. We then characterize the minimax-optimal high-probability risk, which is attained via a simple confidence-dependent smoothing technique. Interestingly, the optimal non-asymptotic risk exhibits an additional logarithmic factor over the ideal asymptotic risk. Next, motivated by scenarios where the alphabet exceeds the sample size, we investigate methods that adapt to the sparsity of the distribution at hand. We introduce an estimator using data-dependent smoothing, for which we establish a high-probability risk bound depending on two effective sparsity parameters. As part of the analysis, we also derive a sharp high-probability upper bound on the missing mass.
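The Laplace (add-one) estimator and the missing mass discussed in the abstract are simple to state concretely. The sketch below (illustrative only, not code from the paper; the distribution `p`, alphabet size `k`, and sample size `n` are arbitrary choices) draws an i.i.d. sample, forms the add-one estimate, and evaluates its relative-entropy risk and the missing mass:

```python
import numpy as np

rng = np.random.default_rng(0)

# True distribution over a finite alphabet of size k (illustrative choice).
k = 10
p = np.ones(k) / k  # uniform for simplicity

# Draw an i.i.d. sample of size n and form the empirical counts.
n = 100
sample = rng.choice(k, size=n, p=p)
counts = np.bincount(sample, minlength=k)

# Laplace (add-one) estimator: add 1 to every count before normalizing,
# so every symbol gets positive mass and the KL divergence is finite.
p_hat = (counts + 1) / (n + k)

# Relative entropy (Kullback-Leibler divergence) of the estimate:
# KL(p || p_hat) = sum_j p_j * log(p_j / p_hat_j).
kl = np.sum(p * np.log(p / p_hat))

# Missing mass: total true probability of symbols never seen in the sample.
missing_mass = p[counts == 0].sum()

print(f"KL(p || p_hat) = {kl:.4f}")
print(f"missing mass   = {missing_mass:.4f}")
```

Smoothing with 1 is what makes the estimator "confidence-independent": the added pseudocount does not depend on a target confidence level, which is exactly the class of estimators the paper shows it is optimal within.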
Comments: 54 pages
Subjects: Statistics Theory (math.ST); Information Theory (cs.IT); Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:2504.21787 [math.ST]
  (or arXiv:2504.21787v2 [math.ST] for this version)
  https://doi.org/10.48550/arXiv.2504.21787
arXiv-issued DOI via DataCite

Submission history

From: Jaouad Mourtada [view email]
[v1] Wed, 30 Apr 2025 16:47:10 UTC (56 KB)
[v2] Thu, 22 May 2025 12:10:28 UTC (56 KB)

