Computer Science > Information Theory

arXiv:2510.04167v2 (cs)
[Submitted on 5 Oct 2025 (v1), last revised 16 Oct 2025 (this version, v2)]

Title: Multiplicative Turing Ensembles, Pareto's Law, and Creativity

Authors: Alexander Kolpakov, Aidan Rocke
Abstract: We study integer-valued multiplicative dynamics driven by i.i.d. prime multipliers and connect their macroscopic statistics to universal codelengths. We introduce the Multiplicative Turing Ensemble (MTE) and show how it arises naturally - though not uniquely - from ensembles of probabilistic Turing machines. Our modeling principle is variational: taking Elias' Omega codelength as an energy and imposing maximum entropy constraints yields a canonical Gibbs prior on integers and, by restriction, on primes. Under mild tail assumptions, this prior induces exponential tails for log-multipliers (up to slowly varying corrections), which in turn generate Pareto tails for additive gaps. We also prove time-average laws for the Omega codelength along MTE trajectories. Empirically, on Debian and PyPI package size datasets, a scaled Omega prior achieves the lowest KL divergence against codelength histograms. Taken together, the theory-data comparison suggests a qualitative split: machine-adapted regimes (Gibbs-aligned, finite first moment) exhibit clean averaging behavior, whereas human-generated complexity appears to sit beyond this regime, with tails heavy enough to produce an unbounded first moment, and therefore no averaging of the same kind.
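
As a concrete illustration of the construction described in the abstract, the following minimal Python sketch (not the authors' auxiliary code linked in the Comments field) draws i.i.d. prime multipliers from a Gibbs prior P(p) proportional to exp(-beta * L_omega(p)) built from the Elias omega codelength, runs one MTE-style trajectory X_{t+1} = X_t * p_t, and compares the normalized codelength L_omega(X_T)/T with the mean log2 multiplier as a rough check of the averaging behaviour expected in the machine-adapted (finite first moment) regime. The prime cutoff, the inverse temperature, and the trajectory length are arbitrary choices made only for this illustration.

    # Illustrative sketch only (not the paper's code): simulate one
    # Multiplicative Turing Ensemble (MTE)-style trajectory X_{t+1} = X_t * p_t
    # with i.i.d. prime multipliers p_t drawn from a Gibbs prior
    # P(p) proportional to exp(-beta * L_omega(p)), where L_omega is the
    # Elias omega codelength. The prime cutoff, beta, and trajectory length
    # below are assumptions made for demonstration purposes.
    import math
    import random

    def elias_omega_len(n):
        """Length in bits of the Elias omega code of a positive integer n."""
        length = 1  # terminating '0' bit
        while n > 1:
            bits = n.bit_length()  # binary form of n occupies `bits` bits
            length += bits
            n = bits - 1           # recurse on floor(log2 n)
        return length

    def primes_up_to(limit):
        """Sieve of Eratosthenes."""
        sieve = [True] * (limit + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(limit ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = [False] * len(sieve[i * i::i])
        return [i for i, is_prime in enumerate(sieve) if is_prime]

    def gibbs_prime_prior(primes, beta):
        """Gibbs weights P(p) ~ exp(-beta * L_omega(p)), restricted to primes."""
        weights = [math.exp(-beta * elias_omega_len(p)) for p in primes]
        total = sum(weights)
        return [w / total for w in weights]

    def simulate_mte(primes, probs, steps, rng):
        """One MTE trajectory: X_0 = 1, X_{t+1} = X_t * p_t; return X_T."""
        x = 1
        for p in rng.choices(primes, weights=probs, k=steps):
            x *= p
        return x

    if __name__ == "__main__":
        rng = random.Random(0)
        primes = primes_up_to(10_000)   # assumed truncation of the prime support
        beta = 1.0                      # assumed inverse temperature
        probs = gibbs_prime_prior(primes, beta)

        T = 2_000
        x_T = simulate_mte(primes, probs, T, rng)

        # Heuristic check of the averaging behaviour in the finite-first-moment
        # regime: L_omega(X_T)/T should be close to the mean log2 multiplier,
        # since L_omega(X_T) = log2(X_T) + lower-order terms.
        avg_codelength = elias_omega_len(x_T) / T
        mean_log2_p = sum(q * math.log2(p) for p, q in zip(primes, probs))
        print(f"L_omega(X_T)/T    = {avg_codelength:.4f}")
        print(f"E[log2 p] (Gibbs) = {mean_log2_p:.4f}")

For this choice of beta and a few thousand steps, the two printed quantities should roughly agree; a prior with tails heavy enough to make the first moment of the log-multiplier unbounded would break this agreement, mirroring the qualitative split described in the abstract.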
Comments: 23 pages, 2 figures, 1 table; auxiliary code available on GitHub (https://github.com/sashakolpakov/mte-pareto/)
Subjects: Information Theory (cs.IT); Computational Complexity (cs.CC); Mathematical Physics (math-ph)
ACM classes: H.1.1
Cite as: arXiv:2510.04167 [cs.IT]
  (or arXiv:2510.04167v2 [cs.IT] for this version)
  https://doi.org/10.48550/arXiv.2510.04167
arXiv-issued DOI via DataCite

Submission history

From: Alexander Kolpakov
[v1] Sun, 5 Oct 2025 12:04:50 UTC (19 KB)
[v2] Thu, 16 Oct 2025 22:19:43 UTC (168 KB)