
Computer Science > Machine Learning

arXiv:2507.20446 (cs)
[Submitted on 28 Jul 2025 (v1), last revised 7 Aug 2025 (this version, v2)]

Title: BOASF: A Unified Framework for Speeding up Automatic Machine Learning via Adaptive Successive Filtering


Authors:Guanghui Zhu, Xin Fang, Feng Cheng, Lei Wang, Wenzhong Chen, Chunfeng Yuan, Yihua Huang
Abstract: Machine learning has achieved great success in many application areas. For non-expert practitioners, however, addressing a machine learning task successfully and efficiently remains challenging: finding the optimal model or hyperparameter combination among a large number of possible alternatives usually requires considerable expert knowledge and experience. To tackle this problem, we propose BOASF, which combines Bayesian Optimization with an Adaptive Successive Filtering (ASF) algorithm under a unified multi-armed bandit framework to automate model selection and hyperparameter optimization. Specifically, BOASF proceeds in multiple evaluation rounds; in each round, promising configurations are selected for each arm using Bayesian optimization, and ASF adaptively discards poorly performing arms early using a Gaussian UCB-based probabilistic model. Furthermore, a Softmax model adaptively allocates the available resources among the promising arms that advance to the next round: an arm with a higher probability of advancing receives more resources. Experimental results show that BOASF speeds up model selection and hyperparameter optimization while achieving more robust and better predictive performance than existing state-of-the-art automatic machine learning methods. Moreover, BOASF achieves better anytime performance under various time budgets.
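The round-based filter-and-allocate loop described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: random sampling stands in for the per-arm Bayesian optimization, and the function name `boasf_sketch`, the `arms`/`evaluate` interface, and the below-average-UCB elimination threshold are assumptions made for the sketch.

```python
import math
import random

def boasf_sketch(arms, evaluate, rounds=3, budget_per_round=12, kappa=1.0, seed=0):
    """Hypothetical sketch of a BOASF-style loop (not the paper's code).

    arms: dict mapping arm name -> sampler(rng) returning a configuration
          (random sampling here stands in for per-arm Bayesian optimization).
    evaluate: callable (arm_name, config) -> score, higher is better.
    """
    rng = random.Random(seed)
    alive = list(arms)
    alloc = {a: max(1, budget_per_round // len(alive)) for a in alive}
    history = {a: [] for a in alive}
    for _ in range(rounds):
        ucb = {}
        for a in alive:
            # Evaluate this arm's share of the round's budget.
            for _ in range(alloc[a]):
                history[a].append(evaluate(a, arms[a](rng)))
            scores = history[a]
            mean = sum(scores) / len(scores)
            var = sum((s - mean) ** 2 for s in scores) / len(scores)
            # Gaussian-UCB-style optimistic score: mean + kappa * std.
            ucb[a] = mean + kappa * math.sqrt(var)
        # Adaptive successive filtering: drop arms whose UCB falls below
        # the average UCB of the surviving arms (threshold is an assumption).
        threshold = sum(ucb.values()) / len(ucb)
        alive = [a for a in alive if ucb[a] >= threshold] or [max(ucb, key=ucb.get)]
        # Softmax resource allocation: arms more likely to advance get a
        # larger share of the next round's budget.
        exps = {a: math.exp(ucb[a]) for a in alive}
        z = sum(exps.values())
        alloc = {a: max(1, round(budget_per_round * exps[a] / z)) for a in alive}
    best = max(alive, key=lambda a: max(history[a]))
    return best, max(history[best])
```

With a toy problem of two arms whose scores come from disjoint ranges, the weaker arm is filtered out after the first round and the remaining budget concentrates on the stronger one.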
Subjects: Machine Learning (cs.LG)
Cite as: arXiv:2507.20446 [cs.LG]
  (or arXiv:2507.20446v2 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2507.20446
arXiv-issued DOI via DataCite

Submission history

From: Xin Fang
[v1] Mon, 28 Jul 2025 00:30:07 UTC (929 KB)
[v2] Thu, 7 Aug 2025 17:12:27 UTC (927 KB)