
Computer Science > Machine Learning

arXiv:2106.00322 (cs)
[Submitted on 1 Jun 2021]

Title: Sequential Domain Adaptation by Synthesizing Distributionally Robust Experts

Authors: Bahar Taskesen, Man-Chung Yue, Jose Blanchet, Daniel Kuhn, Viet Anh Nguyen
Abstract: Least squares estimators, when trained on a few target domain samples, may predict poorly. Supervised domain adaptation aims to improve the predictive accuracy by exploiting additional labeled training samples from a source distribution that is close to the target distribution. Given available data, we investigate novel strategies to synthesize a family of least squares estimator experts that are robust with regard to moment conditions. When these moment conditions are specified using Kullback-Leibler or Wasserstein-type divergences, we can find the robust estimators efficiently using convex optimization. We use the Bernstein online aggregation algorithm on the proposed family of robust experts to generate predictions for the sequential stream of target test samples. Numerical experiments on real data show that the robust strategies may outperform non-robust interpolations of the empirical least squares estimators.
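The abstract describes two ingredients: a family of least squares "experts" made robust to perturbations of the data distribution, and an online aggregation scheme that weights those experts as target samples arrive sequentially. The sketch below illustrates the aggregation step only, using a simplified exponentially weighted average forecaster in place of the paper's Bernstein online aggregation (which adds a second-order correction to this multiplicative update), and plain ridge estimators with varying regularization strengths as a hypothetical stand-in for the distributionally robust experts; all data and parameter choices here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: experts fitted on source data are ridge estimators
# with different regularization strengths (a stand-in for the paper's
# distributionally robust least squares experts).
n_source, d = 200, 5
w_true = rng.normal(size=d)
X_src = rng.normal(size=(n_source, d))
y_src = X_src @ w_true + 0.1 * rng.normal(size=n_source)

lambdas = [0.01, 0.1, 1.0, 10.0]
experts = [
    np.linalg.solve(X_src.T @ X_src + lam * np.eye(d), X_src.T @ y_src)
    for lam in lambdas
]

# Simplified exponentially weighted aggregation over a sequential stream
# of target samples: predict with the weighted mixture, observe the label,
# then downweight each expert by its squared loss.
eta = 0.5                                   # learning rate (assumed)
weights = np.ones(len(experts)) / len(experts)
X_tgt = rng.normal(size=(50, d))
y_tgt = X_tgt @ w_true + 0.1 * rng.normal(size=50)

total_loss = 0.0
for x, y in zip(X_tgt, y_tgt):
    preds = np.array([x @ w for w in experts])
    y_hat = weights @ preds                 # aggregated prediction
    total_loss += (y_hat - y) ** 2
    losses = (preds - y) ** 2               # per-expert squared losses
    weights *= np.exp(-eta * losses)        # multiplicative update
    weights /= weights.sum()                # renormalize

print("average target loss:", total_loss / len(y_tgt))
```

The weights concentrate on whichever expert's regularization level best matches the target stream, which is the mechanism by which the aggregated predictor can track the best robust expert in hindsight.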
Subjects: Machine Learning (cs.LG); Optimization and Control (math.OC); Machine Learning (stat.ML)
Cite as: arXiv:2106.00322 [cs.LG]
  (or arXiv:2106.00322v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2106.00322
arXiv-issued DOI via DataCite

Submission history

From: Bahar Taskesen [view email]
[v1] Tue, 1 Jun 2021 08:51:55 UTC (1,041 KB)

