Computer Science > Computer Vision and Pattern Recognition

arXiv:2408.07243 (cs)
[Submitted on 14 Aug 2024 (v1), last revised 17 Sep 2025 (this version, v2)]

Title: Leveraging Perceptual Scores for Dataset Pruning in Computer Vision Tasks


Authors: Raghavendra Singh
Abstract: In this paper we propose a per-image score for coreset selection in image classification and semantic segmentation tasks. The score is the entropy of an image, approximated by the bits-per-pixel of its compressed version. The score is therefore intrinsic to the image and requires no supervision or training. It is simple to compute and readily available, since images are already stored in compressed formats. Our motivation is that most other scores proposed in the literature are expensive to compute; more importantly, we want a score that captures the perceptual complexity of an image. Entropy is one such measure: cluttered images tend to have higher entropy. However, sampling only low-entropy iconic images leads to biased learning and an overall decrease in test performance with current deep learning models. To mitigate this bias, we use a graph-based method that increases the spatial diversity of the selected samples. We show that this simple score yields good results, particularly for semantic segmentation tasks.
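The scoring rule the abstract describes can be sketched minimally. The snippet below approximates an image's entropy as bits-per-pixel of a compressed buffer; using zlib on a raw grayscale byte buffer is an assumption made here for self-containment (the paper reads the size of the image's stored compressed format), and the function name is illustrative, not from the paper.

```python
import zlib

def bits_per_pixel_score(pixels: bytes, width: int, height: int) -> float:
    """Approximate an image's entropy as bits-per-pixel of its compressed form.

    `pixels` is a raw buffer (e.g. 8-bit grayscale); zlib stands in for the
    compression format the stored image actually uses.
    """
    compressed_bits = 8 * len(zlib.compress(pixels, 9))
    return compressed_bits / (width * height)

# A more cluttered (harder-to-compress) image scores higher than a flat one.
flat = bytes(64 * 64)              # uniform image: minimal entropy
textured = bytes(range(256)) * 16  # repeating gradient: more structure
assert bits_per_pixel_score(flat, 64, 64) < bits_per_pixel_score(textured, 64, 64)
```

Because the score depends only on the compressed size, it can be read directly from files on disk with no model inference, which is the cheapness the abstract emphasizes.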
Comments: Non-archival presentation at the 1st Workshop on Dataset Distillation, CVPR 2024
Subjects: Computer Vision and Pattern Recognition (cs.CV) ; Information Theory (cs.IT)
Cite as: arXiv:2408.07243 [cs.CV]
  (or arXiv:2408.07243v2 [cs.CV] for this version)
  https://doi.org/10.48550/arXiv.2408.07243
arXiv-issued DOI via DataCite

Submission history

From: Raghavendra Singh
[v1] Wed, 14 Aug 2024 00:55:52 UTC (1,338 KB)
[v2] Wed, 17 Sep 2025 08:54:25 UTC (1,214 KB)