Computer Science > Multimedia

arXiv:2506.09506 (cs)
[Submitted on 11 Jun 2025]

Title: Dynamic Sub-region Search in Homogeneous Collections Using CLIP

Authors: Bastian Jäckl, Vojtěch Kloda, Daniel A. Keim, Jakub Lokoč
Abstract: Querying with text-image-based search engines in highly homogeneous domain-specific image collections is challenging for users, as they often struggle to provide descriptive text queries. For example, in an underwater domain, users can usually characterize entities only with abstract labels, such as corals and fish, which leads to low recall rates. Our work investigates whether recall can be improved by supplementing text queries with position information. Specifically, we explore dynamic image partitioning approaches that divide candidates into semantically meaningful regions of interest. Instead of querying entire images, users can specify regions they recognize. This enables the use of position constraints while preserving the semantic capabilities of multimodal models. We introduce and evaluate strategies for integrating position constraints into semantic search models and compare them against static partitioning approaches. Our evaluation highlights both the potential and the limitations of sub-region-based search methods using dynamic partitioning. Dynamic search models achieve up to double the retrieval performance compared to static partitioning approaches but are highly sensitive to perturbations in the specified query positions.
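The core idea of the abstract — scoring an image by its best region match, weighted by how close that region lies to a user-specified position — can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the Gaussian position weight, and the toy 2-D embeddings are all invented here; in the actual system the region embeddings would come from a CLIP image encoder and the text embedding from its text encoder.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def position_weighted_score(text_emb, regions, query_pos, sigma=0.25):
    """Score an image by its best position-weighted region match.

    regions   -- list of (region_embedding, (cx, cy)) pairs, with region
                 centers in normalized [0, 1] x [0, 1] image coordinates.
    query_pos -- (x, y) position the user associates with the text query.
    sigma     -- bandwidth of the (illustrative) Gaussian position weight.
    """
    best = 0.0
    for emb, (cx, cy) in regions:
        dist_sq = (cx - query_pos[0]) ** 2 + (cy - query_pos[1]) ** 2
        weight = np.exp(-dist_sq / (2.0 * sigma ** 2))  # decays with distance
        best = max(best, weight * cosine(text_emb, emb))
    return best

# Toy example: a 2-D "embedding" space standing in for CLIP's high-dim space.
text = np.array([1.0, 0.0])              # e.g. the query "coral"
regions = [
    (np.array([1.0, 0.0]), (0.2, 0.2)),  # coral-like region, top-left
    (np.array([0.0, 1.0]), (0.8, 0.8)),  # fish-like region, bottom-right
]

print(position_weighted_score(text, regions, (0.2, 0.2)))  # query near coral
print(position_weighted_score(text, regions, (0.8, 0.8)))  # query far from it
```

The same query text ranks the image much higher when the specified position lands on the matching region, which also shows the sensitivity the abstract notes: perturbing the query position away from the true region collapses the score.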
Comments: 18 pages, 4 figures, 5 tables
Subjects: Multimedia (cs.MM)
MSC classes: 68U10
ACM classes: H.3.3; I.4.10; H.2.8
Cite as: arXiv:2506.09506 [cs.MM]
  (or arXiv:2506.09506v1 [cs.MM] for this version)
  https://doi.org/10.48550/arXiv.2506.09506
arXiv-issued DOI via DataCite

Submission history

From: Bastian Jäckl
[v1] Wed, 11 Jun 2025 08:25:22 UTC (3,959 KB)