Computer Science > Machine Learning

arXiv:2506.18916 (cs)
[Submitted on 11 Jun 2025]

Title: HI-SQL: Optimizing Text-to-SQL Systems through Dynamic Hint Integration


Authors:Ganesh Parab, Zishan Ahmad, Dagnachew Birru
Abstract: Text-to-SQL generation bridges the gap between natural language and databases, enabling users to query data without requiring SQL expertise. While large language models (LLMs) have significantly advanced the field, challenges remain in handling complex queries that involve multi-table joins, nested conditions, and intricate operations. Existing methods often rely on multi-step pipelines that incur high computational costs, increase latency, and are prone to error propagation. To address these limitations, we propose HI-SQL, a pipeline that incorporates a novel hint generation mechanism utilizing historical query logs to guide SQL generation. By analyzing prior queries, our method generates contextual hints that focus on handling the complexities of multi-table and nested operations. These hints are seamlessly integrated into the SQL generation process, eliminating the need for costly multi-step approaches and reducing reliance on human-crafted prompts. Experimental evaluations on multiple benchmark datasets demonstrate that our approach significantly improves query accuracy of LLM-generated queries while ensuring efficiency in terms of LLM calls and latency, offering a robust and practical solution for enhancing Text-to-SQL systems.
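The abstract describes deriving contextual hints from historical query logs and folding them into a single SQL-generation prompt, avoiding multi-step pipelines. A minimal sketch of that idea (not the authors' code; the pattern checks, hint wording, and prompt layout are illustrative assumptions):

```python
# Illustrative sketch: mine structural "hints" from a log of prior SQL
# queries, then prepend them to a single text-to-SQL prompt.
import re


def derive_hints(query_log):
    """Scan historical SQL queries for recurring structural patterns."""
    hints = []
    if any(" join " in q.lower() for q in query_log):
        hints.append("Queries on this schema often need multi-table JOINs.")
    if any(re.search(r"\(\s*select", q, re.IGNORECASE) for q in query_log):
        hints.append("Nested subqueries are common for filtering conditions.")
    if any("group by" in q.lower() for q in query_log):
        hints.append("Aggregations with GROUP BY appear frequently.")
    return hints


def build_prompt(question, schema, hints):
    """Assemble one prompt for a single LLM call, hints included."""
    hint_block = "\n".join(f"- {h}" for h in hints)
    return (
        f"Schema:\n{schema}\n\n"
        f"Hints from historical queries:\n{hint_block}\n\n"
        f"Question: {question}\nSQL:"
    )


log = [
    "SELECT a.id FROM a JOIN b ON a.id = b.aid",
    "SELECT * FROM t WHERE x IN (SELECT y FROM u)",
]
prompt = build_prompt("How many users signed up?", "users(id, name)",
                      derive_hints(log))
```

The resulting `prompt` would be sent in one LLM call, which is the efficiency point the abstract makes: hint generation replaces multi-step decomposition rather than adding calls.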
Subjects: Machine Learning (cs.LG); Databases (cs.DB)
Cite as: arXiv:2506.18916 [cs.LG]
  (or arXiv:2506.18916v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2506.18916
arXiv-issued DOI via DataCite

Submission history

From: Ganesh Parab [view email]
[v1] Wed, 11 Jun 2025 12:07:55 UTC (180 KB)

