
All Accepted Papers

Do Agents Need to Plan Step-by-Step? Rethinking Planning Horizon in Data-Centric Tool Calling

Naoki Otani (Megagon Labs), Nikita Bhutani (Megagon Labs), Hannah Kim (Megagon Labs), Dan Zhang (Megagon Labs), Estevam Hruschka (Megagon Labs)

Architectural Patterns & Composition

Abstract

Explicit planning is a critical capability for LLM-based agents solving complex data-centric tasks, which require precise tool calling to interact with external data. Existing strategies fall into two paradigms based on their planning horizon: (1) full-horizon (FH), which generates a complete plan before execution, and (2) single-step horizon (SH), which interleaves action with incremental reasoning and observation. While the research community has largely converged on SH planning as the de facto standard under the assumption that eager execution monitoring is necessary for adaptability, we challenge this default adoption. We isolate planning horizon as the key architectural feature and systematically analyze the effects of topological complexity and tool robustness on both paradigms. Our experiments across Knowledge Base Question Answering (KBQA) and Multi-hop QA validate our hypothesis: FH planning with lazy replanning achieves performance parity with SH across varying depths, breadths, and robustness levels, while reducing token consumption by 2–3×. These findings suggest that for well-defined data-centric tasks, eager step-wise monitoring is often unnecessary, and full-horizon planning with on-demand replanning offers a more efficient default.
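The two planning horizons contrasted in the abstract can be sketched as control loops. This is an illustrative sketch only, not the authors' implementation: the planner callables, the tool executor, and the failure handling are hypothetical stand-ins for an LLM-backed agent, and the lazy-replanning loop here simply restarts the (re-generated) plan from the top.

```python
# Hypothetical sketch of the two planning-horizon paradigms.
# `plan_next`, `plan_all`, and `execute` stand in for LLM and tool calls.

def run_single_step(plan_next, execute, goal, max_steps=10):
    """SH: eager monitoring -- one planner call before every action."""
    history = []
    for _ in range(max_steps):
        step = plan_next(goal, history)   # planner call per step
        if step is None:                  # planner signals completion
            break
        ok, obs = execute(step)
        history.append((step, ok, obs))
    return history

def run_full_horizon(plan_all, execute, goal, max_replans=2):
    """FH: plan once up front; replan only on failure (lazy replanning)."""
    history = []
    plan = plan_all(goal, history)        # single up-front planner call
    replans, i = 0, 0
    while i < len(plan):
        ok, obs = execute(plan[i])
        history.append((plan[i], ok, obs))
        if not ok and replans < max_replans:
            # On-demand replan: assume plan_all returns a fresh plan
            # conditioned on the failure history (simplification).
            plan = plan_all(goal, history)
            replans, i = replans + 1, 0
        else:
            i += 1
    return history
```

With a well-behaved toolset, the FH loop issues one planner call per task while the SH loop issues one per step, which is the intuition behind the reported token savings.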

ACM CAIS 2026 Sponsors