Generating Expressive and Customizable Evals for Timeseries Data Analysis Agents with AgentFuel
Aadyaa Maddi (Carnegie Mellon University), Prakhar Naval (Rockfish), Deepti Mande (Rockfish), Muckai Girish (Rockfish), Shane Duan (Rockfish), Vyas Sekar (CMU/Rockfish)
Evaluation & Benchmarking
Abstract
Across many domains (e.g., IoT, observability, telecommunications, cybersecurity), there is growing adoption of conversational data analysis agents that let users “talk to your data” to extract insights. Such data analysis agents operate on timeseries data models; e.g., measurements from sensors, or events monitoring user clicks and actions in product analytics. We evaluate popular data analysis agents (both open-source and proprietary) on domain-specific data and query patterns of interest, and find that they fail on domain-relevant queries. We observe two key expressivity gaps in existing evals: domain-customized datasets and domain-specific query patterns. To enable practitioners in such domains to generate customized and expressive evals for timeseries data agents, we present AgentFuel. AgentFuel helps domain experts quickly create customized evals that perform end-to-end functional tests. We show that AgentFuel’s benchmarks expose key directions for improvement in existing data agent frameworks. We also present anecdotal evidence that using AgentFuel can improve the performance of agents (e.g., via GEPA).