
All Accepted Demos

Skilled AI Agents for Embedded and IoT Systems Development

Yiming Li (Duke University), Yuhan Cheng (Duke University), Mingchen Ma (Duke University), Yihang Zou (Duke University), Ningyuan Yang (Duke University), Wei Cheng (Duke University), Hai "Helen" Li (Duke University), Yiran Chen (Duke University), Tingjun Chen (Duke University)

Evaluation & Benchmarking · Architectural Patterns & Composition

Summary

A skills-based agentic framework for hardware-in-the-loop embedded/IoT development with a benchmark spanning 3 platforms, 23 peripherals, and 42 tasks validated on real hardware.

Description

Large language models (LLMs) and agentic systems have shown promise for automated software development, but applying them to hardware-in-the-loop (HIL) embedded and Internet-of-Things (IoT) systems remains challenging due to the tight coupling between software logic and physical hardware behavior. Code that compiles successfully may still fail when deployed on real devices because of timing constraints, peripheral initialization requirements, or hardware-specific behaviors. To address this challenge, we introduce a skills-based agentic framework for HIL embedded development together with IoT-SkillsBench, a benchmark designed to systematically evaluate AI agents in real embedded programming environments. IoT-SkillsBench spans three representative embedded platforms, 23 peripherals, and 42 tasks across three difficulty levels, where each task is evaluated under three agent configurations (no-skills, LLM-generated skills, and human-expert skills) and validated through real hardware execution. Across 378 hardware-validated experiments, we show that concise human-expert skills encoding structured expert knowledge enable near-perfect success rates across all three platforms.
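The evaluation matrix implied by the numbers above (42 tasks × 3 agent configurations × a factor of 3, consistent with the three platforms or with repeated runs) can be enumerated as a minimal sketch. All names below are illustrative assumptions, not identifiers from the benchmark itself:

```python
# Hypothetical sketch of the IoT-SkillsBench evaluation matrix.
# Assumption: the factor of 3 that turns 126 (task, config) pairs into
# 378 experiments corresponds to the three platforms (it could equally
# be repeated runs; the abstract does not say).
from itertools import product

TASKS = range(42)  # 42 tasks across three difficulty levels
CONFIGS = ["no-skills", "llm-generated-skills", "human-expert-skills"]
PLATFORMS = ["platform-a", "platform-b", "platform-c"]  # placeholder names

experiments = list(product(TASKS, CONFIGS, PLATFORMS))
print(len(experiments))  # 378 hardware-validated experiments
```

Each tuple would then drive one HIL run: flash the generated firmware to the target board, exercise the peripheral, and record pass/fail from observed hardware behavior rather than from compilation alone.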

ACM CAIS 2026 Sponsors