Exploring and Developing a Pre-Model Safeguard with Draft Models
Hongyu Cai (Purdue University), Arjun Arunasalam (Florida International University), Yiming Liang (Purdue University), Antonio Bianchi (Purdue University), Z. Berkay Celik (Purdue University)
Security & Privacy
Abstract
Large Language Model (LLM) providers have implemented alignment techniques to ensure that LLMs adhere to human values. Despite these efforts, such alignment methods remain vulnerable to jailbreak attacks, which aim to elicit unaligned responses from LLMs. To mitigate this, pre-model and post-model guards are employed. Pre-model guards audit the safety of prompts before invoking the target model. However, relying solely on the prompt often leads to high false-negative rates (i.e., jailbreak attacks go undetected). Post-model guards address this issue by auditing both the user prompt and the target model’s response. However, they incur a high computational cost, including increased token usage and processing time, because they operate after target model inference. In this paper, we introduce a safeguard design that leverages the transferability of jailbreak attacks to enforce prompt safety before target model inference. We first conduct a systematic study of jailbreak transferability, particularly from LLMs to small language models (SLMs). We extensively evaluate transferability using three representative jailbreak generation systems, six SLMs, and three LLMs. Through these experiments, we identify key factors influencing transferability. Building on these insights, we observe that responses from smaller draft models reflect the safety implications of responses from large target models, i.e., given a jailbreak prompt constructed for an LLM, an SLM is likely to also be triggered into generating an unaligned response. Based on this observation, our safeguard design uses speculative inference with SLMs to generate a set of draft responses, then feeds the original prompt and these drafts to existing guards to predict prompt safety. We demonstrate that this design reduces the false-negative rate of pre-model guards and offers a low prompt-to-response-time alternative to post-model guards. Compared to pre-model guards, our safeguard design reduces the false-negative rate on jailbreak prompts by an average of 32.4% (σ = 32.92%). Relative to post-model guards, it reduces the false-negative rate by an average of 17.38% (σ = 44.65%) and reduces prompt-to-response time by 97.07% (measured with Llama-3-70B-Instruct-AWQ). For benign prompts, our safeguard design matches the 98% accuracy of both pre- and post-model guards with a minimal latency increase of 0.59%. Notice: This paper contains examples of harmful language.
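
To make the pipeline concrete, the sketch below illustrates the design under stated assumptions: SLMs speculatively generate draft responses, and an existing guard then audits the prompt together with each draft, all before the large target model is invoked. The model names, the guard's input and output formats, and the is_unsafe helper are illustrative assumptions, not the authors' implementation.

# A minimal sketch of the draft-model safeguard, assuming Hugging Face
# transformers-style APIs. Model choices and guard formats are hypothetical.
from transformers import pipeline

# Small draft models that stand in for the large target model (assumed choices).
DRAFT_MODELS = ["Qwen/Qwen2-0.5B-Instruct", "TinyLlama/TinyLlama-1.1B-Chat-v1.0"]
# Any existing prompt/response guard can fill this role (assumed choice).
GUARD_MODEL = "meta-llama/Llama-Guard-3-8B"

draft_generators = [pipeline("text-generation", model=m) for m in DRAFT_MODELS]
guard = pipeline("text-generation", model=GUARD_MODEL)

def is_unsafe(verdict: str) -> bool:
    # Llama-Guard-style models emit a "safe"/"unsafe" verdict; this parsing
    # is an assumption about the guard's output format.
    return "unsafe" in verdict.lower()

def pre_model_safeguard(prompt: str) -> bool:
    """Return True if the prompt should be blocked before target-model inference."""
    for draft in draft_generators:
        # Cheap speculative response from a small language model (SLM).
        draft_response = draft(prompt, max_new_tokens=128,
                               return_full_text=False)[0]["generated_text"]
        # Audit prompt + draft response with an existing guard, as a
        # post-model guard would, but without invoking the large target LLM.
        verdict = guard(f"User: {prompt}\nAgent: {draft_response}",
                        max_new_tokens=16,
                        return_full_text=False)[0]["generated_text"]
        if is_unsafe(verdict):
            return True  # at least one draft elicited an unsafe completion
    return False

Because the drafts come from SLMs, the guard sees a response-conditioned signal, as a post-model guard would, yet the expensive target model is never invoked for blocked prompts; benign prompts pay only the small draft-generation overhead.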