Nudging

Inference-time Alignment of LLMs via Guided Decoding


Department of Computer Science, University of California Irvine

Nudging employs a small aligned model to generate nudging tokens to steer the large base model's output toward desired directions when the base model's uncertainty is high.


Overview

Large language models (LLMs) require alignment—such as instruction-tuning or reinforcement learning from human feedback—to effectively and safely follow user instructions. The conventional training pipelines require separate alignment tuning for every model size within each model family. This practice leads to substantial computational overhead, hindering the rapid iteration and development of new model families.

In this work, we propose nudging, a simple, plug-and-play, and training-free algorithm that aligns any base model at inference time using a small aligned model. Nudging is motivated by recent findings that alignment primarily alters the model's behavior on a small subset of stylistic tokens, such as "Sure" or "Thank". We find that base models are significantly more uncertain when generating these alignment-related tokens. Leveraging this observation, nudging employs a small aligned model to generate nudging tokens that steer the large base model's output in desired directions whenever the base model's uncertainty is high.
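To make the procedure concrete, here is a minimal sketch of the decoding loop. The `ToyLM` stubs below stand in for real LLM APIs, and both the top-1-probability threshold and the one-word-at-a-time handoff are illustrative assumptions rather than the paper's exact settings. Note that the handoff happens in plain text, one whitespace-delimited word at a time, which is one way to let models with different tokenizers collaborate.

```python
# A minimal sketch of nudging, assuming toy stand-in models and an
# illustrative top-1-probability threshold (not the paper's exact settings).
from dataclasses import dataclass


@dataclass
class ToyLM:
    """Toy stand-in for an LLM API: maps a prefix to (next_word, top-1 prob)."""
    script: dict[str, tuple[str, float]]

    def next_word(self, prefix: str) -> tuple[str, float]:
        return self.script.get(prefix, ("<eos>", 1.0))


def nudging_decode(base: ToyLM, aligned: ToyLM, prompt: str,
                   threshold: float = 0.4, max_words: int = 50) -> str:
    """Decode with the base model, but whenever its top-1 probability falls
    below `threshold`, take the next word from the small aligned model."""
    text = prompt
    for _ in range(max_words):
        word, top1 = base.next_word(text)
        if top1 < threshold:                   # base model is uncertain here,
            word, _ = aligned.next_word(text)  # so insert a nudging word
        if word == "<eos>":
            break
        text += " " + word
    return text


# The base model is unsure how to open the answer, so the aligned model
# nudges with "Sure,"; the base model then confidently finishes the content.
base = ToyLM({
    "Q: What is 2+2? A:": ("The", 0.15),           # uncertain stylistic opener
    "Q: What is 2+2? A: Sure,": ("2+2=4.", 0.92),  # confident on content
})
aligned = ToyLM({"Q: What is 2+2? A:": ("Sure,", 0.97)})
print(nudging_decode(base, aligned, "Q: What is 2+2? A:"))
# -> Q: What is 2+2? A: Sure, 2+2=4.
```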

We evaluate the effectiveness of nudging across 3 model families and 13 tasks, covering reasoning, general knowledge, instruction following, and safety benchmarks. Without any additional training, nudging a large base model with a 7x-14x smaller aligned model achieves zero-shot performance comparable to, and sometimes surpassing, that of large aligned models. For example, nudging OLMo-7b with OLMo-1b-instruct—affecting less than 9% of tokens—achieves a 10% absolute improvement on GSM8K over OLMo-7b-instruct. Unlike prior inference-time tuning methods, nudging enables off-the-shelf collaboration between model families. For instance, nudging Gemma-2-27b with Llama-2-7b-chat outperforms Llama-2-70b-chat on various tasks. Overall, this work introduces a simple yet powerful approach to token-level model collaboration, offering a modular solution to LLM alignment.


Two Key Insights into Alignment

Previous work finds that the token distributions of base models shift significantly after alignment only at a small set of output positions. By identifying (1) where the base and aligned models would disagree (alignment-related positions) and (2) what the aligned model would generate at these positions (alignment tokens), we can insert these tokens during decoding to nudge a base model to behave like an aligned model. We therefore study two questions: where to nudge and what to nudge. Specifically, we find (see the measurement sketch after this list):

  1. Base models are significantly more uncertain at alignment-related positions.
  2. Aligned models of different sizes within the same family tend to agree on alignment-related positions.
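
As a concrete, hedged illustration of the first finding, the sketch below runs the same text through a base model and its aligned counterpart, flags positions where their greedy next tokens disagree, and reports the base model's top-1 probability there. The model names are examples of a base/aligned pair that shares a tokenizer; the paper's exact measurement protocol may differ.

```python
# Sketch: locate positions where a base model and its aligned counterpart
# disagree on the greedy next token, and inspect the base model's top-1
# probability there. Model names are examples, not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "allenai/OLMo-7B-hf"              # example base model
aligned_name = "allenai/OLMo-7B-Instruct-hf"  # example aligned counterpart

tok = AutoTokenizer.from_pretrained(base_name)
base = AutoModelForCausalLM.from_pretrained(base_name)
aligned = AutoModelForCausalLM.from_pretrained(aligned_name)

text = "Question: What is 2 + 2? Answer: Sure, let's think step by step."
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    base_logits = base(ids).logits[0]        # shape: [seq_len, vocab_size]
    aligned_logits = aligned(ids).logits[0]

base_probs = base_logits.softmax(dim=-1)
for t in range(ids.shape[1] - 1):            # position t predicts token t + 1
    b_id = base_logits[t].argmax().item()
    a_id = aligned_logits[t].argmax().item()
    if b_id != a_id:  # an alignment-related position: the models disagree
        print(f"pos {t:2d}: aligned picks {tok.decode([a_id])!r}, "
              f"base top-1 prob = {base_probs[t].max().item():.2f}")
```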



Experiments

To demonstrate the effectiveness of nudging, we evaluate it across three different model families: Llama-2, Gemma-2, and OLMo. We consider 13 tasks, covering reasoning, general knowledge, instruction following, and safety benchmarks. We find that:

  • Nudging a large base model with a 7x-14x smaller aligned model achieves zero-shot performance comparable to, and sometimes surpassing, that of large aligned models.
  • Nudging is particularly effective on math and symbolic reasoning tasks.
  • Nudging works effectively for models from different families.

Model      GSM   SVP   MA    MM    Arc   CS    ST    Date  SP    CF    LLC   Avg.

Llama-2
70b        10.0  11.7   4.4  26.6  78.3  42.2  62.7  44.7  42.1  47.7   1.3  33.8
7b-chat    25.5  43.3  62.8  40.9  54.1  52.2  50.4  33.9  51.7  45.0   7.3  42.5
70b-chat   48.5  64.0  63.9  57.4  77.6  70.3  58.9  48.8  64.9  38.3  31.3  56.7
Nudging    46.2  63.3  71.1  57.4  75.9  59.2  60.0  47.7  59.5  57.4  38.7  57.9

Gemma-2
27b         6.7   8.3   7.0  17.7  24.2  16.0  21.3  12.5   7.9   7.6   6.7  12.4
2b-it      63.8  72.3  92.2  57.5  78.6  70.0  53.4  30.4  56.2  33.9   4.7  55.7
27b-it     85.4  86.7  99.4  75.1  92.7  71.7  70.6  69.6  74.3  11.3  82.0  74.4
Nudging    74.6  77.0  92.2  66.8  88.9  69.8  62.3  49.9  63.0  42.7  86.0  70.3

OLMo
7b         18.8  16.7  35.0  22.5  37.1  71.0  40.6  15.2  38.5  22.4   0.0  28.9
1b-it      10.2  12.0  34.4  31.6  37.1  56.6  64.2   4.3  44.5  49.1   0.0  31.3
7b-it      14.1  22.7  32.8  49.8  60.3  70.9  61.3   9.8  64.6  44.9   0.0  39.2
Nudging    24.2  30.7  71.1  41.3  47.0  68.5  62.9   6.0  49.9  47.1   0.0  40.8

Zero-shot performance on standard benchmarks, where nudging uses the smaller aligned model (second row of each family) to nudge the larger base model (first row). Nudging a large base model with a much smaller aligned model performs on par with the aligned version of the large base model, and is particularly effective on the math tasks, GSM (GSM8K), SVP (SVAMP), and MA (MultiArith), and the symbolic reasoning tasks, CF (Coin Flip) and LLC (Last Letter Concatenation). The remaining columns are MM (MMLU), Arc (ARC), CS (CommonsenseQA), ST (StrategyQA), Date (Date Understanding), SP (Sports Understanding), and Avg. (average across tasks).

Model                                     GSM8K  MMLU
Gemma-2-27b                                 7.0  16.5
OLMo-7b-it                                 13.0  49.0
Nudging (Gemma-2-27b + OLMo-7b-it)         41.0  62.5
Llama-2-7b-chat                            22.0  39.5
Nudging (Gemma-2-27b + Llama-2-7b-chat)    65.0  67.0
Llama-2-70b-chat                           52.0  53.0

Zero-shot performance of different models. Nudging works effectively for models from different families.


Explorer

Explore the base models' outputs and the nudging outputs across different tasks and model families.

One Future Direction: Nudging for Advanced Reasoning

Several recent works have shown the importance of "reasoning tokens" such as "Hmm" or "Wait" for advanced reasoning models like DeepSeek-R1. Nudging builds on a very similar insight, and one promising future direction is to use nudging to trigger advanced reasoning behaviors such as self-reflection or exploring multiple solution paths. We believe this will require training a nudging model to decide where to nudge, going beyond simply relying on the base model's uncertainty. Below, we show an example where directly applying uncertainty-based nudging already triggers such reasoning behavior.

Question: A company's HR hires 20 new employees every month to add to its total workforce. If the company's initial employee number is 200, and each employee is paid a $4000 salary per month, calculate the total amount of money the company pays to its employees after three months? Please reason step by step, and put your final answer within \boxed{}.
Qwen-2.5-Math-7B: To solve this problem, we need to calculate the total number of employees after three months and then compute the total amount of money the company pays to its employees.

1. **Initial number of employees**: 200
2. **Number of new employees hired each month**: 20
3. **Salary per employee per month**: $4000
4. **Number of months**: 3

Let's break down the steps:

1. Calculate the total number of employees after three months.
2. Calculate the total amount of money paid to employees each month.
3. Sum up the total amount paid over the three months.

We'll use Python to ensure the calculations are accurate.

```python
# Initial number of employees
initial_employees = 200
# Number of new employees hired each month
new_employees_per_month = 20
# Salary per employee per month
salary_per_employee = 4000
# Number of months
months = 3

# Calculate the total number of employees after three months
total_employees_after_three_months = initial_employees + (new_employees_per_month * months)

# Calculate the total amount of money paid to employees each month
total_salary_per_month = total_employees_after_three_months * salary_per_employee

# Calculate the total amount of money paid over the three months
total_salary_over_three_months = total_salary_per_month * months

print(total_salary_over_three_months)
```
```output
3120000
```

The total amount of money the company pays to its employees after three months is \(\boxed{3120000}\).
Nudging with DeepSeek-R1-Distill-Qwen-7B: Okay, so I need to figure out how many employees the company will have after three months and then calculate the total salary paid to all employees. Let's break it down step by step.

First, the company starts with 200 employees. Every month, 20 new employees are hired. So, after one month, the company will have 200 + 20 = 220 employees. After two months, the company will have 220 + 20 = 240 employees. And after three months, the company will have 240 + 20 = 260 employees.

Wait, hold on. Is that correct? So, after three months, the company will have 260 employees. But I need to calculate the total salary paid to all employees after three months. Each employee is paid a $4000 salary per month. So, the total salary paid to all employees after three months will be 260 employees * $4000 per employee = $1,040,000.

But wait, is that the correct answer? Let's double-check. Alternatively, maybe I should calculate the total salary paid to all employees each month and then add them up. So, in the first month, the company will pay 200 employees * $4000 per employee = $800,000. In the second month, the company will pay 220 employees * $4000 per employee = $880,000. And in the third month, the company will pay 240 employees * $4000 per employee = $960,000. Adding these up, the total salary paid to all employees after three months will be $800,000 + $880,000 + $960,000 = $2,640,000.

Wait, now I'm confused. Which one is the correct answer? Let's see. If I calculate the total salary paid to all employees after three months by adding up the total salary paid to all employees each month, I get $2,640,000. But if I calculate the total salary paid to all employees after three months by multiplying the number of employees after three months by the salary per employee, I get $1,040,000. Which one is the correct answer?

Hmm, I think I need to clarify the question. The question asks for the total amount of money the company pays to its employees after three months. So, I need to calculate the total salary paid to all employees after three months. That means I need to calculate the total salary paid to all employees each month and then add them up. So, the total salary paid to all employees after three months will be $800,000 + $880,000 + $960,000 = $2,640,000...

For the nudged answer, the colored tokens are generated by the nudging model, and the black text is from the base model.
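
As a quick sanity check on the transcript above, the month-by-month payroll the nudged model finally settles on is easy to verify. This snippet is ours, added for verification; it is not part of either model's output.

```python
# Verify the arithmetic from the example: pay everyone on staff each month,
# with 20 new hires joining the payroll in the following month.
initial, hires_per_month, salary, months = 200, 20, 4000, 3

headcount, total = initial, 0
for _ in range(months):
    total += headcount * salary   # months 1-3 pay 200, 220, 240 employees
    headcount += hires_per_month

print(total)  # 2640000, matching the nudged answer of $2,640,000
```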

Citation

@misc{fei2025nudginginferencetimealignmentllms,
      title={Nudging: Inference-time Alignment of LLMs via Guided Decoding}, 
      author={Yu Fei and Yasaman Razeghi and Sameer Singh},
      year={2025},
      eprint={2410.09300},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.09300}, 
}