#RE2 prompting enhances LLM reasoning abilities

Andrea Belvedere
2 min read · Sep 23, 2024

Enhancing LLM Reasoning with RE2

RE2 is a prompting technique that asks a language model to “reread” the question before providing an answer, yielding a significant improvement in reasoning capabilities. LLMs (Large Language Models) often struggle with complex reasoning tasks because they tend to process a question in a single pass, without revisiting its context.

The problem arises when these models encounter questions with multiple layers of complexity, where important information can be overlooked. The lack of “rereading” often leads to misinterpretations or inaccurate answers. Traditional prompting methods, such as Chain-of-Thought (CoT), suffer from this limitation: they follow a linear flow without revisiting the key elements of the question.

Enhancing LLM reasoning with RE2: an innovative approach

The #RE2 (Re-Reading) prompting technique proposed in this study is a simple yet effective approach in which the model is asked to reread the question before initiating the reasoning process. This is done through a structured prompt that includes (a minimal sketch in code follows the list):

  • The original question
  • An explicit invitation to reread the question
  • The instruction “Think step by step.”
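
In code, building such a prompt takes only a few lines. The sketch below paraphrases the template described in the paper (the exact wording of the reread cue may vary), and the `call_llm` mentioned in the comment is a placeholder for whatever completion API you use:

```python
def build_re2_prompt(question: str) -> str:
    """Build a Re-Reading (RE2) prompt.

    The question is stated, repeated after an explicit reread cue,
    and followed by a Chain-of-Thought trigger.
    """
    return (
        f"Q: {question}\n"
        f"Read the question again: {question}\n"
        "A: Let's think step by step."
    )

# `call_llm` is a placeholder for your completion client:
# answer = call_llm(build_re2_prompt("A farmer has 12 sheep ..."))
```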

This simple scheme has proven effective in improving the model’s understanding of the context and increasing the accuracy of its responses. While traditional prompting methods like CoT tend to guide the model along a predetermined path of thought, #RE2 introduces a dimension of revision and reflection.

The contrast between the linear approach (CoT) and the iterative one (#RE2) highlights how the latter can achieve better results in complex reasoning tasks.
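
To make the contrast concrete, here is how the two prompts differ on the same toy question (the classic tennis-ball example, used here purely for illustration):

```python
question = (
    "Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?"
)

# Standard Chain-of-Thought: the question is read once, then reasoning starts.
cot_prompt = f"Q: {question}\nA: Let's think step by step."

# RE2: the same question is repeated before the CoT trigger,
# forcing a second pass over the context.
re2_prompt = (
    f"Q: {question}\n"
    f"Read the question again: {question}\n"
    "A: Let's think step by step."
)

print(cot_prompt, re2_prompt, sep="\n---\n")
```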

However, as noted, #RE2 can become less practical with excessively complex prompts or prompts composed of many micro-instructions, where rereading the full text can add overhead rather than clarity.

Study results on improving LLM reasoning with RE2

The study’s data shows that #RE2 led to significant improvements in LLM performance, particularly in arithmetic, logical, and symbolic reasoning tasks. In several benchmarks, models using #RE2 prompting outperformed those using traditional prompting.
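
As a rough sketch of how such a comparison could be reproduced on your own data (the `call_llm` stub and the one-item dataset below are placeholders, not the paper's benchmark harness):

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your completion API here")

# (question, gold answer) pairs; replace with a real benchmark such as GSM8K.
dataset = [
    ("Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
     "How many balls does he have now?", "11"),
]

def accuracy(build_prompt) -> float:
    hits = 0
    for question, gold in dataset:
        answer = call_llm(build_prompt(question))
        hits += gold in answer  # crude check; real evals parse the final answer
    return hits / len(dataset)

# Compare plain CoT against the RE2 prompt builder from the earlier sketch:
# print("CoT:", accuracy(lambda q: f"Q: {q}\nA: Let's think step by step."))
# print("RE2:", accuracy(build_re2_prompt))
```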

#RE2 prompting represents a step forward in the ability of LLMs to tackle complex reasoning tasks. Its strength lies in its simplicity and in its ability to revisit the context, producing more accurate and coherent answers.

However, like any technique, it has its limits and should be applied judiciously, especially in cases where prompt complexity might nullify the rereading benefits.

The key to successful prompting lies in balancing accuracy with efficiency. #RE2 offers an interesting perspective on how to mimic the iterative nature of human reasoning, but its optimal application requires further research and adaptation based on different usage contexts.

Ref: Xu et al., “Re-Reading Improves Reasoning in Large Language Models”, arXiv:2309.06275, https://arxiv.org/pdf/2309.06275

Andrea Belvedere

Tech Writer at New Technology, Blockchain & AI. From Italy.