Alice in Wonderland: Simple Tasks Showing Complete Reasoning Breakdown in State-Of-the-Art Large Language Models

id:

2406.02061

Authors:

Marianna Nezhurina, Lucia Cipolina-Kun, Mehdi Cherti, Jenia Jitsev

Published:

2024-06-04

arXiv:

https://arxiv.org/abs/2406.02061

PDF:

https://arxiv.org/pdf/2406.02061

DOI:

N/A

Journal Reference:

N/A

Primary Category:

cs.LG

Categories:

cs.LG, cs.AI, cs.CL

Comment:

v2.01. Minor edits. Further experiments on various AIW problem variations. AIW “Alice Female Power Boost”, AIW Extension (AIW Ext). Including recent Claude 3.5 Sonnet and Qwen 2 72B Instruct results

github_url:

https://github.com/LAION-AI/AIW

abstract

Large Language Models (LLMs) are often described as instances of foundation models - that is, models that transfer strongly across various tasks and conditions in a few-shot or zero-shot manner, while exhibiting scaling laws that predict function improvement when increasing the pre-training scale. These claims of excelling across diverse functions and tasks rely on measurements taken over various sets of standardized benchmarks showing high scores for such models. We demonstrate here a dramatic breakdown of function and reasoning capabilities of state-of-the-art models trained at the largest available scales, which claim strong function, using a simple, short, conventional common-sense problem (the AIW problem) formulated in concise natural language and easily solvable by humans. The breakdown is dramatic, as models show strong fluctuations across even slight problem variations that should not affect problem solving, while also expressing strong overconfidence in wrong solutions, often backed up by plausible-sounding, explanation-like confabulations. Various standard interventions in an attempt to get the right solution, like various types of enhanced prompting, or urging the models to reconsider their wrong solutions by multi-step re-evaluation, fail. We take these initial observations to the scientific and technological community to stimulate an urgent re-assessment of the claimed capabilities of the current generation of LLMs. Such re-assessment also requires common action to create standardized benchmarks that would allow proper detection of such basic reasoning deficits, which evidently manage to remain undiscovered by current state-of-the-art evaluation procedures and benchmarks. Code for reproducing the experiments in the paper and raw experiment data can be found at https://github.com/LAION-AI/AIW

premise

outline

quotes

notes

summary

This paper investigates the reasoning capabilities of state-of-the-art Large Language Models (LLMs) by introducing a simple common-sense problem, the “Alice in Wonderland” (AIW) problem. The authors demonstrate a significant breakdown in the reasoning abilities of these models, even with slight variations in the problem’s wording. The LLMs often exhibit overconfidence in their incorrect answers, providing plausible-sounding but nonsensical explanations. The study highlights the inadequacy of current benchmark evaluations in capturing these fundamental reasoning deficits and calls for a re-assessment of LLM capabilities and the development of more robust benchmarks.
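For concreteness, the AIW problem family follows a template of the form "Alice has N brothers and she also has M sisters. How many sisters does Alice's brother have?"; the correct answer is M + 1, since a brother's sisters are Alice's sisters plus Alice herself. The sketch below illustrates this template and its ground truth; the exact wording and the example (N, M) instantiations are approximations rather than the authors' verbatim prompt set.

```python
# Illustrative sketch of the AIW problem template described in the paper.
# The prompt wording and the (N, M) pairs below are approximations, not a
# verbatim copy of the authors' prompt variations.

def aiw_prompt(n_brothers: int, m_sisters: int) -> str:
    """Build one AIW problem instance from the template."""
    return (
        f"Alice has {n_brothers} brothers and she also has {m_sisters} sisters. "
        "How many sisters does Alice's brother have?"
    )

def aiw_ground_truth(n_brothers: int, m_sisters: int) -> int:
    """Correct answer: a brother's sisters are Alice's sisters plus Alice herself."""
    return m_sisters + 1

if __name__ == "__main__":
    # Example instantiations (assumed for illustration).
    for n, m in [(3, 6), (4, 2), (1, 4)]:
        print(aiw_prompt(n, m), "->", aiw_ground_truth(n, m))
```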

Brief overview

The research explores the limitations of current state-of-the-art LLMs in solving simple common-sense reasoning problems. Using the AIW problem, the authors demonstrate a dramatic failure of these models, which contrasts sharply with their high scores on standard benchmarks. Various prompting techniques and model interventions fail to improve performance, revealing a lack of robustness and highlighting the need for improved evaluation metrics.

Key points

  • The AIW problem, a simple common-sense question easily solvable by humans, reveals severe reasoning deficits in state-of-the-art LLMs.

  • Even slight variations in the AIW problem significantly impact model performance, demonstrating a lack of robustness (see the scoring sketch after this list).

  • LLMs often express strong overconfidence in their incorrect answers, providing plausible but nonsensical explanations (confabulations).

  • Standard benchmark evaluations fail to adequately assess the reasoning capabilities of LLMs, masking significant weaknesses.

  • The observed failure is not easily addressed through standard interventions like enhanced prompting or iterative refinement.

  • Larger language models generally perform better than smaller ones on this task, but still exhibit significant failures.

  • The authors propose creating standardized benchmarks that better detect basic reasoning deficits.
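One simple way to make the robustness measurement concrete is to score a model's final numeric answer against the M + 1 ground truth across problem variations and repeated trials, in the spirit of the paper's correct response rate. The sketch below assumes a hypothetical `query_model` callable standing in for any chat-completion client, and uses a naive last-number extraction rule that only approximates the authors' answer-extraction protocol.

```python
import re
from typing import Callable

# Hypothetical model interface: takes a prompt string, returns the model's reply.
# Any real client (hosted API or local pipeline) could be plugged in here.
QueryFn = Callable[[str], str]

def extract_final_number(reply: str) -> int | None:
    """Naive answer extraction: take the last integer mentioned in the reply."""
    numbers = re.findall(r"\d+", reply)
    return int(numbers[-1]) if numbers else None

def correct_response_rate(query_model: QueryFn,
                          variations: list[tuple[int, int]],
                          trials_per_variation: int = 5) -> float:
    """Fraction of trials where the extracted answer equals M + 1."""
    correct = total = 0
    for n_brothers, m_sisters in variations:
        prompt = (f"Alice has {n_brothers} brothers and she also has "
                  f"{m_sisters} sisters. How many sisters does Alice's brother have?")
        for _ in range(trials_per_variation):
            answer = extract_final_number(query_model(prompt))
            correct += int(answer == m_sisters + 1)
            total += 1
    return correct / total
```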

Primary themes

  • Limitations of LLMs: The primary theme is the exposure of significant limitations in the reasoning capabilities of current LLMs, despite their success on other tasks.

  • Benchmark inadequacy: The paper strongly critiques the inadequacy of current benchmarks for evaluating reasoning capabilities.

  • Need for improved evaluation: The authors emphasize the urgent need for improved evaluation methods and benchmarks that better capture the nuances of reasoning abilities in LLMs.

  • Robustness and generalization: The lack of robustness and generalization in LLMs is a key focus, showcasing how small changes in problem formulation can lead to drastic performance drops.

  • Overconfidence: The overconfidence of LLMs in their incorrect answers, often accompanied by fabricated explanations, is highlighted as a significant issue.