Abstract
RiddleBench, a benchmark of 1,737 puzzles, reveals fundamental weaknesses in state-of-the-art language models, including hallucination cascades and poor self-correction, highlighting the need for more robust reasoning capabilities.
Large Language Models have demonstrated strong performance on many established reasoning benchmarks. However, these benchmarks primarily evaluate structured skills like quantitative problem-solving, leaving a gap in assessing the flexible, multifaceted reasoning abilities that are central to human intelligence. These abilities require integrating logical deduction with spatial awareness and constraint satisfaction, which current evaluations do not measure well. To address this, we introduce RiddleBench, a benchmark of 1,737 challenging puzzles in English designed to probe these core reasoning capabilities. Evaluating state-of-the-art models on RiddleBench reveals fundamental weaknesses. Even top proprietary models like Gemini 2.5 Pro, o3, and Claude 4 Sonnet achieve accuracy only slightly above 60% (60.30%, 63.37%, and 63.16%, respectively). Further analysis exposes deeper failures, including hallucination cascades (accepting flawed reasoning from other models) and poor self-correction driven by a strong self-confirmation bias. Their reasoning is also fragile, with performance degrading significantly when constraints are reordered or irrelevant information is introduced. RiddleBench functions both as a diagnostic tool for these issues and as a resource for guiding the development of more robust and reliable language models.
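To make the two robustness probes mentioned in the abstract concrete, below is a minimal illustrative sketch, not the authors' released code, of how constraint reordering and irrelevant-clue injection could be applied to a puzzle before scoring. The puzzle schema (`constraints`, `question`, `answer` fields), the `ask_model` callable, and the exact-match scoring rule are all assumptions for illustration; the paper may use a different format and metric.

```python
"""Hypothetical sketch of the robustness probes described in the abstract:
(1) reordering a puzzle's constraints and (2) injecting an irrelevant clue.
Field names and scoring are assumptions, not RiddleBench's actual schema."""
import random
from typing import Callable, Dict, List

Puzzle = Dict[str, object]  # assumed shape: {"constraints": [...], "question": str, "answer": str}


def build_prompt(constraints: List[str], question: str) -> str:
    # Render the clues as a bulleted list followed by the question.
    clues = "\n".join(f"- {c}" for c in constraints)
    return f"Clues:\n{clues}\n\nQuestion: {question}\nAnswer with a single word or name."


def reorder_variant(puzzle: Puzzle, seed: int = 0) -> str:
    # Same logical content, different surface order of the constraints.
    constraints = list(puzzle["constraints"])
    random.Random(seed).shuffle(constraints)
    return build_prompt(constraints, puzzle["question"])


def distractor_variant(puzzle: Puzzle, irrelevant_clue: str) -> str:
    # Append one clue that does not affect the solution.
    return build_prompt(list(puzzle["constraints"]) + [irrelevant_clue], puzzle["question"])


def accuracy(puzzles: List[Puzzle],
             make_prompt: Callable[[Puzzle], str],
             ask_model: Callable[[str], str]) -> float:
    # Exact-match scoring against the gold answer (an assumed metric).
    correct = 0
    for p in puzzles:
        prediction = ask_model(make_prompt(p)).strip().lower()
        correct += int(prediction == str(p["answer"]).strip().lower())
    return correct / len(puzzles)


# Usage sketch: compare the original prompt against the two perturbed variants.
# base_acc    = accuracy(puzzles, lambda p: build_prompt(p["constraints"], p["question"]), ask_model)
# shuffle_acc = accuracy(puzzles, lambda p: reorder_variant(p, seed=1), ask_model)
# noise_acc   = accuracy(puzzles, lambda p: distractor_variant(p, "The neighbor owns a red bicycle."), ask_model)
```

Under the abstract's findings, a robust reasoner should score roughly the same on all three variants; the reported fragility corresponds to a large drop from the base accuracy to the shuffled and distractor variants.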
Community
The following similar papers were recommended by the Semantic Scholar API (automated message from Librarian Bot):
- PHANTOM RECALL: When Familiar Puzzles Fool Smart Models (2025)
- MatSciBench: Benchmarking the Reasoning Ability of Large Language Models in Materials Science (2025)
- Systematic Diagnosis of Brittle Reasoning in Large Language Models (2025)
- ELAIPBench: A Benchmark for Expert-Level Artificial Intelligence Paper Understanding (2025)
- EngiBench: A Benchmark for Evaluating Large Language Models on Engineering Problem Solving (2025)
- Reasoning Riddles: How Explainability Reveals Cognitive Limits in Vision-Language Models (2025)
- RAGCap-Bench: Benchmarking Capabilities of LLMs in Agentic Retrieval Augmented Generation Systems (2025)