H-ARC: A Robust Estimate of Human Performance on the Abstraction and Reasoning Corpus Benchmark
- id:
2409.01374
- Authors:
Solim LeGris, Wai Keen Vong, Brenden M. Lake, Todd M. Gureckis
- Published:
2024-09-02
- arXiv:
- PDF:
- DOI:
N/A
- Journal Reference:
N/A
- Primary Category:
cs.AI
- Categories:
cs.AI
- Comment:
12 pages, 7 figures
- github_url:
_
abstract
The Abstraction and Reasoning Corpus (ARC) is a visual program synthesis benchmark designed to test challenging out-of-distribution generalization in humans and machines. Since 2019, limited progress has been observed on the challenge using existing artificial intelligence methods. Comparing human and machine performance is important for the validity of the benchmark. While previous work explored how well humans can solve tasks from the ARC benchmark, they either did so using only a subset of tasks from the original dataset, or from variants of ARC, and therefore only provided a tentative estimate of human performance. In this work, we obtain a more robust estimate of human performance by evaluating 1729 humans on the full set of 400 training and 400 evaluation tasks from the original ARC problem set. We estimate that average human performance lies between 73.3% and 77.2% correct with a reported empirical average of 76.2% on the training set, and between 55.9% and 68.9% correct with a reported empirical average of 64.2% on the public evaluation set. However, we also find that 790 out of the 800 tasks were solvable by at least one person in three attempts, suggesting that the vast majority of the publicly available ARC tasks are in principle solvable by typical crowd-workers recruited over the internet. Notably, while these numbers are slightly lower than earlier estimates, human performance still greatly exceeds current state-of-the-art approaches for solving ARC. To facilitate research on ARC, we publicly release our dataset, called H-ARC (human-ARC), which includes all of the submissions and action traces from human participants.
premise
outline
quotes
notes
summary
1. Brief Overview
This paper presents H-ARC, a dataset of human performance data on the Abstraction and Reasoning Corpus (ARC) benchmark. The study aimed to provide a robust estimate of human performance on ARC by evaluating 1729 humans on the full set of 800 tasks (400 training and 400 evaluation). The results show that human performance significantly exceeds current state-of-the-art AI approaches, with humans achieving an average accuracy of 76.2% on the training set and 64.2% on the evaluation set. The dataset is publicly released to facilitate further research.
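The abstract reports these averages together with estimated ranges (73.3%–77.2% on training, 55.9%–68.9% on evaluation), i.e. intervals around the average rather than single numbers. As a minimal sketch of how such figures could be reproduced from per-participant records, the snippet below computes per-task solve rates and a bootstrap interval over tasks. The record fields (`task_id`, `split`, `solved`) are illustrative assumptions, not the actual H-ARC schema, and the paper's own estimation procedure may differ.

```python
# Hedged sketch: empirical average accuracy plus a bootstrap interval over
# per-task solve rates. Record fields are assumed, not the real H-ARC schema.
import random
from collections import defaultdict

def summarize(records, split, n_boot=10_000, seed=0):
    """records: iterable of dicts with keys 'task_id', 'split', and
    'solved' (True if the participant solved the task within 3 attempts)."""
    by_task = defaultdict(list)
    for r in records:
        if r["split"] == split:
            by_task[r["task_id"]].append(r["solved"])

    # Per-task solve rate, then the empirical average across tasks.
    task_rates = [sum(v) / len(v) for v in by_task.values()]
    empirical = sum(task_rates) / len(task_rates)

    # Bootstrap over tasks to put an interval around that average.
    rng = random.Random(seed)
    boots = []
    for _ in range(n_boot):
        resample = [rng.choice(task_rates) for _ in task_rates]
        boots.append(sum(resample) / len(resample))
    boots.sort()
    lo, hi = boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)]
    return empirical, (lo, hi)
```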
2. Key Points
Evaluated 1729 humans on the full 800-task ARC dataset.
Empirical average human accuracy: 76.2% on the training set (estimated range 73.3%–77.2%) and 64.2% on the evaluation set (estimated range 55.9%–68.9%).
Human performance significantly surpasses current state-of-the-art AI.
790 out of 800 tasks were solvable by at least one person within three attempts (see the sketch after this list).
H-ARC dataset (human submissions and action traces) is publicly released.
Evaluation tasks were significantly harder than training tasks.
Analysis of human error patterns compared to AI models.
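For the solvability point above, the per-task check is simply whether any participant's record for that task is marked as solved within three attempts. A minimal sketch, reusing the same assumed record fields as the snippet in the overview:

```python
def count_solvable(records):
    """Count tasks solved by at least one participant within three attempts.
    Uses the illustrative 'task_id'/'solved' fields assumed earlier."""
    all_tasks = {r["task_id"] for r in records}
    solved_tasks = {r["task_id"] for r in records if r["solved"]}
    return len(solved_tasks), len(all_tasks)  # the paper reports 790 of 800
```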
3. Notable Quotes
No notable quotes were identified.
4. Primary Themes
Benchmarking Human Performance: The study’s primary focus is establishing a reliable benchmark for human performance on a complex reasoning task.
Human-AI Comparison: The results highlight the significant gap between human and AI performance on ARC.
Dataset Release: Making the H-ARC dataset publicly available promotes further research and development in AI.
Error Analysis: Comparing human and AI error patterns provides insight into the differences in problem-solving approaches.
Cognitive Science Implications: The data offers valuable insights into human problem-solving strategies.