Tree of Problems: Improving structured problem solving with compositionality

id:

2410.06634

Authors:

Armel Zebaze, Benoît Sagot, Rachel Bawden

Published:

2024-10-09

arXiv:

https://arxiv.org/abs/2410.06634

PDF:

https://arxiv.org/pdf/2410.06634

DOI:

N/A

Journal Reference:

N/A

Primary Category:

cs.CL

Categories:

cs.CL

Comment:

N/A

github_url:

https://github.com/ArmelRandy/tree-of-problems

abstract

Large Language Models (LLMs) have demonstrated remarkable performance across multiple tasks through in-context learning. For complex reasoning tasks that require step-by-step thinking, Chain-of-Thought (CoT) prompting has given impressive results, especially when combined with self-consistency. Nonetheless, some tasks remain particularly difficult for LLMs to solve. Tree of Thoughts (ToT) and Graph of Thoughts (GoT) emerged as alternatives, dividing the complex problem into paths of subproblems. In this paper, we propose Tree of Problems (ToP), a simpler version of ToT, which we hypothesise can work better for complex tasks that can be divided into identical subtasks. Our empirical results show that our approach outperforms ToT and GoT, and in addition performs better than CoT on complex reasoning tasks. All code for this paper is publicly available here: https://github.com/ArmelRandy/tree-of-problems.

premise

outline

quotes

notes

summary

1. Brief Overview

This paper introduces Tree of Problems (ToP), a novel approach to structured problem-solving using Large Language Models (LLMs). ToP improves upon existing methods like Chain-of-Thought (CoT), Tree of Thoughts (ToT), and Graph of Thoughts (GoT) by simplifying the decomposition of complex problems into a series of simpler, analogous subproblems. The method demonstrates improved performance on various complex reasoning tasks, outperforming previous approaches.

2. Key Points

  • ToP simplifies complex problems into a tree structure of identical subproblems, making it more efficient than ToT and GoT for suitable tasks.

  • Empirically outperforms CoT, ToT, and GoT on complex reasoning tasks.

  • The ToP framework comprises three core components: a decomposer, a solver, and a merger.

  • Adaptable to both canonical tasks (independent subproblems) and sequential tasks (dependent subproblems).

  • Demonstrates significant improvements on various benchmarks including BIG-Bench Hard tasks and symbolic reasoning tasks.

  • Scaling up model size improves ToP’s performance.
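The decomposer/solver/merger pipeline above can be sketched roughly as follows. The component names come from the paper; the toy task (sorting a list via identical, independent subproblems, i.e. the "canonical" case) and every function body here are illustrative stand-ins for what would be LLM calls in the actual ToP framework, not the authors' implementation.

```python
# Hypothetical sketch of a ToP-style pipeline on a canonical task.
# In the real framework, solver() would be an LLM call and the tree's
# depth/breadth control how the problem is recursively decomposed.
import heapq
from typing import List

def decomposer(problem: List[int], breadth: int) -> List[List[int]]:
    """Split the problem into roughly `breadth` identical subproblems."""
    k = max(1, len(problem) // breadth)
    return [problem[i:i + k] for i in range(0, len(problem), k)]

def solver(subproblem: List[int]) -> List[int]:
    """Solve a leaf subproblem (in ToP, a single LLM call)."""
    return sorted(subproblem)

def merger(solutions: List[List[int]]) -> List[int]:
    """Combine subproblem solutions into a solution of the parent problem."""
    return list(heapq.merge(*solutions))

def tree_of_problems(problem: List[int], depth: int = 1, breadth: int = 2) -> List[int]:
    """Recursively decompose, solve leaves, then merge bottom-up."""
    if depth == 0 or len(problem) <= 1:
        return solver(problem)
    subproblems = decomposer(problem, breadth)
    return merger([tree_of_problems(s, depth - 1, breadth) for s in subproblems])
```

Because the subproblems are independent here, the leaf solver calls could run in parallel; sequential tasks would instead feed each subproblem's solution into the next.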

3. Notable Quotes

No notable quotes were identified in the provided text.

4. Primary Themes

  • Compositionality in Problem Solving: The core theme centers around breaking down complex problems into smaller, manageable subproblems to enhance LLM performance.

  • Improving LLM Reasoning: The paper focuses on enhancing the reasoning capabilities of LLMs by structuring the problem-solving process.

  • Efficiency and Simplicity: ToP aims to provide a simpler and more efficient alternative to existing complex methods like ToT and GoT while achieving superior results.

  • Generalization: The approach attempts to improve the generalization capabilities of LLMs by leveraging the solution of similar subproblems.