Principled Instructions Are All You Need for Questioning LLaMA-1/2, GPT-3.5/4
- id:
2312.16171
- Authors:
Sondos Mahmoud Bsharat, Aidar Myrzakhan, Zhiqiang Shen
- Published:
2023-12-26
- arXiv:
https://arxiv.org/abs/2312.16171
- PDF:
https://arxiv.org/pdf/2312.16171
- DOI:
N/A
- Journal Reference:
N/A
- Primary Category:
cs.CL
- Categories:
cs.CL, cs.AI
- Comment:
Github at: https://github.com/VILA-Lab/ATLAS
- github_url:
https://github.com/VILA-Lab/ATLAS
abstract
This paper introduces 26 guiding principles designed to streamline the process of querying and prompting large language models. Our goal is to simplify the underlying concepts of formulating questions for various scales of large language models, examining their abilities, and enhancing user comprehension on the behaviors of different scales of large language models when feeding into different prompts. Extensive experiments are conducted on LLaMA-1/2 (7B, 13B and 70B), GPT-3.5/4 to verify the effectiveness of the proposed principles on instructions and prompts design. We hope that this work can provide a better guide for researchers working on the prompting of large language models. Project page is available at https://github.com/VILA-Lab/ATLAS.
summary
Brief Overview
This paper introduces 26 guiding principles designed to improve the process of querying large language models (LLMs). The principles aim to simplify prompt creation and enhance user comprehension of LLM behavior across different model sizes. Extensive experiments were conducted on LLaMA-1/2 (7B, 13B, and 70B) and GPT-3.5/4 to validate the effectiveness of these principles. The research found that applying the principles yields higher-quality, more concise, and more factual responses than standard, unmodified prompts.
Key Points
Introduced 26 guiding principles for effective LLM prompting.
Principles categorized into five groups: Prompt Structure and Clarity, Specificity and Information, User Interaction and Engagement, Content and Language Style, and Complex Tasks and Coding Prompts (see the prompt sketch after this list).
Experiments conducted on LLaMA-1/2 and GPT-3.5/4 showed significant improvements in response quality and accuracy (on average 57.7% and 36.4%, respectively, on GPT-4).
Improvements were more pronounced with larger models.
Principles address issues like conciseness, clarity, contextual relevance, task alignment, bias avoidance, and incremental prompting.
Evaluated on ATLAS, a manually created benchmark, using human evaluation of the responses.
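To make the principle categories concrete, here is a minimal sketch (in Python) of how a prompt might be assembled following a few of the paper's principles, such as using delimiters like ###Instruction###, stating the intended audience, example-driven (few-shot) prompting, and asking the model to think step by step. The function name build_principled_prompt and the exact wording are illustrative assumptions, not code from the ATLAS repository; the authoritative phrasing of each principle is in the paper and on the project page.

```python
# Illustrative sketch only: composes a prompt that applies a few of the 26 principles
# (delimiters, audience specification, few-shot examples, step-by-step instruction).
# build_principled_prompt is a hypothetical helper, not part of ATLAS.

def build_principled_prompt(task: str, audience: str, examples: list[tuple[str, str]]) -> str:
    """Assemble a prompt following several of the paper's guiding principles."""
    # Example-driven (few-shot) prompting: show question/answer pairs.
    few_shot = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return (
        "###Instruction###\n"
        f"Your task is: {task}\n"
        f"The intended audience is {audience}.\n"  # audience specification
        "Think step by step.\n"                    # incremental, step-by-step prompting
        "###Example###\n"
        f"{few_shot}\n"
        "###Question###\n"
    )

prompt = build_principled_prompt(
    task="Explain why the sky appears blue.",
    audience="an 11-year-old",
    examples=[("Why is grass green?", "Chlorophyll absorbs red and blue light and reflects green.")],
)
print(prompt)
```

In the paper's experiments, responses to such principled prompts are compared against responses to the original, unmodified questions.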
Notable Quotes
“Prompt engineering is the art of communicating with a generative large language model.” - ChatGPT, 2023
Primary Themes
Prompt Engineering: The core theme focuses on optimizing prompt design for improved LLM performance.
LLM Behavior: The research explores how different prompt formulations affect the responses generated by LLMs of varying scales.
Principled Prompting: The main contribution is the development and validation of a set of principled instructions for crafting effective prompts.
Empirical Evaluation: The study rigorously evaluates the proposed principles through experimentation and human evaluation.
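As a rough illustration of how improvement percentages like those reported above might be tallied from paired human judgments (principled prompt vs. original prompt for the same question), here is a minimal sketch; it is an assumption about the general bookkeeping, not the paper's actual evaluation protocol or code.

```python
# Illustrative sketch: compute the percentage of questions where human raters judged
# the principled-prompt response better than the baseline response.

def improvement_rate(judgments: list[str]) -> float:
    """Percentage of paired comparisons rated "improved" (vs. "same" or "worse")."""
    improved = sum(1 for j in judgments if j == "improved")
    return 100.0 * improved / len(judgments)

# Hypothetical ratings for 10 questions from one model scale.
ratings = ["improved", "same", "improved", "improved", "worse",
           "improved", "same", "improved", "improved", "improved"]
print(f"Quality improvement: {improvement_rate(ratings):.1f}%")  # 70.0%
```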