How General-Purpose Is a Language Model? Usefulness and Safety with Human Prompters in the Wild

Paper by John Burden, Pablo Antonio Moreno Casares, Bao Sheng Loe, José Hernández-Orallo, Seán Ó hÉigeartaigh
Published on 28 June 2022


The new generation of language models is reported to solve some extraordinary tasks that the models were never specifically trained for, in few-shot or zero-shot settings. However, these reports usually cherry-pick the tasks, use the best prompts, and extract the solutions leniently, even when they are followed by nonsensical text. In sum, these are specialised results for one domain, one particular way of using the models, and one way of interpreting their outputs. In this paper, we present a novel theoretical evaluation framework and a distinctive experimental study assessing language models as general-purpose systems when used directly by human prompters --- in the wild. For useful and safe interaction under these increasingly common conditions, we need to understand whether the model fails because of a lack of capability or a misunderstanding of the user's intent. Our results indicate that language models such as GPT-3 have a limited understanding of human commands and are far from becoming general-purpose systems in the wild.
