Source: HAIS
Date of publication: 08/29/2023
Authors: Alexander Panov, Anfisa Chuganskaya, Alexey Kovalev

The Problem of Concept Learning and Goals of Reasoning in Large Language Models

Abstract

Modern large language models (LLMs) show good performance in zero-shot and few-shot learning. This ability to perform well even on tasks for which the models have not been trained stems in part from the fact that, by learning from internet-scale textual data, such models build a semblance of a world model. However, the question of whether the entities the model operates on are concepts in the psychological sense remains open. Relying on conceptual reasoning schemes makes it possible to increase the safety of models when they solve complex problems. To address this question, we propose using standard psychodiagnostic techniques to assess the quality of conceptual thinking in models. We test this hypothesis by conducting experiments on a dataset adapted for LLMs from the psychological techniques of Cattell and Rubinstein, and by comparing the effectiveness of each technique. In this paper, we show that several types of model errors can be distinguished in incorrect answers to standard conceptual-thinking tasks, and that each error type can be classified according to the taxonomy of conceptual-thinking distortions adopted in the cultural-historical approach in psychology. This makes it possible to use psychodiagnostic techniques not only to evaluate the effectiveness of models, but also to develop training procedures based on such tasks.
