Source
COLING GenAIK
DATE OF PUBLICATION
01/19/2025

On Reducing Factual Hallucinations in Graph-to-Text Generation Using Large Language Models

Abstract

Recent work in Graph-to-Text generation has achieved impressive results, but it still suffers from hallucinations in some cases, despite extensive pretraining stages and various methods for working with graph data. At the same time, the commonly used metrics for evaluating the quality of Graph-to-Text models show almost perfect results, which makes it challenging to compare different approaches. This paper demonstrates the challenges of recent Graph-to-Text systems in terms of hallucinations and proposes a simple yet effective approach to using a general LLM, which has shown state-of-the-art results and reduced the number of factual hallucinations. We provide step-by-step instructions on how to develop prompts for language models and a detailed analysis of potential factual errors in the generated text.
