Source
Nature
Date of publication
03/13/2023
Authors
Artem Shelmanov, Dmitry Dylov, Daniil Chesakov, Alexander Selivanov, Oleg Rogov, Irina Fedulova

Medical image captioning via generative pretrained transformers

Abstract

The proposed model for automatic clinical image caption generation combines the analysis of radiological scans with structured patient information from textual records. It combines two models, Show-Attend-Tell and GPT-3, to generate comprehensive and descriptive radiology records. The generated textual summary contains essential information about the pathologies found and their locations, along with 2D heatmaps that localize each pathology on the scans. The model has been tested on two medical datasets, Open-I and MIMIC-CXR, as well as on the general-purpose MS-COCO dataset, and the results, measured with natural language assessment metrics, demonstrate its applicability to chest X-ray image captioning.
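To illustrate the general idea described in the abstract, below is a minimal sketch (not the authors' implementation) of an attention-based image-to-text pipeline in the spirit of Show-Attend-Tell: a small CNN encoder turns an X-ray scan into a grid of visual features, and an LSTM decoder with soft attention emits report tokens while its attention weights double as 2D heatmaps that localize each mentioned finding. The GPT-3 component of the paper is not shown, and all module names, dimensions, and vocabulary sizes here are illustrative assumptions rather than values from the publication.

```python
# Minimal sketch of an attention-based captioner (Show-Attend-Tell style).
# All names and sizes are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Tiny stand-in CNN that maps a grayscale scan to a 7x7 grid of features."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7),
        )

    def forward(self, x):                        # x: (B, 1, H, W)
        f = self.conv(x)                         # (B, D, 7, 7)
        return f.flatten(2).transpose(1, 2)      # (B, 49, D) grid of image regions

class AttnDecoder(nn.Module):
    """LSTM decoder with soft attention over the 49 image regions."""
    def __init__(self, vocab_size, feat_dim=256, hid=512, emb=256):
        super().__init__()
        self.hid = hid
        self.embed = nn.Embedding(vocab_size, emb)
        self.attn = nn.Linear(feat_dim + hid, 1)
        self.lstm = nn.LSTMCell(emb + feat_dim, hid)
        self.out = nn.Linear(hid, vocab_size)

    def forward(self, feats, tokens):            # feats: (B, 49, D), tokens: (B, T)
        B, R, _ = feats.shape
        h = feats.new_zeros(B, self.hid)
        c = feats.new_zeros(B, self.hid)
        logits, heatmaps = [], []
        for t in range(tokens.size(1)):
            # Soft attention: score each region against the current hidden state.
            scores = self.attn(torch.cat([feats, h.unsqueeze(1).expand(B, R, -1)], -1))
            alpha = F.softmax(scores, dim=1)      # (B, 49, 1), reshapeable to a 7x7 heatmap
            ctx = (alpha * feats).sum(1)          # attended visual context vector
            h, c = self.lstm(torch.cat([self.embed(tokens[:, t]), ctx], -1), (h, c))
            logits.append(self.out(h))
            heatmaps.append(alpha.view(B, 7, 7))
        return torch.stack(logits, 1), torch.stack(heatmaps, 1)

# Toy forward pass: one 224x224 grayscale scan and a 5-token report prefix.
enc, dec = Encoder(), AttnDecoder(vocab_size=1000)
image = torch.randn(1, 1, 224, 224)
report_tokens = torch.randint(0, 1000, (1, 5))
logits, heatmaps = dec(enc(image), report_tokens)
print(logits.shape, heatmaps.shape)  # (1, 5, 1000) token scores, (1, 5, 7, 7) heatmaps
```

In this sketch, each generated token comes with the attention map used to produce it, which is the mechanism that makes per-pathology localization heatmaps possible alongside the textual report.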
