Source: ACL
Date of publication: 07/27/2025
Authors: Alexey Dontsov, Dmitrii Korzh, Alexey Zhavoronkin, Boris Mikheev, Denis Bobkov, Aibek Alanov, Oleg Rogov, Ivan Oseledets, Elena Tutubalina

CLEAR: Character Unlearning in Textual and Visual Modalities

Abstract

Machine Unlearning (MU) is critical for enhancing privacy and security in deep learning models, particularly in large multimodal language models (MLLMs), by removing specific private or hazardous information. While MU has made significant progress in the textual and visual modalities individually, multimodal unlearning (MMU) remains largely underexplored, partly due to the absence of a suitable open-source benchmark. To address this, we introduce CLEAR, a new benchmark designed to evaluate MMU methods. CLEAR contains 200 fictitious individuals and 3,700 images linked to corresponding question-answer pairs, enabling a thorough evaluation across modalities. We assess 10 MU methods, adapting them for MMU, and highlight new challenges specific to multimodal forgetting. We also demonstrate that simple ℓ1 regularization on LoRA weights significantly mitigates catastrophic forgetting, preserving model performance on retained data. The dataset is available at this https URL.
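To make the ℓ1-on-LoRA idea concrete, here is a minimal PyTorch sketch of adding an ℓ1 penalty on LoRA adapter weights to an unlearning objective. This is an illustration under stated assumptions, not the paper's implementation: `model`, `unlearning_loss`, and the `l1_coeff` value are placeholders, and the `"lora_"` substring match follows the common PEFT naming convention (`lora_A`/`lora_B`).

```python
import torch


def l1_lora_penalty(model: torch.nn.Module) -> torch.Tensor:
    """Sum of absolute values of all trainable LoRA adapter parameters."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        # Assumption: LoRA parameters carry "lora_" in their names,
        # as in the Hugging Face PEFT library.
        if param.requires_grad and "lora_" in name:
            penalty = penalty + param.abs().sum()
    return penalty


def total_loss(unlearning_loss: torch.Tensor,
               model: torch.nn.Module,
               l1_coeff: float = 1e-4) -> torch.Tensor:
    # Regularized objective: the ℓ1 term keeps the LoRA update small and
    # sparse, which helps preserve performance on retained data.
    return unlearning_loss + l1_coeff * l1_lora_penalty(model)
```

In a training loop, `total_loss` would simply replace the bare unlearning loss before the backward pass; the only new hyperparameter is the ℓ1 coefficient.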



