Source
EMNLP
Publication date
9 October 2024
Authors
Viktoriia Chekalina, Anna Rudenko, Gleb Mezentsev, Aleksandr Mikhalev, Alexander Panchenko, Ivan Oseledets

SparseGrad: A Selective Method for Efficient Fine-tuning of MLP Layers

Abstract

The performance of Transformer models has been enhanced by increasing the number of parameters and the length of the processed text. Consequently, fine-tuning the entire model becomes a memory-intensive process. High-performance methods for parameter-efficient fine-tuning (PEFT) typically work with Attention blocks and often overlook MLP blocks, which contain about half of the model parameters. We propose a new selective PEFT method, namely SparseGrad, that performs well on MLP blocks. We transfer layer gradients to a space where only about 1% of the layer's elements remain significant. By converting gradients into a sparse structure, we reduce the number of updated parameters. We apply SparseGrad to fine-tune BERT and RoBERTa for the NLU task and LLaMa-2 for the Question-Answering task. In these experiments, with identical memory requirements, our method outperforms LoRA and MeProp, robust, popular state-of-the-art PEFT approaches.
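For illustration only, here is a minimal PyTorch sketch of the general idea described in the abstract: after the backward pass, prune an MLP weight gradient down to its roughly 1% most significant entries, so the optimizer step updates only those parameters. The magnitude-based top-k selection, the `sparsify_grad` helper, and the toy layer are assumptions made for this sketch; the paper's actual SparseGrad transform (and the sparse gradient storage that provides the memory savings) is defined in the publication.

```python
import torch

def sparsify_grad(grad: torch.Tensor, keep_ratio: float = 0.01) -> torch.Tensor:
    """Zero out all but the largest-magnitude ~1% of gradient entries.

    NOTE: simple magnitude-based selection, used here as a stand-in for
    the SparseGrad transform described in the paper.
    """
    k = max(1, int(keep_ratio * grad.numel()))
    flat = grad.flatten()
    # Indices of the k entries with the largest absolute value.
    _, idx = torch.topk(flat.abs(), k)
    mask = torch.zeros_like(flat, dtype=torch.bool)
    mask[idx] = True
    return (flat * mask).view_as(grad)

# Usage: prune the MLP weight gradient after backward(), before the
# optimizer step, so only the selected entries receive an update.
mlp = torch.nn.Linear(768, 3072)   # hypothetical Transformer MLP sub-layer
x = torch.randn(4, 768)
loss = mlp(x).pow(2).mean()
loss.backward()
mlp.weight.grad = sparsify_grad(mlp.weight.grad)
torch.optim.SGD(mlp.parameters(), lr=1e-3).step()
```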
