Source
CIKM
Publication date
05.08.2024
Authors
Ivan Oseledets, Evgeny Frolov, Gleb Mezentsev, Danil Gusak

RECE: Reduced Cross-Entropy Loss for Large-Catalogue Sequential Recommenders

Abstract

Scalability is a major challenge in modern recommender systems. In sequential recommendations, full Cross-Entropy (CE) loss achieves state-of-the-art recommendation quality but consumes excessive GPU memory with large item catalogs, limiting its practicality. Using a GPU-efficient locality-sensitive hashing-like algorithm for approximating the large tensor of logits, this paper introduces a novel RECE (REduced Cross-Entropy) loss. RECE significantly reduces memory consumption while allowing one to enjoy the state-of-the-art performance of full CE loss. Experimental results on various datasets show that RECE cuts training peak memory usage by up to 12 times compared to existing methods while retaining or exceeding the performance metrics of CE loss. The approach also opens up new possibilities for large-scale applications in other domains.
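To make the idea behind the abstract concrete, below is a minimal PyTorch sketch of an LSH-bucketed approximation to full cross-entropy. It is illustrative only, not the authors' released implementation: the SimHash (random-hyperplane) bucketing, the per-bucket loop, and the names `reduced_ce_loss` and `lsh_bucket_ids` with all their parameters are assumptions made for this example. The sketch restricts each query's softmax to items falling in the same hash bucket (plus the ground-truth item), so the logit matrix never spans the full catalogue.

```python
# Illustrative sketch (not the paper's code) of cross-entropy over a large
# catalogue, approximated by restricting logits to LSH-bucket candidates.
import torch
import torch.nn.functional as F

def lsh_bucket_ids(x, planes):
    """SimHash: sign pattern w.r.t. random hyperplanes -> integer bucket id."""
    bits = (x @ planes > 0).long()                       # (n, n_bits)
    powers = 2 ** torch.arange(planes.shape[1], device=x.device)
    return (bits * powers).sum(-1)                       # (n,)

def reduced_ce_loss(queries, item_emb, targets, n_bits=8, seed=0):
    """
    queries:  (n, d) hidden states of the sequential model
    item_emb: (m, d) catalogue item embeddings (m is large)
    targets:  (n,)   ground-truth item indices

    Per query, logits are computed only against items sharing its LSH
    bucket (likely high-logit negatives) plus the target item, instead
    of against all m catalogue items.
    """
    g = torch.Generator(device=queries.device).manual_seed(seed)
    planes = torch.randn(queries.shape[1], n_bits,
                         generator=g, device=queries.device)

    q_buckets = lsh_bucket_ids(queries, planes)          # (n,)
    i_buckets = lsh_bucket_ids(item_emb, planes)         # (m,)

    losses = []
    for b in q_buckets.unique():
        q_mask = q_buckets == b
        cand = torch.nonzero(i_buckets == b, as_tuple=True)[0]
        q, t = queries[q_mask], targets[q_mask]
        # Guarantee each target is among the candidates; unique() sorts,
        # which searchsorted below relies on.
        cand = torch.unique(torch.cat([cand, t]))
        logits = q @ item_emb[cand].T                    # (nb, |cand|), small
        local_t = torch.searchsorted(cand, t)            # target's local index
        losses.append(F.cross_entropy(logits, local_t, reduction="sum"))
    return torch.stack(losses).sum() / queries.shape[0]

# Toy usage: 64 queries against a 100k-item catalogue.
q = torch.randn(64, 32)
emb = torch.randn(100_000, 32)
tgt = torch.randint(0, 100_000, (64,))
print(reduced_ce_loss(q, emb, tgt))
```

The intuition for why such a truncation can work: the softmax denominator is dominated by the items with the largest logits, and angular LSH co-buckets vectors with high inner products, so the in-bucket candidates tend to capture exactly those hard negatives while the per-bucket logit matrix stays small. A production implementation would vectorize the bucket loop on the GPU rather than iterate in Python.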
