Efficient Integration of Reinforcement Learning in Graph Neural Networks-based Recommender Systems
Abstract
Recommendation systems have advanced significantly in recent years, achieving greater accuracy and relevance. However, traditional approaches often suffer from a mismatch between the losses used during training and the metrics used for evaluation: models are typically trained to minimize a differentiable loss function, while their effectiveness at test time is assessed with ranking metrics, leading to suboptimal recommendation quality. To address this limitation, reinforcement learning (RL) has emerged as a promising solution. Although RL has been applied in recommendation systems, its integration with graph neural networks (GNNs) remains underexplored. In this study, we bridge this gap by integrating GNNs and RL to enhance ranking accuracy and recommendation quality. We propose two key innovations: (1) leveraging learnable graphs to embed user-item interactions, with RL optimizing user rewards to improve ranking quality, and (2) modifying GNN architectures with skip connections to enhance recommendation accuracy while reducing training time and improving convergence. Our comprehensive analysis on multiple real-world datasets demonstrates the impact of different GNN architectures and their modifications on the effectiveness of recommendation systems. Our findings highlight the potential of combining GNNs and RL to overcome the limitations of traditional recommendation models and achieve state-of-the-art performance, with XSimGCL-skip achieving an average improvement of approximately 2.5% over baseline methods.
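The skip-connection modification mentioned above can be sketched as follows. This is a minimal numpy illustration assuming a LightGCN-style message-passing scheme over the user-item bipartite graph; the function names, the placement of the skip connection (back to the layer-0 embeddings), and the layer-averaged readout are assumptions for illustration, not the paper's exact XSimGCL-skip implementation:

```python
import numpy as np

def normalized_adjacency(interactions, n_users, n_items):
    """Build the symmetric normalized adjacency D^{-1/2} A D^{-1/2}
    of the user-item bipartite graph."""
    n = n_users + n_items
    A = np.zeros((n, n))
    for u, i in interactions:
        A[u, n_users + i] = 1.0
        A[n_users + i, u] = 1.0
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    mask = deg > 0
    d_inv_sqrt[mask] = deg[mask] ** -0.5
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def propagate_with_skip(E0, A_norm, n_layers=3):
    """Parameter-free propagation where every layer adds a skip
    connection to the initial embeddings E0, then layers are averaged."""
    E = E0
    layer_outputs = [E0]
    for _ in range(n_layers):
        E = A_norm @ E + E0  # skip connection back to layer 0
        layer_outputs.append(E)
    return np.mean(layer_outputs, axis=0)
```

The skip term keeps each node's initial signal present at every depth, which is the usual motivation for such connections: gradients reach the input embeddings directly, typically speeding up convergence.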