Source: IEEE Access
Date of publication: 01/23/2025

On Risk Assessment for Out-of-Distribution Detection

Abstract

This paper challenges the conventional approach of treating out-of-distribution (OOD) risk as uniform and aiming to reduce OOD risk on average. We argue that managing OOD risk on average fails to account for the potential impact of rare, high-consequence events, which can undermine trust in a model after even a single OOD incident. First, we show that OOD performance depends on both the rate of outliers and the number of samples processed by a machine learning (ML) model. Second, we introduce a novel perspective that assesses OOD risk by considering the expected maximum risk within a limited sample size. Our theoretical findings clearly distinguish when OOD detection is essential and when it becomes redundant, allowing efforts to be directed towards improving ID performance once adequate OOD robustness is achieved. Finally, an analysis of popular computer vision benchmarks reveals that ID errors often dominate overall risk, highlighting the importance of strong ID performance as a foundation for effective OOD detection. Our framework offers both theoretical insights and practical guidelines for deploying ML models in high-stakes applications, where trust and reliability are paramount.
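The abstract's first claim, that OOD performance depends on both the outlier rate and the number of samples a deployed model processes, can be illustrated with a simple back-of-the-envelope calculation. The sketch below is not the paper's framework; it only assumes i.i.d. samples with a fixed per-sample outlier rate, and shows that even a rare outlier becomes near-certain to appear at deployment scale.

```python
def prob_at_least_one_ood(outlier_rate: float, n_samples: int) -> float:
    """Probability that at least one of n i.i.d. samples is OOD,
    given a per-sample outlier rate p: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - outlier_rate) ** n_samples

# A rare outlier rate of 1e-4 is almost guaranteed to produce
# at least one OOD sample over 100,000 processed inputs.
p = prob_at_least_one_ood(1e-4, 100_000)
print(f"{p:.5f}")  # close to 1
```

This is why a single high-consequence OOD incident, as the abstract notes, cannot be dismissed by average-case risk arguments: at scale, the rare event is expected to occur.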
