In this talk, we explore recent developments in post-selection (selective) conformal prediction, focusing on the challenge of controlling the false coverage-statement rate (FCR). Conformal inference is a well-established tool for constructing prediction intervals, but its coverage guarantee becomes delicate when prediction intervals are reported only for a selected subset of test points, since selection breaks the exchangeability underlying the marginal guarantee.
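For concreteness, and in generic notation of our own choosing (the talk may use different symbols), the FCR is the expected fraction of non-covering intervals among those actually reported:

$$\mathrm{FCR} \;=\; \mathbb{E}\!\left[\frac{\left|\{\, i \in \mathcal{S} : Y_i \notin \widehat{C}(X_i) \,\}\right|}{\max(|\mathcal{S}|,\, 1)}\right],$$

where \(\mathcal{S}\) is the data-dependent set of selected test points and \(\widehat{C}(X_i)\) is the prediction interval reported for point \(i\).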
We first propose a novel framework for the offline setting, in which prediction intervals are constructed only for individuals selected from the unlabelled test data. We discuss the limitations of traditional FCR-adjusted methods, which control the FCR but can yield substantially inflated prediction intervals. To address this, we introduce SCOP (Selective Conditional Conformal Prediction), a new approach that exploits the selection process on both the calibration and test sets to produce narrower prediction intervals while maintaining rigorous FCR control, under both exchangeable and non-exchangeable selection rules.
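A minimal sketch of the calibrate-conditionally-on-selection idea, in the spirit of SCOP but not the paper's exact procedure: the model, the thresholding selection rule, and all variable names below are illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_predict(X):
    # Stand-in for a fitted regression model's point predictions.
    return X @ np.array([1.5, -0.7])

# Synthetic labelled calibration data and unlabelled test data.
X_cal = rng.normal(size=(500, 2))
y_cal = fit_predict(X_cal) + rng.normal(size=500)
X_test = rng.normal(size=(200, 2))

# Hypothetical selection rule: report only units with large predictions.
threshold = 0.5
sel_cal = fit_predict(X_cal) > threshold
sel_test = fit_predict(X_test) > threshold

# Key idea: compute the conformal quantile from the calibration units
# that pass the SAME selection rule, so calibration is valid
# conditionally on being selected, not just marginally.
scores = np.abs(y_cal[sel_cal] - fit_predict(X_cal[sel_cal]))
n = scores.size
alpha = 0.1
q = np.quantile(scores, min(1.0, np.ceil((n + 1) * (1 - alpha)) / n),
                method="higher")

# Prediction intervals only for the selected test units.
mu = fit_predict(X_test[sel_test])
intervals = np.column_stack([mu - q, mu + q])
```

Compared with a naive split-conformal quantile computed over all calibration residuals, conditioning on selection recalibrates against exactly the population of units that end up being reported.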
Building on this, we extend our discussion to the online setting, where selection decisions and prediction intervals must be made in real time. We present CAP (Calibration after Adaptive Pick), a general algorithm that adaptively selects test points and calibrates prediction intervals based on historical data, providing robust, real-time FCR control even under distribution shifts.
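The skeleton of such an online select-then-calibrate loop might look as follows. This is only a simplified illustration of the general pattern, not the CAP algorithm itself: the selection rule, the point predictor, and the assumption that every unit's label is revealed after the fact are all simplifying assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.1
history = []  # (conformity score, was_selected) pairs from past rounds

def predict(x):
    # Stand-in point predictor.
    return 2.0 * x

for t in range(1000):
    x_t = rng.normal()
    selected = predict(x_t) > 0.0  # hypothetical adaptive pick

    if selected:
        # Calibrate only on past units that passed the same selection,
        # mimicking selection-conditional calibration in real time.
        past = [s for s, sel in history if sel]
        if past:
            n = len(past)
            level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
            q = np.quantile(past, level, method="higher")
            interval = (predict(x_t) - q, predict(x_t) + q)
        else:
            # No comparable history yet: report a trivial interval.
            interval = (-np.inf, np.inf)

    # Label revealed afterwards; record the conformity score so future
    # rounds can recalibrate on an ever-growing history.
    y_t = 2.0 * x_t + rng.normal()
    history.append((abs(y_t - predict(x_t)), selected))
```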
Through a combination of theoretical insights and empirical results, we demonstrate how these advances enable more accurate and reliable prediction intervals across a range of settings. We conclude the talk by discussing related work on selective predictive inference.