Near-Optimal Algorithms for Omniprediction
Machine learning models can be trained to predict the probabilities of uncertain events. A standard application of this paradigm requires obtaining training data, training a model to optimize the utility of a decision maker or agent, and deploying the model to make predictions optimized for that agent. An alternative is to train an agent-agnostic “prediction service” that forecasts the probabilities of events, and to let different agents best-respond to the forecasts. This alternative, first put forward by Gopalan et al. in 2022, is known as “omniprediction”. It has clear advantages in terms of model reuse and privacy, but what is lost in terms of efficiency? Can omnipredictors be trained to provide predictions that are nearly as trustworthy as if each agent had used the same amount of training data to optimize its own utility? We will describe two settings in which, surprisingly, the answer is yes. Underlying these results is a new measure of forecast calibration, called “proper calibration,” that is weak enough to be achievable using only logarithmically more samples than the simplest statistical learning tasks require, yet strong enough to provide near-optimal guarantees on agents’ regret.
This is joint work with Princewill Okoroafor and Michael P. Kim, to appear in FOCS 2025.
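To make the “best-respond to the forecasts” step concrete, here is a minimal sketch in Python (all names and the toy utilities are hypothetical illustrations, not the algorithm from the paper): a single agent-agnostic forecast is published, and each agent independently maps it to the action maximizing its own expected utility.

```python
from typing import Callable, Iterable

def best_response(forecast: float,
                  actions: Iterable[str],
                  utility: Callable[[str, int], float]) -> str:
    """Pick the action maximizing expected utility under the forecast.

    forecast: predicted probability that a binary outcome equals 1.
    utility:  agent-specific payoff u(action, outcome), outcome in {0, 1}.
    """
    return max(actions,
               key=lambda a: forecast * utility(a, 1)
                             + (1 - forecast) * utility(a, 0))

# Hypothetical agent: decide whether to carry an umbrella given P(rain).
def umbrella_utility(action: str, rain: int) -> float:
    payoffs = {("take", 1): 1.0, ("take", 0): -0.2,
               ("skip", 1): -1.0, ("skip", 0): 0.3}
    return payoffs[(action, rain)]

p_rain = 0.4  # one shared, agent-agnostic forecast
print(best_response(p_rain, ["take", "skip"], umbrella_utility))  # -> "take"
```

Different agents plug their own utility functions into the same published forecast; omniprediction asks that the forecast be trustworthy enough that each such best response incurs near-optimal regret.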