Attendance restricted to Princeton University faculty, staff, and students.

Algorithms make predictions about people constantly. The spread of such prediction systems has raised concerns that machine learning algorithms may exhibit problematic behavior, especially against individuals from marginalized groups. This talk will provide an overview of research building a theory of “responsible” machine learning. It will highlight a notion of fairness in prediction, called Multicalibration (ICML’18), which requires predictions to be well-calibrated not simply overall, but on every group that can be meaningfully identified from the data. This “multi-group” approach strengthens the guarantees of group fairness definitions without incurring the statistical and computational costs associated with individual-level protections. Additionally, the talk will present a new paradigm for learning, Outcome Indistinguishability (STOC’21), which provides a broad framework for learning predictors.
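
To make the multicalibration condition concrete, below is a minimal sketch (not from the talk) of how one might empirically estimate a multicalibration violation: for each group and each prediction-value bin, compare the average observed outcome to the average prediction. The function name, the 30-sample bin threshold, and the toy data are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def multicalibration_violation(preds, labels, groups, n_bins=10):
    """Estimate the worst-case calibration gap over groups and prediction bins.

    preds  : array of predicted probabilities in [0, 1]
    labels : array of binary outcomes (0/1)
    groups : dict mapping a group name to a boolean membership mask
    """
    # Bucket predictions into n_bins equal-width bins over [0, 1].
    bins = np.clip((preds * n_bins).astype(int), 0, n_bins - 1)
    worst = 0.0
    for name, mask in groups.items():
        for b in range(n_bins):
            sel = mask & (bins == b)
            if sel.sum() < 30:  # skip bins too small to estimate reliably (assumed threshold)
                continue
            # Calibration gap: mean outcome vs. mean prediction on this group/bin.
            gap = abs(labels[sel].mean() - preds[sel].mean())
            worst = max(worst, gap)
    return worst

# Toy usage: a predictor that is well calibrated overall can still be
# badly miscalibrated on an identifiable subgroup.
rng = np.random.default_rng(0)
x = rng.uniform(size=10_000)
in_group = rng.uniform(size=10_000) < 0.3
true_p = np.where(in_group, 0.8 * x, x)   # subgroup has different outcome rates
labels = (rng.uniform(size=10_000) < true_p).astype(int)
preds = x                                 # predictor ignores group structure
print(multicalibration_violation(
    preds, labels,
    {"all": np.ones_like(in_group, dtype=bool), "group": in_group}))
```

In this toy example the gap on the subgroup exceeds the overall gap, illustrating why multicalibration audits every identifiable group rather than only the full population.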