Toward a taxonomy of trust for probabilistic machine learning

Tamara Broderick, Andrew Gelman, Rachael Meager, Anna L. Smith, Tian Zheng

Research output: Contribution to journal › Review article › peer-review


Abstract

Probabilistic machine learning increasingly informs critical decisions in medicine, economics, politics, and beyond. To aid the development of trust in these decisions, we develop a taxonomy delineating where trust in an analysis can break down: (i) in the translation of real-world goals to goals on a particular set of training data, (ii) in the translation of abstract goals on the training data to a concrete mathematical problem, (iii) in the use of an algorithm to solve the stated mathematical problem, and (iv) in the use of a particular code implementation of the chosen algorithm. We detail how trust can fail at each step and illustrate our taxonomy with two case studies. Finally, we describe a wide variety of methods that can be used to increase trust at each step of our taxonomy. The use of our taxonomy highlights not only steps where existing research on trust tends to concentrate but also steps where building trust is particularly challenging.
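The abstract's four-step breakdown can be read as a checklist over an analysis pipeline. Below is a minimal illustrative sketch in Python, not taken from the paper: the enum values and the `audit` helper are hypothetical names introduced here only to make the structure of the taxonomy concrete.

```python
# Illustrative sketch (not from the paper): the four steps of the trust
# taxonomy encoded as a checklist an analyst could walk through for a
# given probabilistic ML analysis. All names are hypothetical.
from enum import Enum

class TrustStep(Enum):
    REAL_WORLD_TO_DATA = "translate real-world goals to goals on training data"
    DATA_GOALS_TO_MATH = "translate abstract data goals to a mathematical problem"
    MATH_TO_ALGORITHM = "solve the stated mathematical problem with an algorithm"
    ALGORITHM_TO_CODE = "implement the chosen algorithm in code"

def audit(checks: dict[TrustStep, bool]) -> list[TrustStep]:
    """Return the taxonomy steps where trust has not yet been established."""
    return [step for step in TrustStep if not checks.get(step, False)]

# Example: an analysis whose algorithm and code have been vetted, but
# whose problem formulation has not been checked against real-world goals.
remaining = audit({
    TrustStep.MATH_TO_ALGORITHM: True,
    TrustStep.ALGORITHM_TO_CODE: True,
})
for step in remaining:
    print(f"Trust not yet established: {step.value}")
```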

Original language: English
Article number: eabn3999
Journal: Science Advances
Volume: 9
Issue number: 7
DOI: 10.1126/sciadv.abn3999
State: Published - Feb 2023

Bibliographical note

Publisher Copyright:
© 2023 The Authors.

ASJC Scopus subject areas

  • General

