Active Project

Closing the Gap Between Trustworthy and Trusted AI

  • Led by The Trust Collaboratory
  • Team
    • Gil Eyal, Columbia University
    • Shalmali Joshi, Columbia University
    • Chris H. Wiggins, Columbia University
    • Cristian Capotescu, Columbia University
    • Anna Thieser, Columbia University
    • Byungkyu Lee, New York University
    • Simone Zhang, New York University
  • In Collaboration with the Data & Society AIM Lab, New York University
  • Funded by
    • Interdisciplinary Seed Grant, Columbia University
  • Learn More: trustcollaboratory.org

The spread of machine learning algorithms has sparked significant scholarly and public attention, as well as concern about problems such as bias, hallucinations, and misinformation.

The main response from the computer science community and regulatory agencies has been to try to develop “trustworthy AI” that exhibits desirable properties such as accuracy, accountability, fairness, transparency, and explainability.

Complementary to these efforts is an urgent need for well-designed empirical research that distinguishes between three analytically separate yet practically interlinked topics: trustworthy AI, trust in AI, and the consequences of AI for trust in institutions. This theoretical intervention is motivated by a fundamental misalignment between the inherently probabilistic logic of today’s AI systems and the non-probabilistic logic of trusting as an ordinary human activity.

Closing the gap between trustworthy and trusted AI requires empirical research on the factors that shape what, when, and how people trust AI and/or human experts when they encounter or use AI in naturalistic settings. To achieve this goal, this project will assemble a working group of computer and data scientists, social scientists, and humanists, as well as medical and legal systems researchers from Columbia, NYU, and the Data & Society AIM Lab.

The working group will serve as a hub to jump-start conversations on trust in AI and the consequences of AI for trust in institutions, design pilot research, explore the innovative methodologies this research requires, and develop actionable insights to ensure that its findings inform the development of trustworthy AI.