Closing the Gap Between Trustworthy and Trusted AI
- Led by The Trust Collaboratory
- In Collaboration with the Data & Society AIM Lab and New York University
- Funded by an Interdisciplinary Seed Grant from Columbia University
- Learn more: trustcollaboratory.org
The main response from the computer science community and regulatory agencies has been to try to develop “trustworthy AI” that exhibits desirable properties such as accuracy, accountability, fairness, transparency, and explainability.
Complementary to these efforts is an urgent need for well-designed empirical research that distinguishes between three analytically separate yet practically interlinked topics: trustworthy AI, trust in AI, and the consequences of AI for trust in institutions. This theoretical intervention is motivated by a fundamental misalignment between the inherently probabilistic logic of today’s AI systems and the non-probabilistic logic of trusting as an ordinary human activity.
Closing the gap between trustworthy and trusted AI requires empirical research on the factors that shape what, when, and how people trust AI and/or human experts when encountering or using AI in naturalistic settings. To achieve this goal, the project will assemble a working group of computer/data scientists, social scientists, and humanists, as well as medical and legal systems researchers from Columbia, NYU, and the Data & Society AIM Lab.
The working group will serve as a hub to jumpstart conversations on trust in AI and the consequences of AI for trust in institutions, design pilot studies, explore the innovative methodologies this research requires, and develop actionable insights to ensure that the findings inform the development of trustworthy AI.
Related Projects
- Covid-19 and Trust in Science: Documenting the experiences of Post-Covid Syndrome patients in the United States, Brazil, and China. Funded by Meta.
- Listening Tables: Creating spaces on Columbia University's campus to navigate conflicts with mutual respect, empathy, and a commitment to rebuilding trust.
- Trust in Autonomous Labs: Exploring the implications of autonomous labs for knowledge and society.
- Criminal Legal Algorithms, Technology, and Expertise: Investigating how carceral algorithms destabilize work practices, legal frameworks, and the legitimacy of expert authority.