Trust in Autonomous Labs - Incite at Columbia University
- Led by The Trust Collaboratory
- Learn more at trustcollaboratory.org
Autonomous labs are research systems supported by computational tools designed and programmed for precision, accuracy, and resilience. Because they enable computational exploration of chemical space to design new materials, autonomous labs are closely tied to rapid advances in algorithmic efficiency.
This project explores the implications of autonomous labs for knowledge and society. More specifically, it examines how trust is constructed among scientists working in chemical and materials science research, bioengineering, bottom-up synthetic biology, and precision oncology.
Trust in Autonomous Labs also produces empirical evidence of the discourses, practices, and ethical principles guiding knowledge production in autonomous labs, and explores the pathways experts adopt to build broader societal trust in these labs among researchers from multiple fields, technology developers, regulators, policy-makers, and the wider public.
Related Projects
- Criminal Legal Algorithms, Technology, and Expertise: Investigating how carceral algorithms destabilize work practices, legal frameworks, and the legitimacy of expert authority.
- Listening Tables: Creating spaces on Columbia University's campus to navigate conflicts with mutual respect, empathy, and a commitment to rebuilding trust.
- Closing the Gap Between Trustworthy and Trusted AI: Jumpstarting conversations about trust in AI and its impact on trust in institutions. Funded by Columbia University.
- Covid-19 and Trust in Science: Documenting the experiences of Post-Covid Syndrome patients in the United States, Brazil, and China. Funded by Meta.