Episode: Trusting Our Machines — Neera Jain
Episode pub date: 2019-04-02
Might enabling computational aids to “self-correct” when they’re out of sync with people be a path toward their exhibiting recognizably intelligent behavior? In episode 46, Neera Jain from Purdue University discusses her experiments in monitoring our trust in AI’s abilities, with an eye toward machines that drive us more safely, care for our grandparents, and do work that’s just too dangerous for humans. Her article “Computational Modeling of the Dynamics of Human Trust During Human–Machine Interactions” was published on October 23, 2018 in IEEE Transactions on Human-Machine Systems and was co-authored with Wan-Lin Hu, Kumar Akash, and Tahira Reid.
Websites and other resources
“The robot trust tightrope”
The Jain Lab
“A Classification Model for Sensing Human Trust in Machines Using EEG and GSR”
Patrons of Parsing Science gain exclusive access to bonus clips from all our episodes and can also download mp3s of every individual episode.
Support us for as little as $1 per month at Patreon. Cancel anytime.
Please note that we aren’t a tax-exempt organization, so unfortunately gifts aren’t tax deductible.
Hosts / Producers
Ryan Watkins & Doug Leigh
Music
“What’s The Angle?” by Shane Ivers