Uncanny Valley | WIRED podcast

The Blurred Reality of Human-Washing


Voice assistants have become a constant presence in our lives. Maybe you talk to Alexa or Gemini or Siri to ask a question or to perform a task. Maybe you have to do a little back and forth with a voice bot whenever you call your pharmacy, or when you book a service appointment at your car dealership. You may even get frustrated and start pleading with the robot on the other end of the line to connect you with a real human.


That’s the catch, though: These voice bots are starting to sound a lot more like actual humans, with emotion in their voices, little tics and giggles between phrases, and the occasional flirty aside. Today’s voice-powered chatbots are blurring the line between what’s real and what’s not, which prompts a complicated ethical question: Can you trust a bot that insists it’s actually human?


This week, Lauren Goode tells us about her recent news story on a bot that was easily tricked into lying and saying it was a human. And WIRED senior writer Paresh Dave tells us how AI watchdogs and government regulators are trying to prevent natural-sounding chatbots from misrepresenting themselves.


Show Notes:

Read more about the Bland AI chatbot, which lied and said it was human. Read Will Knight’s story about researchers’ warnings of the manipulative power of emotionally expressive chatbots.


Recommendations:

Lauren recommends The Bee Sting by Paul Murray. (Again.)

Paresh recommends subscribing to a great local journalism newsletter or Substack to stay informed on important local issues.

Mike recommends Winter Journal, a memoir by Paul Auster.


Paresh Dave can be found on social media @peard33. Lauren Goode is @LaurenGoode. Michael Calore is @snackfight. Bling the main hotline at @GadgetLab. The show is produced by Boone Ashworth (@booneashworth). Our theme music is by Solar Keys.
