
We've spent decades teaching ourselves to communicate with computers via text and clicks. Now, computers are learning to perceive the world like us: through sight and sound. What happens when software needs to sense, interpret, and act in real-time using voice and vision?
This week, Andrew sits down with Russ d'Sa, Co-founder and CEO of LiveKit, whose technology provides the infrastructure that lets machines interact through real-time voice and vision, powering everything from ChatGPT to critical 911 responses.
Explore the shift from text-based protocols to rich, real-time data streams. Russ discusses LiveKit's role in this evolution, the profound implications of AI gaining sensory input, the trajectory from co-pilots to agents, and the unique hurdles engineers face when building for a world beyond simple text transfers.
Check out:
- AI Code Reviews: An Engineering Leader’s Survival Guide
- Survey: Discover Your AI Collaboration Style
Follow the hosts:
Follow today's guest:
- Website: livekit.io
- GitHub: github.com/livekit
- X: x.com/livekit
- Russ: x.com/dsa
Referenced in today's show:
- The Rise of Slopsquatting: How AI Hallucinations Are Fueling a New Class of Supply Chain Attacks
- Has the VSCode C/C++ Extension been blocked?
- OpenAI tells judge it would buy Chrome from Google
- Chain-of-Vibes | Pete Hodgson
- Seattle crosswalk signals with deepfake Bezos audio may have been hacked with just a cellphone
Support the show:
- Subscribe to our Substack
- Leave us a review
- Subscribe on YouTube
- Follow us on Twitter or LinkedIn
Offers: