Towards Data Science podcast

94. Divya Siddarth - Are we thinking about AI wrong?

1:02:44

AI research is often framed as a kind of human-versus-machine rivalry that will inevitably lead to the defeat, and even wholesale replacement, of human beings by artificial superintelligences with their own sense of agency and their own goals.

Divya Siddarth disagrees with this framing. She argues that this perspective leads us to focus on applications of AI that are neither as profitable as they could be, nor safe enough to protect us from the potentially catastrophic consequences of dangerous AI systems in the long run. And she ought to know: Divya is an associate political economist and social technologist in the Office of the CTO at Microsoft.

She’s also spent a lot of time thinking about what governments can do, and are already doing, to shift the framing of AI away from centralized systems that compete directly with humans, and toward a more cooperative model that treats AI as a facilitation tool leveraged by human networks. Divya points to Taiwan as an experiment in digital democracy that’s doing just that.

More episodes of "Towards Data Science"

  • Towards Data Science podcast

    98. Mike Tung - Are knowledge graphs AI’s next big thing?

    48:56

    As impressive as they are, language models like GPT-3 and BERT all share the same problem: they’re trained on reams of internet data to imitate human writing. And human writing is often wrong, biased, or both, which means language models are trying to emulate an imperfect target. Language models often babble, or make up answers to questions they don’t understand, which can make them unreliable sources of truth. That’s why there’s been increased interest in alternative ways to retrieve information from large datasets, including knowledge graphs. Knowledge graphs encode entities like people, places and objects as nodes, which are connected to other entities via edges that specify the nature of the relationship between the two. For example, a knowledge graph might contain a node for Mark Zuckerberg, linked to another node for Facebook, via an edge indicating that Zuck is Facebook’s CEO. Both of these nodes might in turn be connected to dozens, or even thousands, of others, depending on the scale of the graph. Knowledge graphs are an exciting path ahead for AI capabilities, and the world’s largest knowledge graphs are built by a company called Diffbot, whose CEO Mike Tung joined me for this episode of the podcast to discuss where knowledge graphs can improve on more standard techniques, and why they might be a big part of the future of AI.

    ---
    Intro music by:
    ➞ Artist: Ron Gelinas
    ➞ Track Title: Daybreak Chill Blend (original mix)
    ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc
    ---

    Chapters:
    0:00 Intro
    1:30 The Diffbot dynamic
    3:40 Knowledge graphs
    7:50 Crawling the internet
    17:15 What makes this time special?
    24:40 Relation to neural networks
    29:30 Failure modes
    33:40 Sense of competition
    39:00 Knowledge graphs for discovery
    45:00 Consensus to find truth
    48:15 Wrap-up
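    As a rough illustration of the node-and-edge structure described above, here is a minimal sketch in Python. The class, relation names and example facts are assumptions made for illustration only; they are not Diffbot’s actual schema or API.

        # Minimal knowledge-graph sketch: entities are nodes, typed edges link them.
        from collections import defaultdict

        class KnowledgeGraph:
            def __init__(self):
                # Map each subject entity to a list of (relation, object) pairs.
                self.edges = defaultdict(list)

            def add_edge(self, subject, relation, obj):
                """Connect two entity nodes with a labelled relationship."""
                self.edges[subject].append((relation, obj))

            def neighbors(self, subject):
                """Return every (relation, object) pair attached to an entity."""
                return self.edges[subject]

        kg = KnowledgeGraph()
        kg.add_edge("Mark Zuckerberg", "is_ceo_of", "Facebook")    # illustrative relation name
        kg.add_edge("Facebook", "headquartered_in", "Menlo Park")  # illustrative fact

        print(kg.neighbors("Mark Zuckerberg"))  # [('is_ceo_of', 'Facebook')]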
  • Towards Data Science podcast

    97. Anthony Habayeb - The present and future of AI regulation

    49:31

    Corporate governance of AI doesn’t sound like a sexy topic, but it’s rapidly becoming one of the most important challenges for big companies that rely on machine learning models to deliver value for their customers. More and more, they’re expected to develop and implement governance strategies to reduce the incidence of bias and increase the transparency of their AI systems and development processes. Those expectations have historically come from consumers, but governments are starting to impose hard requirements, too. So for today’s episode, I spoke to Anthony Habayeb, founder and CEO of Monitaur, a startup focused on helping businesses anticipate and comply with new and upcoming AI regulations and governance requirements. Anthony’s been watching the world of AI regulation very closely over the last several years, and was kind enough to share his insights on the current state of play and future direction of the field.

    ---
    Intro music:
    ➞ Artist: Ron Gelinas
    ➞ Track Title: Daybreak Chill Blend (original mix)
    ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc
    ---

    Chapters:
    - 0:00 Intro
    - 1:45 Anthony’s background
    - 6:20 Philosophies surrounding regulation
    - 14:50 The role of governments
    - 17:30 Understanding fairness
    - 25:35 AI’s PR problem
    - 35:20 Governments’ regulation
    - 42:25 Useful techniques for data science teams
    - 46:10 Future of AI governance
    - 49:20 Wrap-up
  • Towards Data Science podcast

    96. Jan Leike - AI alignment at OpenAI

    1:05:17

    The more powerful our AIs become, the more we’ll have to ensure that they’re doing exactly what we want. If we don’t, we risk building AIs that pursue creative solutions with side effects ranging from undesirable to outright dangerous. Even a slight misalignment between the motives of a sufficiently advanced AI and human values could be hazardous. That’s why leading AI labs like OpenAI are already investing significant resources into AI alignment research. Understanding that research is important if you want to understand where advanced AI systems might be headed, and what challenges we might encounter as AI capabilities continue to grow — and that’s what this episode of the podcast is all about. My guest today is Jan Leike, head of AI alignment at OpenAI, and an alumnus of DeepMind and the Future of Humanity Institute. As someone who works directly with some of the world’s largest AI systems (including OpenAI’s GPT-3), Jan has a unique and interesting perspective to offer both on the current challenges facing alignment researchers, and the most promising future directions the field might take.

    ---
    Intro music:
    ➞ Artist: Ron Gelinas
    ➞ Track Title: Daybreak Chill Blend (original mix)
    ➞ Link to Track: https://youtu.be/d8Y2sKIgFWc
    ---

    Chapters:
    0:00 Intro
    1:35 Jan’s background
    7:10 Timing of scalable solutions
    16:30 Recursive reward modeling
    24:30 Amplification of misalignment
    31:00 Community focus
    32:55 Wireheading
    41:30 Arguments against the democratization of AIs
    49:30 Differences between capabilities and alignment
    51:15 Research to focus on
    1:01:45 Formalizing an understanding of personal experience
    1:04:04 OpenAI hiring
    1:05:02 Wrap-up
  • Towards Data Science podcast

    95. Francesca Rossi - Thinking, fast and slow: AI edition

    46:42

    The recent success of large transformer models in AI raises new questions about the limits of current strategies: can we expect deep learning, reinforcement learning and other prosaic AI techniques to get us all the way to humanlike systems with general reasoning abilities? Some think so, and others disagree. One dissenting voice belongs to Francesca Rossi, a former professor of computer science, and now AI Ethics Global Leader at IBM. Much of Francesca’s research is focused on deriving insights from human cognition that might help AI systems generalize better. Francesca joined me for this episode of the podcast to discuss her research, her thinking, and her thinking about thinking.
  • Towards Data Science podcast

    93. 2021: A year in AI (so far) - Reviewing the biggest AI stories of 2021 with our friends at the Let’s Talk AI podcast

    43:31

    2020 was an incredible year for AI. We saw powerful hints of the potential of large language models for the first time thanks to OpenAI’s GPT-3, DeepMind used AI to solve one of the greatest open problems in molecular biology, and Boston Dynamics demonstrated their ability to blend AI and robotics in dramatic fashion. Progress in AI is accelerating exponentially, and though we’re just over halfway through 2021, this year is already turning into another one for the books. So we decided to partner with our friends over at Let’s Talk AI, a podcast co-hosted by Stanford PhD and former Googler Sharon Zhou, and Stanford PhD student Andrey Kurenkov, that covers current events in AI. This was a fun chat, and a format we’ll definitely be playing with more in the future :)
  • Towards Data Science podcast

    92. Daniel Filan - Peering into neural nets for AI safety

    1:06:02

    Many AI researchers think it’s going to be hard to design AI systems that continue to remain safe as AI capabilities increase. We’ve already seen on the podcast that the field of AI alignment has emerged to tackle this problem, but a related effort is also being directed at a separate dimension of the safety problem: AI interpretability. Our ability to interpret how AI systems process information and make decisions will likely become an important factor in ensuring the reliability of AIs in the future. And my guest for this episode of the podcast has focused his research on exactly that topic. Daniel Filan is an AI safety researcher at Berkeley, where he’s supervised by AI pioneer Stuart Russell. Daniel also runs AXRP, a podcast dedicated to technical AI alignment research.
  • Towards Data Science podcast

    91. Peter Gao - Self-driving cars: Past, present and future

    1:01:21

    Cruise is a self-driving car startup founded in 2013 — at a time when most people thought of self-driving cars as the stuff of science fiction. And yet, just three years later, the company was acquired by GM for over a billion dollars, having shown itself to be a genuine player in the race to make autonomous driving a reality. Along the way, the company has had to navigate and adapt to a rapidly changing technological landscape, mixing and matching old ideas from robotics and software engineering with cutting-edge techniques like deep learning. My guest for this episode of the podcast was one of Cruise’s earliest employees. Peter Gao is a machine learning specialist with deep experience in the self-driving car industry, and is also the co-founder of Aquarium Learning, a Y Combinator-backed startup that specializes in improving the performance of machine learning models by fixing problems with the data they’re trained on. We discussed Peter’s experiences in the self-driving car industry, including the innovations that have spun out of self-driving car tech, as well as some of the technical and ethical challenges that need to be overcome for self-driving cars to reach mainstream use around the world.
  • Towards Data Science podcast

    90. Jeffrey Ding - China’s AI ambitions and why they matter

    49:07

    There are a lot of reasons to pay attention to China’s AI initiatives. Some are purely technological: Chinese companies are producing increasingly high-quality AI research, and they’re poised to become even more important players in AI over the next few years. For example, Huawei recently put together their own version of OpenAI’s massive GPT-3 language model — a feat that required massive-scale compute, pushed the limits of current systems, and called for deep engineering and technical know-how. But China’s AI ambitions are also important geopolitically. In order to build powerful AI systems, you need a lot of compute power. And in order to get that, you need a lot of computer chips, which are notoriously hard to manufacture. But most of the world’s computer chips are currently made in democratic Taiwan, which China claims as its own territory. You can see how quickly this kind of thing can lead to international tension. Still, the story of US-China AI isn’t just one of competition and decoupling, but also of cooperation — or at least, that’s the case made by my guest today, China AI expert and Stanford researcher Jeffrey Ding. In addition to studying the Chinese AI ecosystem as part of his day job, Jeff publishes the very popular ChinAI newsletter, which offers translations and analyses of Chinese-language articles about AI. Jeff acknowledges the competitive dynamics of AI research, but argues that focusing only on controversial applications of AI — like facial recognition and military applications — causes us to ignore or downplay areas where real collaboration can happen, like language translation.
  • Towards Data Science podcast

    89. Pointing AI in the right direction - A cross-over episode with the Banana Data podcast!

    36:35

    This special episode of the Towards Data Science podcast is a cross-over with our friends over at the Banana Data podcast. We’ll be zooming out and talking about some of the most important current challenges AI creates for humanity, and some of the likely future directions the technology might take.
