The Radical AI Podcast

What Causes AI to Fail? with the AI Today Podcast

53:15

What causes AI to fail from a business/industry perspective and beyond? What metrics are used to measure and indicate failure? And how can we improve the field of AI by learning from these failures?

To answer these questions we interview Kathleen Walch and Ron Schmelzer of Cognilytica’s AI Today podcast.

Ron and Kathleen are both principal analysts, managing partners, and founders of Cognilytica, a research, advisory, and education firm focused on advanced big data analytics, cognitive technologies, and evolving areas of artificial intelligence and machine learning.

Full show notes for this episode can be found at Radicalai.org. 

If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod

More episodes of "The Radical AI Podcast"

  • Decolonial AI 101 with Raziye Buse Çetin

    41:53

    What is Decolonial AI? How can we apply a postcolonial lens to AI design? In this episode we interview Raziye Buse Çetin about Colonial, Decolonial, and Postcolonial AI -- and the newly released Decolonial AI Manyfesto. Buse is an AI policy and ethics researcher and consultant. Her work revolves around the ethics, impact, and governance of AI systems. She combines her lived experience with her interest in postcolonial studies, intersectional feminism, and science and technology studies (STS) to develop critical thinking about AI technologies and the narratives around them. Full show notes for this episode can be found at Radicalai.org. If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
  • Design Justice 101 with Sasha Costanza-Chock

    55:08

    What is Design Justice? How can we employ it to disrupt power systems supporting the matrix of domination? In this episode, we interview Sasha Costanza-Chock about the 101 of Design Justice and how we can use it as a force for collective liberation. Sasha Costanza-Chock is a researcher and designer who works to support community-led processes that build shared power, dismantle the matrix of domination, and advance ecological survival. Sasha is the Director of Research & Design at the Algorithmic Justice League and is the author of Design Justice: Community-Led Practices to Build the Worlds We Need. Full show notes for this episode can be found at Radicalai.org.  If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod
  • Measurementality #6: Authentic Accountability for Successful AI with Yoav Schlesinger

    43:57

    In this 6th episode of Measurementality we'll be "identifying what counts in the algorithmic age" by analyzing how we can build more authentic systems of accountability for creating AI with Yoav Schlesinger. Yoav Schlesinger is the Principal of Ethical AI Practice at Salesforce.
  • Predicting Mental Illness Through AI with Stevie Chancellor

    55:54

    How is AI used to predict mental illness? What are the benefits and challenges to its use? In this episode we interview Stevie Chancellor about AI, mental health, and the benefits and challenges of machine learning systems that are used to predict mental illness.  Stevie Chancellor is an Assistant Professor in the Department of Computer Science & Engineering at the University of Minnesota - Twin Cities. Her research combines human-computer interaction and machine learning approaches to build and critically evaluate machine learning systems for pressing social issues, focusing on high-risk health behaviors in online communities. Full show notes for this episode can be found at Radicalai.org.  If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on twitter at twitter.com/radicalaipod
  • Measurementality #5: Intergenerational Collaboration with Sinead Bovell

    35:48

    In this 5th episode of Measurementality we'll be "identifying what counts in the algorithmic age" by analyzing how existing metrics regarding youth and intergenerational collaboration are being globally measured today.  To discuss this topic we interview Sinead Bovell. Sinead is the founder and CEO of WAYE, a tech education company that prepares the next generation of leaders for a future with advanced technologies, with a focus on non-traditional and minority markets.  
  • Indigenous AI 101 with Jason Edward Lewis

    1:04:35

    What is Indigenous AI, and how might it drive our technology design and implementation? To answer this question and more, in this episode we interview Jason Edward Lewis about Indigenous AI Protocols and a paper he co-authored entitled "Position Paper on Indigenous Protocol and Artificial Intelligence." Jason Edward Lewis is a Hawaiian and Samoan digital media theorist, poet, and software designer. Jason also founded the Obx Laboratory for Experimental Media and is the University Research Chair in Computational Media and the Indigenous Future Imaginary as well as a Professor of Computation Arts at Concordia University, Montreal. Jason directs the Initiative for Indigenous Futures, and co-directs the Indigenous Futures Research Centre, the Indigenous Protocol and AI Workshops, the Aboriginal Territories in Cyberspace research network, and the Skins Workshops on Aboriginal Storytelling and Video Game Design. Full show notes for this episode can be found at Radicalai.org. If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
  • Casteist Technology and Digital Brahminism with Thenmozhi Soundararajan and Seema Hari

    49:29

    What is Casteist Technology and Digital Brahminism, and how can we best engage to enact change? Join 2021 Radical AI Intern Nikhil Dharmaraj as he interviews Thenmozhi Soundararajan and Seema Hari about technology, casteism, and surveillance. Thenmozhi Soundararajan is a Dalit rights artist, technologist, and theorist. Currently, Thenmozhi is the co-founder and Executive Director of Equality Labs, a Dalit civil rights organization that uses community research, cultural and political organizing, popular education, and digital security to build power to end caste apartheid, white supremacy, gender-based violence, and religious intolerance. Seema Hari is an engineer and an anti-caste and anti-colorism activist. Full show notes and guest bios for this episode can be found at Radicalai.org.
  • Measurementality #4: What are we Optimizing for? with Laura Musikanski and Jonathan Stray

    46:14

    In this 4th episode of Measurementality we'll be "identifying what counts in the algorithmic age" by analyzing how existing metrics regarding human wellbeing and environmental flourishing are being globally measured today. Laura Musikanski is the Executive Director of The Happiness Alliance and Chair of IEEE 7010-2020. Jonathan Stray is a Visiting Scholar at the Center for Human-Compatible AI and a former research partner at The Partnership on AI, as well as the author of "Aligning AI to Human Values Means Picking the Right Metrics."
  • Feminist AI 101 with Eleanor Drage and Kerry Mackereth

    42:07

    What is Feminist AI, and how and why should we design and implement it? To answer this question and more, in this episode we interview Eleanor Drage and Kerry Mackereth about the ins and outs of Feminist AI. Eleanor and Kerry are both postdoctoral researchers working on the "Gender and Technology" research project at the University of Cambridge Centre for Gender Studies, in association with the Leverhulme Centre for the Future of Intelligence. In this project, they are working to provide the AI sector with practical tools to create more equitable AI informed by intersectional feminist knowledge. Full show notes and guest bios for this episode can be found at Radicalai.org. If you enjoy this episode please make sure to subscribe, submit a rating and review, and connect with us on Twitter at twitter.com/radicalaipod
