Machine Learning Street Talk podcast

This is the audio podcast for the ML Street Talk YouTube channel at https://www.youtube.com/c/MachineLearningStreetTalk Thanks for checking us out! We think that scientists and engineers are the heroes of our generation. Each week we have a hard-hitting discussion with the leading thinkers in the AI space. Street Talk is unabashedly technical and non-commercial, so you will hear no annoying pitches. Corporate- and MBA-speak is banned on Street Talk; phrases like "data product" and "digital transformation" will never be heard, we promise :) Hosted by Dr. Tim Scarfe, Dr. Yannic Kilcher and Dr. Keith Duggar.

61 episodes

  • #60 Geometric Deep Learning Blueprint (Special Edition)

    3:33:22

    The last decade has witnessed an experimental revolution in data science and machine learning, epitomised by deep learning methods. Many high-dimensional learning tasks previously thought to be beyond reach -- such as computer vision, playing Go, or protein folding -- are in fact tractable given enough computational horsepower. Remarkably, the essence of deep learning is built from two simple algorithmic principles: first, the notion of representation or feature learning, and second, learning by local gradient-descent-type methods, typically implemented as backpropagation. While learning generic functions in high dimensions is a cursed estimation problem, most tasks of interest are not uniform and have strong repeating patterns as a result of the low-dimensionality and structure of the physical world. Geometric Deep Learning unifies a broad class of ML problems from the perspectives of symmetry and invariance. These principles not only underlie the breakthrough performance of convolutional neural networks and the recent success of graph neural networks, but also provide a principled way to construct new types of problem-specific inductive biases. This week we spoke with Professor Michael Bronstein (head of graph ML at Twitter), Dr. Petar Veličković (Senior Research Scientist at DeepMind), Dr. Taco Cohen and Prof. Joan Bruna about their new proto-book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges. See the table of contents for this (long) show at https://youtu.be/bIZB1hIJ4u8
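
    To make the symmetry-and-invariance idea concrete, here is a minimal Deep Sets-style sketch (plain numpy; all names are ours, a toy illustration rather than code from the proto-book). A function of the form rho(sum_i phi(x_i)) is invariant to permutations of its inputs by construction, the same blueprint that yields convolutions on grids and message passing on graphs.

        import numpy as np

        rng = np.random.default_rng(0)
        W_phi = rng.normal(size=(3, 8))   # per-element feature map phi
        W_rho = rng.normal(size=(8, 1))   # readout rho applied to the pooled feature

        def set_function(X):
            """X has one row per set element; summing over rows makes the
            output independent of the row order."""
            H = np.tanh(X @ W_phi)                  # phi applied element-wise
            return np.tanh(H.sum(axis=0) @ W_rho)   # rho after invariant pooling

        X = rng.normal(size=(5, 3))
        perm = rng.permutation(5)
        # invariance holds (up to float rounding) for any reordering of the set
        assert np.allclose(set_function(X), set_function(X[perm]))
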
  • #59 - Jeff Hawkins (Thousand Brains Theory)

    2:34:51

    The ultimate goal of neuroscience is to learn how the human brain gives rise to human intelligence and what it means to be intelligent. Understanding how the brain works is considered one of humanity’s greatest challenges. Jeff Hawkins thinks that the reality we perceive is a kind of simulation, a hallucination, a confabulation. He thinks that our brains build models of reality from thousands of information streams originating from the sensors in our body. Critically, Hawkins doesn’t think there is just one model, but rather thousands. Jeff has just released his new book, A Thousand Brains: A New Theory of Intelligence. It’s an inspiring and well-written book, and I hope that after watching this show you will be inspired to read it too. https://numenta.com/a-thousand-brains-by-jeff-hawkins/ https://numenta.com/blog/2019/01/16/the-thousand-brains-theory-of-intelligence/ Panel: Dr. Keith Duggar https://twitter.com/DoctorDuggar Connor Leahy https://twitter.com/npcollapse
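
    As a loose toy illustration of the voting idea (our own construction, not from the book or Numenta's code): many independent "column" models each guess the object from their own noisy sensory stream, and the final percept is the consensus of their votes.

        import random
        from collections import Counter

        OBJECTS = ["cup", "pen", "phone"]

        def column_vote(true_object, noise=0.3):
            """One cortical-column-like model: usually right, sometimes fooled."""
            return true_object if random.random() > noise else random.choice(OBJECTS)

        # thousands of models, each voting from its own stream
        votes = Counter(column_vote("cup") for _ in range(1000))
        print(votes.most_common(1))  # the consensus is robust even though each model is noisy
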
  • #58 Dr. Ben Goertzel - Artificial General Intelligence

    2:28:14

    The field of Artificial Intelligence was founded in the mid-1950s with the aim of constructing "thinking machines" - that is to say, computer systems with human-like general intelligence. Think of humanoid robots that not only look but act and think with intelligence equal to, and ultimately greater than, that of human beings. But in the intervening years, the field has drifted far from its ambitious old-fashioned roots. Dr. Ben Goertzel is an artificial intelligence researcher and the CEO and founder of SingularityNET, a project combining artificial intelligence and blockchain to democratize access to artificial intelligence. Ben seeks to fulfil the original ambitions of the field. Ben graduated with a PhD in Mathematics from Temple University in 1990. Over many decades, Ben’s approach to AGI has been inspired by many disciplines, in particular by human cognitive psychology and computer science. To date, Ben’s work has been mostly theoretically driven. Ben thinks that most of the deep learning approaches to AGI today try to model the brain; they may have a loose analogy to human neuroscience, but they have not tried to derive the details of an AGI architecture from an overall conception of what a mind is. Ben thinks that what matters for creating human-level (or greater) intelligence is having the right information processing architecture, not the underlying mechanics via which the architecture is implemented. Ben thinks that there is a certain set of key cognitive processes and interactions that AGI systems must implement explicitly, such as working and long-term memory, deliberative and reactive processing, and perception. In his view, biological systems tend to be messy, complex and integrative; searching for a single "algorithm of general intelligence" is an inappropriate attempt to project the aesthetics of physics or theoretical computer science onto a qualitatively different domain. The TOC is in the YouTube show description: https://www.youtube.com/watch?v=sw8IE3MX1SY Panel: Dr. Tim Scarfe, Dr. Yannic Kilcher, Dr. Keith Duggar. Artificial General Intelligence: Concept, State of the Art, and Future Prospects - https://sciendo.com/abstract/journals... The General Theory of General Intelligence: A Pragmatic Patternist Perspective - https://arxiv.org/abs/2103.15100
  • #57 - Prof. Melanie Mitchell - Why AI is harder than we think

    2:31:21

    Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment, and periods of disappointment, loss of confidence, and reduced funding. Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. Professor Melanie Mitchell thinks one reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. YouTube video: https://www.youtube.com/watch?v=A8m1Oqz2HKc Main show kick-off [00:26:51] Panel: Dr. Tim Scarfe, Dr. Keith Duggar, Letitia Parcalabescu (https://www.youtube.com/c/AICoffeeBreak/)
  • #56 - Dr. Walid Saba, Gadi Singer, Prof. J. Mark Bishop (Panel discussion)

    1:11:17

    It has been over three decades since the statistical revolution took AI by storm, and over two decades since deep learning (DL) helped usher in the latest resurgence of artificial intelligence (AI). However, the disappointing progress in conversational agents, NLU, and self-driving cars has made it clear that these empirical, data-driven methods have not lived up to their promise. DARPA has suggested that it is time for a third wave in AI, one that would be characterized by hybrid models - models that combine knowledge-based approaches with data-driven machine learning techniques. Joining us on this panel discussion are polymath and linguist Walid Saba, Co-founder of ONTOLOGIK.AI; Gadi Singer, VP & Director, Cognitive Computing Research, Intel Labs; and J. Mark Bishop, Professor of Cognitive Computing (Emeritus), Goldsmiths, University of London, and Scientific Adviser to FACT360. Moderated by Dr. Keith Duggar and Dr. Tim Scarfe. https://www.linkedin.com/in/gadi-singer/ https://www.linkedin.com/in/walidsaba/ https://www.linkedin.com/in/profjmarkbishop/ #machinelearning #artificialintelligence
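
    To give a flavour of what such a hybrid might look like, here is a minimal sketch (all names ours, a toy under our own assumptions rather than anything proposed on the panel): a data-driven scorer makes a statistical guess, while an explicit, auditable knowledge base can override it.

        def learned_scorer(email_text):
            """Stand-in for a trained classifier: crude keyword score for spam."""
            return 0.9 if "free money" in email_text else 0.1

        TRUSTED_SENDERS = {"alice@example.org"}  # explicit, human-readable knowledge

        def hybrid_classify(sender, email_text):
            if sender in TRUSTED_SENDERS:
                return "ham"  # the symbolic rule overrides the statistical guess
            return "spam" if learned_scorer(email_text) > 0.5 else "ham"

        print(hybrid_classify("bob@spam.biz", "free money inside"))       # spam
        print(hybrid_classify("alice@example.org", "free money inside"))  # ham, rule wins
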
  • #55 Self-Supervised Vision Models (Dr. Ishan Misra - FAIR)

    1:36:21

    Dr. Ishan Misra is a Research Scientist at Facebook AI Research, where he works on computer vision and machine learning. His main research interest is reducing the need for human supervision, and indeed human knowledge, in visual learning systems. He finished his PhD at the Robotics Institute at Carnegie Mellon and has done stints at Microsoft Research, INRIA and Yale. His bachelor's degree is in computer science, where he achieved the highest GPA in his cohort. Ishan is fast becoming a prolific scientist, already with more than 3000 citations under his belt, and has co-authored with Yann LeCun, the godfather of deep learning. Today, though, we will be focusing on an exciting cluster of recent papers on unsupervised representation learning for computer vision released by FAIR: DINO (Emerging Properties in Self-Supervised Vision Transformers), Barlow Twins (Self-Supervised Learning via Redundancy Reduction) and PAWS (Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples). All of these papers are hot off the press, officially released only in the last month or so. Many of you will remember PIRL (Self-Supervised Learning of Pretext-Invariant Representations), of which Ishan was the primary author in 2019.
    References:
    Shuffle and Learn - https://arxiv.org/abs/1603.08561
    DepthContrast - https://arxiv.org/abs/2101.02691
    DINO - https://arxiv.org/abs/2104.14294
    Barlow Twins - https://arxiv.org/abs/2103.03230
    SwAV - https://arxiv.org/abs/2006.09882
    PIRL - https://arxiv.org/abs/1912.01991
    AVID - https://arxiv.org/abs/2004.12943 (best paper candidate at CVPR'21, just announced over the weekend - http://cvpr2021.thecvf.com/node/290)
    Alexei (Alyosha) Efros - http://people.eecs.berkeley.edu/~efros/ and http://www.cs.cmu.edu/~tmalisie/projects/nips09/
    Exemplar networks - https://arxiv.org/abs/1406.6909
    The Bitter Lesson (Rich Sutton) - http://www.incompleteideas.net/IncIdeas/BitterLesson.html
    Machine Teaching: A New Paradigm for Building Machine Learning Systems - https://arxiv.org/abs/1707.06742
    POET - https://arxiv.org/pdf/1901.01753.pdf
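
    For a flavour of the redundancy-reduction idea, here is a compact sketch of the Barlow Twins objective (simplified from the paper's pseudocode; variable names ours): push the cross-correlation matrix of two augmented views' embeddings toward the identity, so matched dimensions agree (invariance) while distinct dimensions decorrelate (redundancy reduction).

        import torch

        def barlow_twins_loss(z1, z2, lambda_offdiag=5e-3):
            """z1, z2: (batch, dim) embeddings of two augmentations of the same images."""
            z1 = (z1 - z1.mean(0)) / z1.std(0)  # standardise each embedding dimension
            z2 = (z2 - z2.mean(0)) / z2.std(0)
            c = (z1.T @ z2) / z1.shape[0]       # cross-correlation matrix, (dim, dim)
            on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # want diagonal == 1
            off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # want off-diagonal == 0
            return on_diag + lambda_offdiag * off_diag

        loss = barlow_twins_loss(torch.randn(256, 128), torch.randn(256, 128))
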
  • #54 Gary Marcus and Luis Lamb - Neurosymbolic models

    2:24:12

    Professor Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. Gary said in his recent "next decade" paper that without us, or other creatures like us, the world would continue to exist, but it would not be described, distilled, or understood. Human lives are filled with abstraction and causal description, and this is immensely powerful. Francois Chollet said the other week that intelligence is literally sensitivity to abstract analogies, and that is all there is to it. It's almost as if one of the most important features of intelligence is the ability to abstract knowledge; this drives the generalisation which allows you to mine previous experience to make sense of many future novel situations. Also joining us today is Professor Luis Lamb, Secretary of Innovation for Science and Technology of the State of Rio Grande do Sul, Brazil. His research interests are machine learning and reasoning, neuro-symbolic computing, logic in computation and artificial intelligence, cognitive and neural computation, and AI ethics and social computing. Luis released his new paper, Neurosymbolic AI: The Third Wave, at the end of last year. It beautifully articulated the key ingredients needed in the next generation of AI systems, integrating type 1 and type 2 approaches to AI, and it summarises all of the achievements of the last 20 years of research. We cover a lot of ground in today's show: the limitations of deep learning, Rich Sutton's "bitter lesson" and "reward is enough", and the semantic foundation required for us to build robust AI.
  • #53 Quantum Natural Language Processing - Prof. Bob Coecke (Oxford)

    2:17:39

    Bob Coecke is a celebrated physicist; he has been a professor of physics and quantum computing at Oxford University for the last 20 years. He is particularly interested in structure, which is to say logic, order, and category theory. He is well known for work involving compositional distributional models of natural language meaning, and he is also fascinated with understanding how our brains work. Bob was recently appointed Chief Scientist at Cambridge Quantum Computing. Bob thinks that the way interactions between systems work in quantum mechanics carries over naturally to how word meanings interact in natural language. Bob argues that this interaction embodies the phenomenon of quantum teleportation. Bob invented the ZX-calculus, a graphical calculus for revealing the compositional structure inside quantum circuits - showing entanglement states and protocols in a visually succinct but logically complete way. Von Neumann himself didn't even like his own original symbolic formalism of quantum theory, despite it being widely used! We hope you enjoy this fascinating conversation, which might give you a lot of insight into natural language processing.
    Tim intro [00:00:00]
    The topological brain (post-record button skit) [00:13:22]
    Show kick-off [00:19:31]
    Bob introduction [00:22:37]
    Changing culture in universities [00:24:51]
    Machine learning is like electricity [00:31:50]
    NLP - what is Bob's quantum conception? [00:34:50]
    The missing text problem [00:52:59]
    Can statistical induction be trusted? [00:59:49]
    On pragmatism and hybrid systems [01:04:42]
    Parlour tricks, parsing and information flows [01:07:43]
    How much human input is required with Bob's method? [01:11:29]
    Reality, meaning, structure and language [01:14:42]
    Replacing complexity with quantum entanglement, emergent complexity [01:17:45]
    Loading quantum data requires machine learning [01:19:49]
    QC is a happy math coincidence for NLP [01:22:30]
    The Theory of English (ToE) [01:28:23]
    ... or can we learn the ToE? [01:29:56]
    How did diagrammatic quantum calculus come about? [01:31:04]
    The state of quantum computing today [01:37:49]
    NLP on QC might be doable even in the NISQ era [01:40:48]
    Hype and private investment are driving progress [01:48:34]
    Crypto discussion (moved to post-show) [01:50:38]
    Kilcher is in a startup (moved to post-show) [01:53:40]
    Debrief [01:55:26]
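
    As a toy sketch of the compositional distributional idea (plain numpy; the vectors and sizes are made up, and real DisCoCat pipelines derive the wiring from a grammar): a transitive verb is a tensor that contracts with its subject and object vectors to produce a sentence-meaning vector, exactly the kind of wire-plugging Bob draws diagrammatically.

        import numpy as np

        rng = np.random.default_rng(0)
        alice, bob = rng.normal(size=4), rng.normal(size=4)  # noun vectors
        loves = rng.normal(size=(4, 4, 4))                   # verb tensor: subject x sentence x object

        def sentence_meaning(subj, verb, obj):
            # contract the subject and object wires into the verb, DisCoCat style
            return np.einsum("i,isj,j->s", subj, verb, obj)

        print(sentence_meaning(alice, loves, bob))  # a vector in sentence space
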
  • #52 - Unadversarial Examples (Hadi Salman, MIT)

    1:48:16

    Performing reliably on unseen or shifting data distributions is a difficult challenge for modern vision systems; even slight corruptions or transformations of images are enough to slash the accuracy of state-of-the-art classifiers. When an adversary is allowed to modify an input image directly, models can be manipulated into predicting anything, even when there is no perceptible change - this is known as an adversarial example. The ideal definition of an adversarial example is when humans consistently say two pictures are the same but a machine disagrees. Hadi Salman, a PhD student at MIT (ex-Uber and Microsoft Research), started thinking about how adversarial robustness could be leveraged beyond security. He realised that the phenomenon of adversarial examples could actually be turned upside down to lead to more robust models instead of breaking them. Hadi utilized the brittleness of neural networks to design unadversarial examples, or robust objects, which are objects designed specifically to be robustly recognized by neural networks.
    Introduction [00:00:00]
    Dr. Kilcher's PhD hat [00:11:18]
    Main introduction [00:11:38]
    Hadi's introduction [00:14:43]
    More robust models == transfer better [00:46:41]
    Features, not bugs paper [00:49:13]
    Manifolds [00:55:51]
    Robustness and transferability [00:58:00]
    Do non-robust features generalize worse than robust ones? [00:59:52]
    The unreasonable predicament of entangled features [01:01:57]
    We can only find adversarial examples in the vicinity [01:09:30]
    Certifiability of models for robustness [01:13:55]
    Carlini is coming for you! And we are screwed [01:23:21]
    Distribution shift and corruptions are a bigger problem than adversarial examples [01:25:34]
    All roads lead to generalization [01:26:47]
    Unadversarial examples [01:27:26]
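
    A minimal PyTorch sketch of the idea (the model and sizes are toy stand-ins, not Hadi's setup): instead of perturbing an input to break a classifier, optimise the input perturbation to maximise the probability of the true class.

        import torch
        import torch.nn as nn

        model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
        x = torch.rand(1, 3, 32, 32)                     # the object's rendered image
        patch = torch.zeros_like(x, requires_grad=True)  # learnable "unadversarial" change
        target = torch.tensor([3])                       # the true class we want recognised

        opt = torch.optim.SGD([patch], lr=0.1)
        for _ in range(100):
            loss = nn.functional.cross_entropy(model(x + patch), target)
            opt.zero_grad()
            loss.backward()
            opt.step()                    # lower loss == higher true-class probability
            with torch.no_grad():
                patch.clamp_(-0.25, 0.25)  # keep the modification bounded
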
  • #51 Francois Chollet - Intelligence and Generalisation

    2:01:42

    In today's show we are joined by Francois Chollet. I have been inspired by Francois ever since I read his Deep Learning with Python book and started using the Keras library, which he invented many, many years ago. Francois has a clarity of thought that I've never seen in any other human being! He has extremely interesting views on intelligence as generalisation, abstraction and an information conversion ratio. He wrote On the Measure of Intelligence at the end of 2019 and it had a huge impact on my thinking. He thinks that NNs can only model continuous problems, which have a smooth learnable manifold, and that many "type 2" problems which involve reasoning and/or planning are not suitable for NNs. He thinks that many problems have type 1 and type 2 enmeshed together. He thinks that the future of AI must include program synthesis to allow us to generalise broadly from a few examples, but the search could be guided by neural networks because the search space is interpolative to some extent. https://youtu.be/J0p_thJJnoo Tim's Whimsical notes: https://whimsical.com/chollet-show-QQ2atZUoRR3yFDsxKVzCbj
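
    In the spirit of that argument, here is a tiny brute-force program-synthesis sketch (our own toy DSL; a neurally guided system would rank or prune these candidates rather than enumerate blindly): search compositions of primitives for a program consistent with a few input/output examples.

        from itertools import product

        PRIMITIVES = {"inc": lambda x: x + 1, "dbl": lambda x: 2 * x, "neg": lambda x: -x}
        EXAMPLES = [(1, 4), (3, 8)]  # target behaviour: f(x) = 2 * (x + 1)

        def run(program, x):
            for name in program:
                x = PRIMITIVES[name](x)
            return x

        def synthesise(examples, max_depth=3):
            for depth in range(1, max_depth + 1):  # shortest programs first
                for program in product(PRIMITIVES, repeat=depth):
                    if all(run(program, i) == o for i, o in examples):
                        return program
            return None

        print(synthesise(EXAMPLES))  # ('inc', 'dbl'), i.e. f(x) = 2 * (x + 1)
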
