A podcast about neuroscience, artificial intelligence, and science more broadly, run by a group of computational neuroscientists.
Models of the Mind: How physics, engineering and mathematics have shaped our understanding of the brain
1:18:47
Grace wrote a book! And she talked to Brain Inspired host Paul Middlebrooks about it. The book is about the many different ways mathematical methods have influenced neuroscience, from models of single cells all the way up to equations to explain behavior. You can learn more about the book and how to get it in ebook, audiobook, and hard cover worldwide by visiting tinyurl.com/h9dn4bw7. On this cross-posting of Brain Inspired, Grace talks about the book and the field of computational neuroscience more generally. Give it a listen and go check out other episodes of Brain Inspired for more great conversations.
E50: Brain Organoids
1:00:28
Most neuroscience research takes place in a full, live animal. But brain organoids are different. Brain organoids are three-dimensional blobs of brain grown from human stem cells and they offer novel access to the study of human brain development. On this episode we go beyond our computational comfort zone to talk about the history of stem cells, the potion of chemicals needed to get these little blobs to grow, and the extent to which they mimic features of the human brain when they do. We also discuss the promise of studying and treating disease through personalized organoids, and how this gets hard for higher level disorders like schizophrenia. Then we get into questions of embodiment and if giving these organoids more means to interact with the world would make them better models of the brain and of information processing. Finally we get to the ethics of it all, and find that bioethicists these days are actually chill AF. Throughout, we find out that Josh is not surprised by any of this, and we tackle the pressing moral issue of our time: is it OK to eat your thermostat?
E49: How Important is Learning?
1:06:42
The age-old debate of nature versus nurture is now being played out between artificial intelligence and neuroscience. The dominant approach in AI, machine learning, puts an emphasis on adapting processing to fit the data at hand. Animals, on the other hand, seem to have a lot of built-in structure and tendencies that mean they function well right out of the womb. So are most of our abilities the result of genetically-encoded instructions, honed over generations of evolution? Or are our interactions with the environment key? We discuss the research that has been done on human brain development to try to get at the answers to these questions. We talk about the compromise position that says animals may be "born to learn"---that is, innate tendencies help make sure the right training data is encountered and used efficiently during development. We also get into what all this means for AI and whether machine learning researchers should be learning less. Throughout, we ask if humans are special, argue that development can happen without learning, and discuss the special place of the octopus in the animal kingdom.
E48: Studying the Brain in Light of Evolution
59:58
The brain is the result of evolution. A lot of evolution. Most neuroscientists don't really think about this fact. Should we? On this episode we talk about two papers---one focused on brains and the other on AI---that argue that following evolution is the path to success. As part of this argument, they make the point that, in evolution, each stage along the way needs to be fully functional, which impacts the shape and role of the brain. As a result, the system is best thought of as a whole---not chunked into perception, cognition and action, as many psychologists and neuroscientists are wont to do. In discussing these arguments, we talk about the role of representations in intelligence, go through a bit of the evolution of the nervous system, and remind ourselves that evolution does not necessarily optimize. Throughout, we ask how this take on neuroscience impacts our own work and try to avoid saying "represents".
E47: Deep Learning to Understand the Brain
1:05:00
The recent advances in deep learning have done more than just make money for startups and tech companies. They've also infiltrated neuroscience! Deep neural networks---models originally inspired by the basics of the nervous system---are finding ever more applications in the quest to understand the brain. We talk about many of those uses in the episode. After first describing more traditional approaches to modeling behavior, we talk about how neuroscientists compare deep net models to real brains using both performance and neural activity. We then get into the attempts by the field of machine learning to understand their own models and how ML and neuroscience can share methods (and maybe certain cultural tendencies). Finally we talk about the use of deep nets to generate stimuli specifically tailored to drive real neurons to their extremes. Throughout, we notice how deep learning is "complicating the narrative", ask "are deep nets normative models?", and struggle to talk about a topic we actually know about.
E46: What We Learn from Model Organisms
1:01:05
From worms to flies, and mice to macaques, neuroscientists study a range (though not a very large one...) of animals when they study "the brain". On this episode we ask a lot of questions about these model organisms, such as: how are they chosen? should we use more diverse ones? and what is a model organism actually a model of? We also talk about how the development of genetic tools for certain animals, like mice, has made them the dominant lab animal and the difficulty of bringing a new model species onto the scene. We also get into the special role that simple organisms, like C. elegans, play and how we can extrapolate findings from these small animals to more complex ones. Throughout, special guest Adam Calhoun joins us in asking "What even is the purpose of neuroscience???" and discussing the extent to which mice do or do not see like humans.
E45: How Working Memory Works
59:20
Working memory is the ability to keep something in mind several seconds after it's gone. Neurons don't tend to keep firing when their input is removed, so how does the brain hold on to information when it's out of sight? Scientists have been probing this question for decades. On this episode, we talk about how working memory is studied and the traditional view of how it works, which includes elevated persistent firing rates in neurons in the prefrontal cortex. The traditional view, however, is being challenged in many ways at the moment. As evidence of that we read a "dueling" paper on the topic, which argues for a view that incorporates bursts of firing, oscillations, and synaptic changes. In addition to covering the experimental evidence for different views, we also talk about the many computational models of working memory that have been developed over the years. Throughout, we talk about energy efficiency, the difference between maintenance and manipulation, and the effects of putting scientific disagreements in writing. We also admit to not reading *any* primary sources.
E44: Can a Biologist Fix a Radio?
1:05:15
In 2002, cancer biologist Yuri Lazebnik raised and addressed the semi-facetious question "Can a biologist fix a radio?" in a short paper. The paper is a critique of current practices in the biological sciences, claiming they are inefficient at getting to truth. We discuss the stages of research progress in biological science Yuri describes, including the "paradoxical" stage where more facts lead to less understanding. We then dive into his view of how a biologist would approach a radio: describing what its parts look like, lesioning some of them, and making claims about what's necessary for the radio to work as a result. We reflect on how this framing of common biological research practices impacts our view of them and highlights how hard it is to understand complex systems. We talk about the (in)adequacy of Yuri's proposed solution to the problem (that biologists need to embrace formal, quantitative language) and the difference between engineering and science. Finally, we discuss a new take on this paper that goes through the effort of actually applying neuroscience methods to a microprocessor and the conclusions we drew from it. Throughout we bring in specific examples from neuroscience we find relevant and Josh dismisses almost everything as "satirical".
E43: What Are Glia Up to?
1:05:07
Despite the fact that the brain is full of them, glial cells don't get much attention from neuroscientists. The traditional view of these non-neurons is that they are supportive cells---there to silently help neurons do what they need to do. On this episode we start by describing this traditional view, including types of glial cells and their roles. Then we get into the more interesting stuff. How do glia communicate with each other and with neurons? Turns out there are many chemical messages that get sent between these different cell types, including via the energy molecule ATP! We then talk about the ways in which these messages impact neurons and reasons why the role of glia may be hard for neuroscientists to see. In particular, glia seem to have a lot to say about the birth and control of synapses, making them important for scientists interested in learning. Finally we cover some of the diseases related to glia, such as multiple sclerosis and (surprisingly) depression. Throughout, we ask if glia are important for computation, and relatedly, how the hell do we define computation? Also Grace is weirded out that glia are everywhere but nobody is talking about (or drawing) them.
E42: Learning Rules, Biological vs. Artificial
1:02:30
For decades, neuroscientists have explored the ways in which neurons update and control the strength of their connections. For slightly fewer decades, machine learning researchers have been developing ways to train the connections between artificial neurons in their networks. The former endeavour shows us what happens in the brain and the latter shows us what's actually needed to make a system that works. Unfortunately, these two research directions have not settled on the same rules of learning. In this episode we talk about the attempts to make artificial learning rules more biologically plausible in order to understand how the brain is capable of such powerful learning. In particular, we focus on different models of biologically-plausible backpropagation---the standard method of training artificial neural networks. We start by explaining both backpropagation and biological learning rules (such as spike-timing-dependent plasticity) and the ways in which the two differ. We then describe four different models that tackle how backpropagation could be done by the brain. Throughout, we talk dendrites and cell types and the role of other biological bits and bobs, and ask "should we actually expect to see backprop in the brain?". We end by discussing which of the four options we liked most and why!