
Brain Inspired

Paul Middlebrooks
Latest episode

Available Episodes

5 of 99
  • BI 209 Aran Nayebi: The NeuroAI Turing Test
    Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership. Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released.
    Aran Nayebi is an Assistant Professor at Carnegie Mellon University in the Machine Learning Department. He was there in the early days of using convolutional neural networks to explain how our brains perform object recognition, and since then he's had a whirlwind trajectory through different AI architectures and algorithms and how they relate to biological architectures and algorithms, so we touch on some of what he has studied in that regard. But he also recently started his own lab at CMU, and he plans to integrate much of what he has learned to eventually develop autonomous agents that perform the tasks we want them to perform, in ways that are at least similar to how our brains perform them. So we discuss his ongoing plans to reverse-engineer our intelligence to build useful cognitive architectures of that sort. We also discuss Aran's suggestion that, at least in the NeuroAI world, the Turing test needs to be updated to include some measure of the similarity of the internal representations the models use to achieve the various tasks they perform. By internal representations, as we discuss, he means the population-level activity in the neural networks, not the mental representations that philosophy of mind often refers to, or other philosophical notions of the term representation.
    Aran's Website. Twitter: @ayan_nayebi.
    Related papers:
    Brain-model evaluations need the NeuroAI Turing Test.
    Barriers and pathways to human-AI alignment: a game-theoretic approach.
    0:00 - Intro
    5:24 - Background
    20:46 - Building embodied agents
    33:00 - Adaptability
    49:25 - Marr's levels
    54:12 - Sensorimotor loop and intrinsic goals
    1:00:05 - NeuroAI Turing Test
    1:18:18 - Representations
    1:28:18 - How to know what to measure
    1:32:56 - AI safety
    --------  
    1:43:59
  • BI 208 Gabriele Scheler: From Verbal Thought to Neuron Computation
    Gabriele Scheler co-founded the Carl Correns Foundation for Mathematical Biology. Carl Correns was her great-grandfather, one of the early pioneers of genetics. Gabriele is a computational neuroscientist whose goal is to build models of cellular computation, and much of her focus is on neurons. We discuss her theoretical work building a new kind of single-neuron model. She, like Dmitri Chklovskii a few episodes ago, believes we've been stuck with essentially the same family of neuron models for a long time, despite minor variations on those models. The model Gabriele is working on, for example, respects not only the computations going on externally, via spiking, which has been the only game in town forever, but also the computations going on within the cell itself. Gabriele is in line with previous guests like Randy Gallistel, David Glanzman, and Hessam Akhlaghpour, who argue that we need to pay attention to how neurons compute various things internally and how that affects our cognition. Gabriele also believes the new neuron model she's developing will improve AI, essentially simplifying the models drastically by providing them with smarter neurons. We also discuss the importance of neuromodulation, her interest in understanding how we think via our internal verbal monologue, her lifelong interest in language in general, what she thinks about LLMs, why she decided to start her own foundation to fund her science, and what that experience has been like so far. Gabriele has been working on these topics for many years, and as you'll hear in a moment, she was there when computational neuroscience was just starting to pop up in a few places, when it was a nascent field, unlike its current ubiquity in neuroscience.
    Gabriele's website. Carl Correns Foundation for Mathematical Biology. Neuro-AI spinoff.
    Related papers:
    Sketch of a novel approach to a neural model.
    Localist neural plasticity identified by mutual information.
    Related episodes:
    BI 199 Hessam Akhlaghpour: Natural Universal Computation
    BI 172 David Glanzman: Memory All The Way Down
    BI 126 Randy Gallistel: Where Is the Engram?
    0:00 - Intro
    4:41 - Gabriele's early interests in verbal thinking
    14:14 - What is thinking?
    24:04 - Starting one's own foundation
    58:18 - Building a new single neuron model
    1:19:25 - The right level of abstraction
    1:25:00 - How a new neuron would change AI
    --------  
    1:35:08
  • BI 207 Alison Preston: Schemas in our Brains and Minds
    The concept of a schema goes back at least to the philosopher Immanuel Kant in the 1700s, who used the term to refer to a kind of built-in mental framework for organizing sensory experience. But it was the psychologist Frederic Bartlett in the 1930s who used the term schema in a psychological sense, to explain how our memories are organized and how new information gets integrated into our memory. Fast forward another 100 years to today, and we have a podcast episode with my guest, Alison Preston, who runs the Preston Lab at the University of Texas at Austin. On this episode, we discuss her neuroscience research explaining how our brains might carry out the processing that fits with our modern conception of schemas, and how our brains do that in different ways as we develop from childhood to adulthood. I just said "our modern conception of schemas," but like everything else, there isn't complete consensus among scientists on exactly how to define schema. Ali has her own definition. She shares that, and how it differs from other conceptions commonly used. I like Ali's version and think it should be adopted, in part because it helps distinguish schemas from a related term, cognitive maps, which we've discussed aplenty on Brain Inspired and which can sometimes be used interchangeably with schemas.
    So we discuss how to think about schemas versus cognitive maps, versus concepts, versus semantic information, and so on. Last episode Ciara Greene discussed schemas and how they underlie our memories, learning, and predictions, and how they can lead to inaccurate memories and predictions. Today Ali explains how circuits in the brain might adaptively underlie this process as we develop, and how to go about measuring it in the first place.
    Preston Lab. Twitter: @preston_lab.
    Related papers:
    Concept formation as a computational cognitive process.
    Schema, Inference, and Memory.
    Developmental differences in memory reactivation relate to encoding and inference in the human brain.
    Read the transcript.
    0:00 - Intro
    6:51 - Schemas
    20:37 - Schemas and the developing brain
    35:03 - Information theory, dimensionality, and detail
    41:17 - Geometry of schemas
    47:26 - Schemas and creativity
    50:29 - Brain connection pruning with development
    1:02:46 - Information in brains
    1:09:20 - Schemas and development in AI
    --------  
    1:29:47
  • Quick Announcement: Complexity Group
    Here's the link to learn more and sign up: Complexity Group Email List.
    --------  
    6:47
  • BI 206 Ciara Greene: Memories Are Useful, Not Accurate
    Ciara Greene is an Associate Professor at the University College Dublin School of Psychology. In this episode we discuss Ciara's book Memory Lane: The Perfectly Imperfect Ways We Remember, co-authored with her colleague Gillian Murphy. The book is all about how human episodic memory works and why it works the way it does. Contrary to our common assumption, a "good memory" isn't necessarily a highly accurate one - we don't store memories like files in a filing cabinet. Instead, our memories evolved to help us function in the world. That means our memories are flexible and constantly changing, and that forgetting can be beneficial, for example. Regarding how our memories work, we discuss how memories are reconstructed each time we access them, and the role of schemas in organizing our episodic memories within the context of our previous experiences. Because our memories evolved for function and not accuracy, there's a wide range of flexibility in how we process and store memories. We're all susceptible to misinformation, all our memories are affected by our emotional states, and so on. Ciara's research explores many of the ways our memories are shaped by these various conditions, and how we should better understand our own and others' memories.
    Attention and Memory Lab. Twitter: @ciaragreene01. Book: Memory Lane: The Perfectly Imperfect Ways We Remember. Read the transcript.
    0:00 - Intro
    5:35 - The function of memory
    6:41 - Reconstructive nature of memory
    13:50 - Memory schemas, highly superior autobiographical memory
    20:49 - Misremembering and flashbulb memories
    27:52 - Forgetting and schemas
    36:06 - What is a "good" memory?
    39:35 - Memories and intention
    43:47 - Memory and context
    49:55 - Implanting false memories
    1:04:10 - Memory suggestion during interrogations
    1:06:30 - Memory, imagination, and creativity
    1:13:45 - Artificial intelligence and memory
    1:21:21 - Driven by questions
    --------  
    1:29:10


About Brain Inspired

Neuroscience and artificial intelligence work better together. Brain Inspired is a celebration and exploration of the ideas driving our progress to understand intelligence. I interview experts about their work at the interface of neuroscience, artificial intelligence, cognitive science, philosophy, psychology, and more: the symbiosis of these overlapping fields, how they inform each other, where they differ, what the past brought us, and what the future brings. Topics include computational neuroscience, supervised machine learning, unsupervised learning, reinforcement learning, deep learning, convolutional and recurrent neural networks, decision-making science, AI agents, backpropagation, credit assignment, neuroengineering, neuromorphics, emergence, philosophy of mind, consciousness, general AI, spiking neural networks, data science, and a lot more. The podcast is not produced for a general audience. Instead, it aims to educate, challenge, inspire, and hopefully entertain those interested in learning more about neuroscience and AI.
Podcast website

Generated: 4/15/2025 - 1:15:43 PM