
Training Data

Sequoia Capital

Available Episodes

5 of 58
  • OpenAI’s IMO Team on Why Models Are Finally Solving Elite-Level Math
    In just two months, a scrappy three-person team at OpenAI sprinted to fulfill what the entire AI field has been chasing for years—gold-level performance on the International Mathematical Olympiad problems. Alex Wei, Sheryl Hsu and Noam Brown discuss their unique approach using general-purpose reinforcement learning techniques on hard-to-verify tasks rather than formal verification tools. The model showed surprising self-awareness by admitting it couldn’t solve problem six, and revealed the humbling gap between solving competition problems and genuine mathematical research breakthroughs. Hosted by Sonya Huang, Sequoia Capital
    30:10
  • OpenAI Just Released ChatGPT Agent, Its Most Powerful Agent Yet
    Isa Fulford, Casey Chu, and Edward Sun from OpenAI's ChatGPT agent team reveal how they combined Deep Research and Operator into a single, powerful AI agent that can perform complex, multi-step tasks lasting up to an hour. By giving the model access to a virtual computer with text browsing, visual browsing, terminal access, and API integrations—all with shared state—they've created what may be the first truly embodied AI assistant. The team discusses their reinforcement learning approach, safety mitigations for real-world actions, and how small teams can build transformative AI products through close research-applied collaboration. Hosted by Sonya Huang and Lauren Reeder, Sequoia Capital
    37:36
  • DeepMind's Pushmeet Kohli on AI's Scientific Revolution
    Pushmeet Kohli leads AI for Science at DeepMind, where his team has created AlphaEvolve, an AI system that discovers entirely new algorithms and proves mathematical results that have eluded researchers for decades. From improving 50-year-old matrix multiplication algorithms to generating interpretable code for complex problems like data center scheduling, AlphaEvolve represents a new paradigm where LLMs coupled with evolutionary search can outperform human experts (a toy sketch of that search loop appears after the episode list). Pushmeet explains the technical architecture behind these breakthroughs and shares insights from collaborations with mathematicians like Terence Tao, while discussing how AI is accelerating scientific discovery across domains from chip design to materials science. Hosted by Sonya Huang and Pat Grady, Sequoia Capital
    41:13
  • Mapping the Mind of a Neural Net: Goodfire’s Eric Ho on the Future of Interpretability
    Eric Ho is building Goodfire to solve one of AI’s most critical challenges: understanding what’s actually happening inside neural networks. His team is developing techniques to understand, audit and edit neural networks at the feature level. Eric discusses breakthrough results in resolving superposition through sparse autoencoders (a toy sketch of this technique appears after the episode list), successful model editing demonstrations, and real-world applications in genomics with Arc Institute's DNA foundation models. He argues that interpretability will be critical as AI systems become more powerful and take on mission-critical roles in society. Hosted by Sonya Huang and Roelof Botha, Sequoia Capital
    Mentioned in this episode:
    • Mech interp: Mechanistic interpretability, list of important papers here
    • Phineas Gage: 19th century railway engineer who lost most of his brain’s left frontal lobe in an accident. Became a famous case study in neuroscience.
    • Human Genome Project: Effort from 1990-2003 to generate the first sequence of the human genome, which accelerated the study of human biology
    • Emergent Misalignment: Narrow finetuning can produce broadly misaligned LLMs
    • Zoom In: An Introduction to Circuits: First important mechanistic interpretability paper from OpenAI in 2020
    • Superposition: Concept from physics applied to interpretability that allows neural networks to simulate larger networks (e.g. more concepts than neurons)
    • Apollo Research: AI safety company that designs AI model evaluations and conducts interpretability research
    • Towards Monosemanticity: Decomposing Language Models With Dictionary Learning. 2023 Anthropic paper that uses a sparse autoencoder to extract interpretable features; followed by Scaling Monosemanticity
    • Under the Hood of a Reasoning Model: 2025 Goodfire paper that interprets DeepSeek’s reasoning model R1
    • Auto-interpretability: The ability to use LLMs to automatically write explanations for the behavior of neurons in LLMs
    • Interpreting Evo 2: Arc Institute’s Next-Generation Genomic Foundation Model (see episode with Arc co-founder Patrick Hsu)
    • Paint with Ember: Canvas interface from Goodfire that lets you steer an LLM’s visual output in real time (paper here)
    • Model diffing: Interpreting how a model differs from checkpoint to checkpoint during finetuning
    • Feature steering: The ability to change the style of LLM output by up- or down-weighting features (e.g. talking like a pirate vs. factual information about the Andromeda Galaxy)
    • Weight-based interpretability: Method for directly decomposing neural network parameters into mechanistic components, instead of using features
    • The Urgency of Interpretability: Essay by Anthropic founder Dario Amodei
    • On the Biology of a Large Language Model: Goodfire collaboration with Anthropic
    47:07
  • ElevenLabs’ Mati Staniszewski: Why Voice Will Be the Fundamental Interface for Tech
    Mati Staniszewski, co-founder and CEO of ElevenLabs, explains how staying laser-focused on audio innovation has allowed his company to thrive despite the push into multimodality from foundation models. From a high school friendship in Poland to building one of the fastest-growing AI companies, Mati shares how ElevenLabs transformed text-to-speech with contextual understanding and emotional delivery. He discusses the company's viral moments (from Harry Potter by Balenciaga to powering Darth Vader in Fortnite), and explains how ElevenLabs is creating the infrastructure for voice agents and real-time translation that could eliminate language barriers worldwide. Hosted by Pat Grady, Sequoia Capital
    Mentioned in this episode:
    • Attention Is All You Need: The original Transformers paper
    • Tortoise-tts: Open source text-to-speech model that was a starting point for ElevenLabs (which now maintains a v2)
    • Harry Potter by Balenciaga: ElevenLabs’ first big viral moment, from 2023
    • The first AI that can laugh: 2022 blog post backing up ElevenLabs’ claim of laughter (it got better in v3)
    • Darth Vader’s voice in Fortnite: ElevenLabs used actual voice clips provided by James Earl Jones before he died
    • Lex Fridman interviews Prime Minister Modi: ElevenLabs enabled Fridman to speak in Hindi and Modi to speak in English
    • Time Person of the Year 2024: ElevenLabs-powered experiment with “conversational journalism”
    • Iconic Voices: Richard Feynman, Deepak Chopra, Maya Angelou and more, available in the ElevenLabs reader app
    • SIP trunking: A method of delivering voice, video and other unified communications over the internet using the Session Initiation Protocol (SIP)
    • Genesys: Leading enterprise CX platform for agentic AI
    • Hitchhiker’s Guide to the Galaxy: Comedy/science-fiction series by Douglas Adams that contains the concept of the Babel Fish instantaneous translator, cited by Mati
    • FYI: Communication and productivity app for creatives that Mati uses, founded by will.i.am
    • Lovable: Prototyping app that Mati loves
    59:53
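
The DeepMind episode above describes coupling LLM proposals with evolutionary search. The sketch below is only a toy illustration of that loop under invented assumptions: it evolves a population of tiny arithmetic "programs" toward a target function, with a random-mutation stub standing in for the LLM's rewriting step. It is not AlphaEvolve; the grammar, fitness function and hyperparameters are made up for the example.

    import random

    TERMINALS = ["x", "1", "2", "3"]
    OPS = ["+", "-", "*"]

    def target(x):
        # Function the evolved program should approximate (chosen arbitrarily).
        return x * x + 3 * x + 2

    def random_expr(depth=2):
        # Grow a random arithmetic expression over x from a tiny grammar.
        if depth == 0 or random.random() < 0.3:
            return random.choice(TERMINALS)
        return f"({random_expr(depth - 1)} {random.choice(OPS)} {random_expr(depth - 1)})"

    def propose(expr):
        # Stub for the proposal step; a real system would ask an LLM to rewrite expr.
        if random.random() < 0.5:
            return random_expr(depth=3)
        return f"({expr} {random.choice(OPS)} {random_expr(depth=1)})"

    def fitness(expr):
        # Negative squared error of the candidate program against the target.
        try:
            return -sum((eval(expr, {"x": x}) - target(x)) ** 2 for x in range(-5, 6))
        except Exception:
            return float("-inf")  # invalid candidates always lose

    def evolve(generations=200, pop_size=50, keep=10):
        population = [random_expr() for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[:keep]  # elitism: keep the best candidates
            children = [propose(random.choice(parents)) for _ in range(pop_size - keep)]
            population = parents + children
        return max(population, key=fitness)

    if __name__ == "__main__":
        best = evolve()
        print("best program:", best, "fitness:", fitness(best))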
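
The Goodfire episode above mentions two ideas that are easy to show in miniature: a sparse autoencoder that tries to recover more features from an activation vector than it has dimensions (superposition), and feature steering, i.e. scaling a learned feature before decoding. The NumPy sketch below is a toy under those assumptions, not Goodfire's or Anthropic's implementation; the synthetic activations, sizes and hyperparameters are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    d, m_true, m_sae = 6, 12, 24  # activation dim, true features, SAE features

    # Synthetic superposition: 12 unit-norm feature directions crammed into 6 dims.
    directions = rng.normal(size=(m_true, d))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)

    def sample_batch(n):
        # Each activation is a sparse positive combination of 1-2 true features.
        acts = np.zeros((n, d))
        for i in range(n):
            for j in rng.choice(m_true, size=rng.integers(1, 3), replace=False):
                acts[i] += rng.uniform(0.5, 2.0) * directions[j]
        return acts

    # SAE parameters: overcomplete ReLU encoder, linear decoder, L1 sparsity penalty.
    W_e = rng.normal(scale=0.1, size=(d, m_sae))
    b_e = np.zeros(m_sae)
    W_d = rng.normal(scale=0.1, size=(m_sae, d))
    b_d = np.zeros(d)
    lr, lam = 0.05, 0.02

    for step in range(3000):
        a = sample_batch(64)
        f = np.maximum(a @ W_e + b_e, 0.0)  # encoder with ReLU
        a_hat = f @ W_d + b_d               # decoder
        err = a_hat - a
        # Manual gradients of mean reconstruction error plus L1 penalty on features.
        g_ahat = 2 * err / len(a)
        g_f = g_ahat @ W_d.T + lam * (f > 0) / len(a)
        g_pre = g_f * (f > 0)
        W_d -= lr * (f.T @ g_ahat)
        b_d -= lr * g_ahat.sum(0)
        W_e -= lr * (a.T @ g_pre)
        b_e -= lr * g_pre.sum(0)

    # Toy "feature steering": up-weight the most active learned feature and decode.
    a = sample_batch(1)
    f = np.maximum(a @ W_e + b_e, 0.0)
    top = int(f[0].argmax())
    steered = f.copy()
    steered[0, top] *= 3.0  # amplify one feature before decoding
    print("reconstruction error:", float(((f @ W_d + b_d - a) ** 2).mean()))
    print("change from steering feature", top, ":", (steered - f) @ W_d)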

More Business podcasts

About Training Data

Join us as we train our neural nets on the theme of the century: AI. Sonya Huang, Pat Grady and more Sequoia Capital partners host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies—and their implications for technology, business and society. The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.
Podcast website

Listen to Training Data, Economia Falada and many other podcasts from around the world with the radio.net app

Get the free radio.net app

  • Save your favorite stations and podcasts
  • Stream via Wi-Fi or Bluetooth
  • Compatible with CarPlay & Android Auto
  • And many more features

Training Data: Podcasts in the group
