
The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran
Latest episode

749 episodes

  • The Daily AI Show

    Is OpenAI Worth Nearly $1 Trillion?

    01/04/2026 | 1h 5min
    Jyunmi Hatcher and Andy Halliday open with a run through major AI news, starting with the Claude Code leak and a LiteLLM supply-chain breach tied to Mercor. The conversation then moves through quantum computing risks to current encryption, quantum batteries, a proposed privacy lawsuit against Perplexity, Anthropic’s expanded Claude Code computer-use features, OpenAI’s massive new funding round, Bluesky’s AI feed builder, and Stanford research on AI sycophancy. Karl Yeh joins later for a discussion about Chinese local-government support for OpenClaw startups. The episode closes with an AI-and-science segment on self-driving labs and AI-powered robot scientists accelerating materials and drug discovery.

    Key Points Discussed

    00:01:07 Claude Code Leak and Anthropic Methods
    00:03:17 LiteLLM Supply-Chain Breach and AI Security
    00:07:10 Quantum Computing Threat to Encryption
    00:10:37 Quantum Batteries and Fast-Charging Possibilities
    00:20:58 Perplexity Tracking Lawsuit
    00:23:41 Claude Code Computer Use Expansion
    00:27:09 OpenAI’s $122 Billion Funding Round
    00:30:21 Bluesky’s Attie AI Feed Builder
    00:36:05 Stanford Study on AI Sycophancy
    00:42:39 China Incentives for OpenClaw Startups
    00:49:40 AI-Powered Robot Scientists and Self-Driving Labs

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Andy Halliday, Beth Lyons, Karl Yeh
  • The Daily AI Show

    Claude Code Leak Sparks Debate

    31/03/2026 | 57min
    This episode centered on the reported Claude Code source leak and what it may reveal about Anthropic’s product advantage. The panel spent most of the show debating whether Claude’s real edge is in the terminal experience, how much that matters outside developer circles, and why AI builders should be more careful about hidden complexity and fragile internal tools. The second half shifted into multi-model workflows, including Codex plugins inside Claude Code and Microsoft’s new model-council approach. The show closed with a broader discussion about AI adoption narratives, especially around women, older workers, and who may actually be best positioned to benefit from the next wave.

    Key Points Discussed

    00:01:09 Claude Code source leak, compromised dependencies, and unreleased features
    00:07:15 Why the terminal experience may be Claude Code’s real “secret sauce”
    00:11:28 Why the leak matters beyond terminal users, because Claude Code powers other interfaces too
    00:13:42 Anne’s case for terminal use as a better way to build AI skill and control
    00:16:16 Brian’s warning about teams creating too many fragile internal AI tools without governance
    00:19:12 Using terminal through natural language instead of traditional command syntax
    00:22:58 Codex plugin inside Claude Code and the rise of multi-tool AI workflows
    00:24:15 Microsoft Copilot’s multi-model researcher using OpenAI plus Claude critique
    00:52:09 Comparing the “women are falling behind in AI” narrative with the “older workers are in their AI prime” narrative
    00:53:19 Why Anne argued women over fifty may be especially well positioned for AI adoption and influence

    The Daily AI Show Co-Hosts: Brian Maucere, Beth Lyons, Andy Halliday, Anne Murphy
  • The Daily AI Show

    A Better Definition of AGI (Plus What Comes Next)

    30/03/2026 | 1h 3min
    This episode focused on where AI is heading as Q1 closed out, especially the shift from single frontier models toward specialized vertical systems and agent networks. The panel discussed Anthropic’s leaked Capybara model, Google’s TurboQuant breakthrough, Arc AGI-III, and why domain-specific AI may outperform general models in real work. The second half moved into practical demos and workflow trends, including Perplexity Computer, set-it-and-forget-it tasking, customer support AI, and lightweight tools for 3D creation. The overall theme was that AI progress now looks less like one model winning everything and more like coordinated systems getting better at specific jobs.

    Key Points Discussed

    00:00:47 Brian and Andy open with Perplexity Computer, internal AI training, and email workflow automation
    00:05:57 Tax optimization and liquidity planning with ChatGPT and Claude auditing
    00:08:02 The AI alignment film discussion and Dario Amodei’s new alignment essay
    00:09:22 Anthropic’s leaked Capybara model and why it may sit above Opus
    00:12:05 Google’s TurboQuant and the trend toward software-driven inference gains
    00:16:08 Cursor, vertical AI, and AEvolve for self-improving agent workflows
    00:19:24 Arc AGI-III and the case for AGI emerging from orchestrated agent systems
    00:26:32 FIN customer support as a leading example of domain-specific vertical AI
    00:31:50 Anthropic’s legal fight, growth surge, and Claude throttling discussion
    00:37:23 NotebookLM multitasking and the rise of set-it-and-forget-it AI tasks
    00:39:15 Meshi, MakerWorld, and easier AI-assisted 3D printing workflows
    00:41:35 MLB Scout and Gemini-based baseball analysis tools
    00:44:54 Perplexity Computer demo for travel and itinerary planning
    00:58:09 ChatGPT losing work after a Notion reconnect and the risks of fragile AI workflows

    The Daily AI Show Co-Hosts: Brian Maucere, Andy Halliday
  • The Daily AI Show

    The Acoustic Trust Conundrum

    28/03/2026 | 27min
    Voice is losing its status as proof. A voicemail, a phone call, a video clip, a recorded meeting, any of it can now be fabricated well enough to fool ordinary people and, in some cases, trained professionals. That changes more than fraud risk. It changes the default social contract around speech. For a long time, hearing someone carried a baseline level of trust. Now every piece of audio starts under suspicion.
    That pressure creates a clear response. Build trust into the media itself. Signed audio. Provenance standards. Device-based identity. Verification layers that show where a recording came from and whether it was altered. Those tools solve a real problem. They give people a way to separate authentic speech from synthetic impersonation. But once those systems spread, they also start to change what counts as legitimate speech online. Verified audio gains status. Unverified audio loses it. Anonymous speech becomes harder to trust. Informal participation starts to look second-class.
    The Conundrum:
    As synthetic audio gets harder to distinguish from human speech, what should carry more weight, open participation or authenticated trust? One path puts more value on verified origin. Speech becomes more credible when identity and provenance travel with it. That would reduce fraud, protect reputation, and make high-stakes communication more reliable. The other path keeps speech more open and less tied to formal verification. That protects anonymity, lowers barriers to participation, and avoids turning everyday communication into an identity check. The stronger the trust layer becomes, the more power shifts toward the systems that issue and recognize trust. The weaker the trust layer becomes, the more everyday speech lives under doubt.
  • The Daily AI Show

    Google TurboQuant Changes Everything

    27/03/2026 | 44min
    This episode focused on how AI systems are getting more efficient, more agentic, and more practical. The first half centered on Google’s TurboQuant breakthrough, then shifted into portable AI skills, Codex, Claude, Gemini, and team workflow design. The second half moved through Meta’s new TRIBE V2 brain model, Google’s voice-first Gemini updates, Amazon’s robotics push, and the growing case for smaller specialized models instead of always using frontier systems.

    Key Points Discussed

    00:01:27 Google’s TurboQuant and why cheaper, faster inference could reshape AI infrastructure
    00:12:10 Building portable skills across Claude, Codex, and Gemini for real team workflows
    00:22:45 An unverified report about AI companies scanning and discarding books for training
    00:25:25 Meta’s TRIBE V2 brain model and virtual neuroscience from large-scale scan data
    00:33:19 Gemini 3.1 Flash live audio and Andy’s long-running vision for voice-first AI systems
    00:34:29 Google AI Studio, Firebase deployment, and building full application workflows inside Google’s stack
    00:40:03 Amazon’s robotics acquisition and what it could mean for warehouse humanoids
    00:41:43 Why smaller specialized models may beat frontier models for tasks like OCR and handwriting recognition

    The Daily AI Show Co-Hosts: Brian Maucere, Andy Halliday


About The Daily AI Show

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional. About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.