
The Daily AI Show

The Daily AI Show Crew - Brian, Beth, Jyunmi, Andy, Karl, and Eran
Latest episode

729 episodes

  • The Daily AI Show

    AI Built a Brain on a Chip?

    09/03/2026 | 1h 2min
    Andy, Beth, and Brian open with a wide-ranging discussion on neuromorphic computing, including fruit fly connectomes, biological neurons on chips, and what those advances could mean for future AI systems. The conversation then moves to Andrej Karpathy’s Auto Research project, AI-assisted app building, and Microsoft’s decision to bring Anthropic’s co-work capabilities into Copilot. Later, the hosts discuss labor disruption, Google Search’s evolving position in an AI-first world, and a Harvard Business Review piece on “AI brain fry.” The episode closes on the tension between AI productivity gains and the cognitive fatigue that can come from constantly supervising parallel AI workstreams.

    Key Points Discussed

    00:00:18 Show open and Monday setup

    00:01:27 Neuromorphic computing and neurons on chips

    00:14:02 Andrej Karpathy’s Auto Research agents

    00:22:02 Microsoft adds Anthropic co-work to Copilot

    00:33:16 Tech layoffs and entry-level hiring pressure

    00:34:35 Google Search, Liz Reid, and agent-driven web use

    00:44:39 Harvard Business Review on AI brain fry

    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere
  • The Daily AI Show

    The Catharsis Loop Conundrum

    07/03/2026 | 23min
    Public agencies and large service centers sit on a constant backlog of frustration. Benefits, healthcare claims, school bureaucracy, billing disputes, outages, policy confusion. Demand keeps rising while staffing and training lag. AI changes the interface first. Organizations now deploy “empathetic buffer layers,” agents tuned to listen, reflect emotion, summarize the issue, and guide next steps. They respond instantly, stay calm, and carry a conversation longer than any overworked human rep. For many people, that matters. A parent trying to fix a school placement issue at 9:30 pm or a patient staring at an insurance denial needs clarity and emotional steadiness more than another hold queue.

    The problem is that this new interface does more than reduce wait times. It absorbs heat. It turns anger into a managed conversation, then routes the case into the same slow back-end. Over time, leaders can point to “improved customer satisfaction” while the underlying system stays broken. The pain still exists, but the feedback stops looking like pain. Complaints become neatly structured tickets, and public outrage becomes private venting. The system gets calmer without getting better.

    The conundrum:

    When institutions deploy AI that excels at emotional de-escalation, are they reducing harm, or delaying reform?

    One argument says the buffer is a legitimate upgrade. People should not have to suffer psychological damage to prove the system failed them. A calmer interface lowers conflict, reduces threats and burnout for frontline staff, improves compliance with next steps, and helps more cases reach resolution. In this view, you do not withhold empathy as a governance tool. You treat it as basic service quality.

    The other argument says the buffer changes what leaders perceive. If the AI converts raw frustration into polite, contained conversations, then institutions lose the pressure signals that drive investment and redesign. The organization learns to optimize for “felt experience” while ignoring root causes, because the visible cost of failure drops. In this view, the buffer becomes a release valve that protects the institution more than the citizen.

    So what should society demand from these systems: an interface designed to reduce human stress even if it softens the force for change, or an interface designed to preserve truthful pressure even if it leaves people exposed to the full emotional cost of institutional failure?
  • The Daily AI Show

    GPT 5.4 vs Gemini: Benchmarks, Codex, Excel

    06/03/2026 | 56min
    Beth Lyons and Andy Halliday open the show with a focused breakdown of GPT-5.4, framing it less as a universal leap and more as a strong advance in white-collar knowledge work and real-world task performance. Much of the conversation compares GPT-5.4 with Gemini 3.1 Pro Preview, Claude models, Codex, and other systems across benchmarks like GPT-Val, coding, long-context reasoning, hallucination resistance, and visual reasoning, with repeated emphasis that users still need to pick models based on the actual job to be done. Beth also shares a practical complaint about Gemini hallucinating around silent screen recordings and uses that to argue for a more dependable “colleague layer” in agentic systems. Later, Karl Yeh joins to talk through hands-on experience with GPT-5.4 in Codex, comparisons with Claude in Excel and Gemini in Sheets, and where the new release feels genuinely useful in day-to-day work.

    Key Points Discussed

    00:00:18 Welcome and setup for a GPT-5.4-focused episode
    00:02:47 GPT-Val and white-collar knowledge work framing
    00:08:51 Benchmark comparison across GPT-5.4, Claude, Gemini, and others
    00:16:26 Gemini strengths in video and visual reasoning
    00:18:05 Beth’s Gemini transcription / hallucination workflow example
    00:23:54 “Then we’ll move to more news” and handoff to Karl Yeh
    00:24:24 Karl Yeh on real-world use cases over benchmarks
    00:55:30 Closing recommendations: try GPT-5.4, use Codex, newsletter and community plug

    The Daily AI Show Co-Hosts: Beth Lyons, Andy Halliday, Karl Yeh
  • The Daily AI Show

    AI Bugs, Swarms, and “God’s Eye”

    06/03/2026 | 57min
    The hosts briefly touch the latest twist in the Anthropic / Pentagon / OpenAI narrative, including discussion around a reported internal memo and how the story keeps evolving. They then move into creator/tooling news: Seed Dance (AI video) pricing and what low-cost generation could mean for production workflows. The conversation shifts to Alibaba’s Qwen small-model releases (agentic capabilities on-device) and the surprise departures of key Qwen leaders afterward. Later, they discuss Perplexity Computer updates (including “skills”), an “Anything API” product idea, and a “God’s eye view” visualization that leads into a weird-but-serious segment on swarms and bio-cyborg insects before closing out.

    Key Points Discussed

    00:00:18 Welcome + Andy’s back (Karl may pop in)
    00:01:39 Anthropic renews Pentagon AI deal + memo talk (quick touch, then move on)
    00:07:19 AI video: Seed Dance / ByteDance pricing + implications for production
    00:17:21 Alibaba Qwen small models + leadership departures discussion begins
    00:23:49 Perplexity Computer momentum + “skills” and workflow-style reuse
    00:35:31 Gemini “gems” workflow + tooling habits (recurring instructions)
    00:36:44 Anything API: turning browser actions into callable API endpoints
    00:39:45 “God’s eye view” project + operation replay discussion
    00:51:30 Swarm / “AI bugs” + cockroach / biotactics thread
    00:56:55 Wrap-up + links will be dropped in the community Slack

    The Daily AI Show Co-Hosts: Andy Halliday, Beth Lyons, Brian Maucere, Karl Yeh
  • The Daily AI Show

    Midjourney Woes and DeepSeek V4 Buzz

    04/03/2026 | 1h 37min
    Episode 673 opens with updates on the ongoing Anthropic / OpenAI / DoD situation, including discussion of autonomous systems, decision speed, and military targeting concepts like “kill chain” vs “kill web.” The hosts then pivot to open-source model anticipation around DeepSeek V4, plus practical creator-tool chatter on Midjourney’s status and ecosystem shifts. They close the news with a quick note on GPT-5.3 Instant behavior changes, then transition to an “AI in science” segment on AI-powered digital twins for real-time tsunami early warning.

    Key Points Discussed

    00:00:17 Welcome + what’s ahead (Anthropic/OpenAI/DoD + tsunami modeling)
    00:03:46 “Okay, the Anthropic thing…” framing the ongoing controversy
    00:16:00 Autonomous systems + “kill chain” vs faster “kill web” discussion
    00:21:34 “Before we jump in… the next story…” DeepSeek V4 timing + hype
    00:28:12 Million-token context windows + what “memory” should mean
    00:32:00 Brian’s “curiosity news” on Midjourney: where are they now?
    00:37:00 “That sounds like a job for OpenClaw” (data portability / skills)
    00:39:56 “Can I share one more news story…” GPT-5.3 Instant example
    00:48:04 “As we wrap up the news…” handoff to next segment
    00:59:02 “Now it’s time for AI in science” tsunami early warning digital twins
    01:22:18 Tangent: new Mac Studio M5 Ultra + self-hosting ambitions
    01:27:34 “We gotta wrap up this conversation…” jobs/measurement + future follow-up
    01:36:53 Closing thanks + community plug + sign-off line

    The Daily AI Show Co-Hosts: Jyunmi Hatcher, Brian Maucere, Beth Lyons

More Technology podcasts

About The Daily AI Show

The Daily AI Show is a panel discussion hosted LIVE each weekday at 10am Eastern. We cover all the AI topics and use cases that are important to today's busy professional. No fluff. Just 45+ minutes to cover the AI news, stories, and knowledge you need to know as a business professional. About the crew: We are a group of professionals who work in various industries and have either deployed AI in our own environments or are actively coaching, consulting, and teaching AI best practices. Your hosts are: Brian Maucere, Beth Lyons, Andy Halliday, Eran Malloch, Jyunmi Hatcher, and Karl Yeh.