Eye On A.I.

Craig S. Smith
Latest episode

320 episodes


    #320 Carter Huffman: Exploring The Architecture Behind Modulate's Next-Gen Voice AI

    11/2/2026 | 1h 8min
    This episode is sponsored by tastytrade. 
    Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature.
     
    Learn more at https://tastytrade.com/



    Voice AI is moving far beyond transcription.
     
    In this episode, Carter Huffman, CTO and co-founder of Modulate, explains how real-time voice intelligence is unlocking something much bigger than speech-to-text. His team built AI that understands emotion, intent, deception, harassment, and fraud directly from live conversations. Not after the fact. Instantly.
     
    Carter shares how their technology powers ToxMod to moderate toxic behavior in online games at massive scale, analyzes millions of audio streams with ultra-low latency, and beats foundation models using an ensemble architecture that is faster, cheaper, and more accurate. We also explore voice deepfake detection, scam prevention, sentiment analysis for finance, and why voice might become the most important signal layer in AI.
     
    If you're building voice agents, working on AI safety, or curious where conversational AI is heading next, this conversation breaks down the technical and practical future of voice understanding.



    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) Real-Time Voice AI: Detecting Emotion, Intent & Lies
    (03:07) From MIT & NASA to Building Modulate
    (04:45) Why Voice AI Is More Than Just Transcription
    (06:14) The Toxic Gaming Problem That Sparked ToxMod
    (12:37) Inside the Tech: How "Ensemble Models" Beat Foundation Models
    (21:09) Achieving Ultra-Low Latency & Real-Time Performance
    (26:16) From Voice Skins to Fighting Harassment at Scale
    (37:31) Beyond Gaming: Fraud, Deepfakes & Voice Security
    (46:14) Privacy, Ethics & Voice Fingerprinting Risks
    (52:10) Lie Detection, Sentiment & Finance Use Cases
    (54:57) Opening the API: The Future of Voice Intelligence

    #319 Subho Halder: Why Traditional App Security Fails in the Age of AI

    01/2/2026 | 57min
     

    AI is changing how software is built, but it is also quietly breaking how security works.
     
In this episode of Eye on AI, host Craig Smith sits down with Subho Halder, co-founder and CEO of Appknox, to unpack a growing and largely invisible risk: AI-powered mobile apps that look safe but are not.
     
    Subho explains how the explosion of ChatGPT-style app wrappers, agentic AI, and rapid app creation has transformed software from static code into living systems, and why traditional security models no longer hold up. From fake AI apps harvesting personal data to AI agents lowering the barrier for attackers, this conversation explores the real-world consequences of AI at scale.
     
    You will also hear why trust has become a core security metric, how app stores struggle to detect malicious behavior, and why developer burnout is rising as AI-generated code shifts risk downstream instead of removing it.
     
    This episode is essential listening for founders, developers, security leaders, and anyone building or relying on AI-powered applications.
     
    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) Why Mobile Apps Became a Massive Trust and Security Risk
    (02:45) Subho's Journey and the Birth of Appknox
    (06:17) Fake AI Apps, Malicious Wrappers, and Silent Data Theft
    (11:03) How Fake Apps Slip Past App Store Reviews
    (15:26) The Data Harvesting Business Model Behind Fake Apps
    (17:11) AI for Security vs Security for AI
    (22:16) Why Trust Is Becoming a Measurable AI Performance Metric
    (26:20) User Intent, Data Control, and Minimum Data Sharing
    (31:10) Trust, Governments, and Why Where AI Lives Matters
    (35:40) What Appknox Found in Retail App Security Audits
    (39:16) How Appknox Protects Apps at Scale
    (42:05) The Future of Security

    #318 Olek Paraska: How AI Is Fixing the Biggest Bottleneck in Construction

    29/1/2026 | 53min
    Construction is one of the least digitized industries in the world, and not because it resists technology. It resists bad technology.
    In this episode of Eye on AI, Craig Smith sits down with Olek Paraska, CTO of Togal AI, to break down why construction productivity has barely improved in 50 years and why pre-construction is the real bottleneck holding the industry back.
    Olek explains how most estimating and takeoff work is still done manually, why automating this phase can unlock massive efficiency gains, and how AI works best in construction when it acts as a perception and reasoning layer rather than a replacement for human judgment.
    The conversation explores computer vision, agentic AI, human-in-the-loop systems, and why respecting real-world constraints is essential for AI to deliver real ROI. It also looks ahead to a future where floor plans, materials, costs, and constructability can be reasoned about together, long before construction begins.
    This episode is a deep dive into how AI can finally move construction forward by solving the right problems, in the right order.

    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Why Construction Is Desperate for Better AI
    (01:06) Olek's Path From Software to Construction
    (02:17) Why Construction Productivity Has Stalled for Decades
    (04:33) The Pre-Construction Bottleneck No One Talks About
    (06:17) How Takeoffs Are Still Done Manually
    (09:15) Why Construction Rejects Bad Technology
    (11:18) How Togal Found the Right Problem to Solve
    (12:14) From Computer Vision to Reasoning AI
    (17:44) What Agentic AI Looks Like in Pre-Construction
    (20:59) Turning Floor Plans Into Materials and Costs
    (28:18) The Real ROI of AI for Contractors
    (47:11) The Long-Term Vision for AI in Construction

    #317 Steven Brown: Why Modern Medicine Needs AI-Assisted Decision Making

    25/1/2026 | 1h
    In this episode of the Eye on AI Podcast, Craig Smith sits down with Steve Brown, founder of CureWise, to explore how agentic AI is reshaping healthcare from the patient's perspective.
    Steve shares the deeply personal story behind CureWise, born out of his own experience with a rare cancer diagnosis that was repeatedly missed by traditional medical pathways. The conversation dives into why modern healthcare struggles with complex, edge-case conditions, how fragmented medical data and time-constrained systems fail patients, and where AI can meaningfully help without replacing clinicians.
    The discussion goes deep into multi-agent AI systems, reliability through consensus, large context windows, and how AI can surface better questions rather than premature answers. Steve explains why patient education is the real unlock for better outcomes, how precision medicine depends on individualized data and genetics, and why empowering patients leads to stronger collaboration with doctors.
    This episode offers a grounded, practical look at AI's role in healthcare, not as a diagnostic shortcut, but as a tool for clarity, context, and better decision-making in some of the most critical moments of care.
     
    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) Using Multi-Agent AI to Analyze Medical Records
    (04:35) Steve Brown's Tech Background and Return to Healthcare
    (08:25) How a Rare Cancer Diagnosis Was Initially Missed
    (13:55) Why Modern Medicine Struggles With Complex Cases
    (18:29) Multi-Agent Consensus and AI Reliability in Healthcare
    (24:12) Large Context Windows, RAG, and Medical Data Organization
    (28:24) Why CureWise Focuses on Patient Education, Not Diagnosis
    (33:10) Precision Medicine, Genetics, and Personalized Treatment
    (47:45) Why CureWise Launches Direct-to-Patient First
    (53:19) The Future of AI-Driven Precision Medicine

    #316 Robbie Goldfarb: Why the Future of AI Depends on Better Judgment

    23/1/2026 | 1h 3min
    AI is getting smarter, but now it needs better judgment.
    In this episode of the Eye on AI Podcast, we speak with Robbie Goldfarb, former Meta product leader and co-founder of Forum AI, about why treating AI as a truth engine is one of the most dangerous assumptions in modern artificial intelligence.
    Robbie brings first-hand experience from Meta's trust and safety and AI teams, where he worked on misinformation, elections, youth safety, and AI governance. He explains why large language models shouldn't be treated as arbiters of truth, why subjective domains like politics, health, and mental health pose serious risks, and why more data does not solve the alignment problem.
    The conversation breaks down how AI systems are evaluated today, how engagement incentives create sycophantic and biased models, and why trust is becoming the biggest barrier to real AI adoption. Robbie also shares how Forum AI is building expert-driven AI evaluation systems that scale human judgment instead of crowd labels, and why transparency about who trains AI matters more than ever.
    This episode explores AI safety, AI trust, model evaluation, expert judgment, mental health risks, misinformation, and the future of responsible AI deployment.
    If you are building, deploying, regulating, or relying on AI systems, this conversation will fundamentally change how you think about intelligence, truth, and responsibility.

    Want to know more about Forum AI?
    Website: https://www.byforum.com/
    X: https://x.com/TheForumAI
    LinkedIn: https://www.linkedin.com/company/byforum/
    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) Why Treating AI as a "Truth Engine" Is Dangerous
    (02:47) What Forum AI Does and Why Expert Judgment Matters
    (06:32) How Expert Thinking Is Extracted and Structured
    (09:40) Bias, Training Data, and the Myth of Objectivity in AI
    (14:04) Evaluating AI Through Consequences, Not Just Accuracy
    (18:48) Who Decides "Ground Truth" in Subjective Domains
    (24:27) How AI Models Are Actually Evaluated in Practice
    (28:24) Why Quality of Experts Beats Scale in AI Evaluation
    (36:33) Trust as the Biggest Bottleneck to AI Adoption
    (45:01) What "Good Judgment" Means for AI Systems
    (49:58) The Risks of Engagement-Driven AI Incentives
    (54:51) Transparency, Accountability, and the Future of AI


About Eye On A.I.

Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of the developing technology. AI is about to change your world, so pay attention.