
Techsplainers by IBM


31 episodes

  • What is vibe coding?

    19/12/2025 | 7min

    This episode of Techsplainers introduces vibe coding, the practice of using AI tools to generate software code through natural language prompts rather than manual coding. We explore how this approach follows a "code first, refine later" philosophy that prioritizes experimentation and rapid prototyping. The podcast walks through the four-step implementation process: choosing an AI coding assistant platform, defining requirements through clear prompts, refining the generated code, and reviewing before deployment. While highlighting vibe coding's ability to accelerate development and free human creativity, we also examine its limitations—including challenges with technical complexity, code quality, debugging, maintenance, and security concerns. The discussion concludes by examining how vibe coding is driving paradigm shifts in software development through quick prototyping, problem-first approaches, reduced risk with maximized impact, and multimodal interfaces that combine voice, visual, and text-based coding methods to create more intuitive development environments. Find more information at https://www.ibm.com/think/podcasts/techsplainers. Narrated by Amanda Downie.
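The four-step loop described in this episode can be sketched roughly as follows. This is a toy illustration, not a real assistant integration: `generate_code` is a stand-in for whatever AI coding platform you choose, and the "review" step is reduced to a trivial sanity check.

```python
# A minimal sketch of the vibe coding loop: prompt, generate, refine, review.
# The assistant call is stubbed out; a real workflow would call an AI coding
# platform's API here (all names below are hypothetical).

def generate_code(prompt: str) -> str:
    """Stub for an AI coding assistant; returns placeholder code."""
    return f"# generated from prompt: {prompt}\ndef solution():\n    pass\n"

def vibe_code(requirement: str, max_refinements: int = 2) -> str:
    # Steps 1-2: with a platform chosen, state the requirement as a clear prompt
    code = generate_code(requirement)
    # Step 3: refine the generated code through follow-up prompts
    for attempt in range(max_refinements):
        code = generate_code(f"refine attempt {attempt + 1}: {requirement}")
    # Step 4: review before deployment (here, only a trivial sanity check)
    assert "def" in code, "generated code failed review"
    return code

print(vibe_code("parse a CSV file and sum the second column"))
```

In practice the refinement loop is conversational, and the review step is where the episode's warnings about code quality, debugging, and security apply.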

  • What is retrieval augmented generation (RAG)?

    18/12/2025 | 10min

    This episode of Techsplainers explores retrieval augmented generation (RAG), a powerful technique that enhances generative AI by connecting models to external knowledge bases. We examine how RAG addresses critical limitations of large language models—their finite training data and knowledge cutoffs—by allowing them to access up-to-date, domain-specific information in real time. The podcast breaks down RAG's five-stage process: from receiving a user query to retrieving relevant information, integrating it into an augmented prompt, and generating an informed response. We dissect RAG's four core components—knowledge base, retriever, integration layer, and generator—explaining how they work together to create a more robust AI system. Special attention is given to embedding and chunking processes that transform unstructured data into searchable vector representations. The episode highlights RAG's numerous benefits, including cost efficiency compared to fine-tuning, reduced hallucinations, enhanced user trust through citations, expanded model capabilities, improved developer control, and stronger data security. Finally, we showcase diverse real-world applications across industries, from specialized chatbots and research tools to personalized recommendation engines. Find more information at https://www.ibm.com/think/podcasts/techsplainers. Narrated by Amanda Downie.
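The five-stage pipeline described here can be sketched end to end in a few lines. This toy version substitutes a bag-of-words vector for a neural embedding and an identity function for the generator; the pipeline shape (embed, retrieve, augment, generate) is the point, not the components.

```python
# A toy RAG pipeline: embed chunks, retrieve by cosine similarity,
# build an augmented prompt, and hand it to a (stubbed) generator.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a neural embedding: a simple bag-of-words vector
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rag_answer(query, knowledge_base, generate=lambda prompt: prompt):
    # Stages 1-2: embed the query and retrieve the most similar chunk
    q = embed(query)
    best = max(knowledge_base, key=lambda chunk: cosine(q, embed(chunk)))
    # Stages 3-4: integrate the retrieved context into an augmented prompt
    prompt = f"Context: {best}\nQuestion: {query}\nAnswer:"
    # Stage 5: generate the informed response (generator stubbed here)
    return generate(prompt)

kb = ["RAG connects language models to external knowledge bases.",
      "Transformers use self-attention to model word relationships."]
print(rag_answer("What does RAG connect models to?", kb))
```

A production system would replace the embedding with a vector model, the `max` scan with a vector database lookup, and the stub generator with an LLM call, but the data flow is the same.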

  • What are vision language models (VLMs)?

    17/12/2025 | 10min

    This episode of Techsplainers explores vision language models (VLMs), the sophisticated AI systems that bridge computer vision and natural language processing. We examine how these multimodal models understand relationships between images and text, allowing them to generate image descriptions, answer visual questions, and even create images from text prompts. The podcast dissects the architecture of VLMs, explaining the critical components of vision encoders (which process visual information into vector embeddings) and language encoders (which interpret textual data). We delve into training strategies, including contrastive learning methods like CLIP, masking techniques, generative approaches, and transfer learning from pretrained models. The discussion highlights real-world applications—from image captioning and generation to visual search, image segmentation, and object detection—while showcasing leading models like DeepSeek-VL2, Google's Gemini 2.0, OpenAI's GPT-4o, Meta's Llama 3.2, and NVIDIA's NVLM. Finally, we address implementation challenges similar to traditional LLMs, including data bias, computational complexity, and the risk of hallucinations. Find more information at https://www.ibm.com/think/podcasts/techsplainers. Narrated by Amanda Downie.
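The CLIP-style contrastive objective mentioned above can be illustrated with toy embeddings: matching image–text pairs are scored against all mismatched pairs, and the loss is low when each image is most similar to its own caption. The 2-D vectors below are illustrative stand-ins for the encoder outputs.

```python
# A stripped-down sketch of CLIP-style contrastive learning: cross-entropy
# over similarity scores, where the correct image-text pair sits on the
# diagonal of the similarity matrix.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def contrastive_loss(image_embs, text_embs, temperature=0.1):
    """Mean cross-entropy of matching image i with text i."""
    loss = 0.0
    for i, img in enumerate(image_embs):
        # Similarity of this image embedding to every text embedding
        sims = [sum(a * b for a, b in zip(img, txt)) / temperature
                for txt in text_embs]
        probs = softmax(sims)
        loss += -math.log(probs[i])  # the matching caption is index i
    return loss / len(image_embs)

aligned = contrastive_loss([[1, 0], [0, 1]], [[1, 0], [0, 1]])
shuffled = contrastive_loss([[1, 0], [0, 1]], [[0, 1], [1, 0]])
print(aligned < shuffled)  # aligned pairs yield the lower loss
```

Training the real vision and language encoders means minimizing this loss over huge batches of image–caption pairs; the toy numbers here only show why aligned embeddings score lower.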

  • What are large language models (LLMs)?

    16/12/2025 | 10min

    This episode of Techsplainers explores large language models (LLMs), the powerful AI systems revolutionizing how we interact with technology through human language. We break down how these massive statistical prediction machines are built on transformer architecture, enabling them to understand context and relationships between words far better than previous systems. The podcast walks through the complete development process—from pretraining on trillions of words and tokenization to self-supervised learning and the crucial self-attention mechanism that allows LLMs to capture linguistic relationships. We examine various fine-tuning methods, including supervised fine-tuning, reinforcement learning from human feedback (RLHF), and instruction tuning, that help adapt these models for specific uses. The discussion covers practical aspects like prompt engineering, temperature settings, context windows, and retrieval augmented generation (RAG) while showcasing real-world applications across industries. Finally, we address the significant challenges of LLMs, including hallucinations, biases, and resource demands, alongside governance frameworks and evaluation techniques used to ensure these powerful tools are deployed responsibly. Find more information at https://www.ibm.com/think/podcasts/techsplainers. Narrated by Amanda Downie.
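Of the practical knobs mentioned in this episode, temperature is the easiest to show in code: logits are divided by the temperature before the softmax, so low values sharpen the output distribution and high values flatten it. The logits below are made-up numbers for illustration.

```python
# Temperature scaling as used when sampling from an LLM's next-token
# distribution: divide logits by the temperature, then apply softmax.
import math

def softmax_with_temperature(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, temperature=0.2)
hot = softmax_with_temperature(logits, temperature=2.0)
# The top token's probability grows as temperature drops
print(cold[0] > hot[0])
```

At very low temperatures sampling approaches greedy decoding (the top token dominates), while high temperatures spread probability mass across tokens and produce more varied, riskier output.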

  • What is generative AI?

    15/12/2025 | 10min

    This episode of Techsplainers explores generative AI, the revolutionary technology that creates original content like text, images, video, and code in response to user prompts. We walk through how these systems work in three main phases: training foundation models on massive datasets, tuning them for specific applications, and continuously improving their outputs through evaluation. The podcast traces the evolution of key generative AI architectures—from variational autoencoders and generative adversarial networks to diffusion models and transformers—highlighting how each contributes to today's powerful AI tools. We examine generative AI's diverse applications across industries, from enhancing customer experiences and accelerating software development to transforming creative processes and scientific research. The episode also addresses emerging concepts like AI agents and agentic AI while candidly discussing the technology's challenges, including hallucinations, bias, security vulnerabilities, and deepfakes. Despite these concerns, the episode emphasizes how organizations are increasingly adopting generative AI, with analysts predicting 80% implementation by 2026. Find more information at https://www.ibm.com/think/podcasts/techsplainers. Narrated by Amanda Downie.


About Techsplainers by IBM

Introducing Techsplainers by IBM, your new podcast for quick, powerful takes on today's most important AI and tech topics. Each episode brings you bite-sized learning designed to fit your day, whether you're driving, exercising, or just curious for something new. This is just the beginning. Tune in every weekday at 6 AM ET for fresh insights, new voices, and smarter learning.

