
AI Engineering Podcast

Tobias Macey
AI Engineering Podcast
Latest episode

Available Episodes

5 of 60
  • Revolutionizing Production Systems: The Resolve AI Approach
    Summary
    In this episode of the AI Engineering Podcast, Spiros Xanthos, CEO of Resolve AI, shares his insights on building agentic capabilities for operational systems. He discusses the limitations of traditional observability tools and the need for AI agents that can reason through complex systems to provide actionable insights and solutions. The conversation highlights the architecture of Resolve AI, which integrates with existing tools to build a comprehensive understanding of production environments, and emphasizes the importance of context and memory in AI systems. Spiros also touches on the evolving role of AI in production systems, the potential for AI to augment human operators, and the need for continuous learning and adaptation to fully leverage these advancements.
    Announcements
    • Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems.
    • Your host is Tobias Macey and today I'm interviewing Spiros Xanthos about architecting agentic capabilities for operational challenges with managing production systems.
    Interview
    • Introduction
    • How did you get involved in machine learning?
    • Can you describe what Resolve AI is and the story behind it?
    • We have decades of experience as an industry in managing operational complexity. What are the critical failures in capabilities that you are addressing with the application of AI?
    • Given the existing capabilities of dedicated platforms (e.g. Grafana, PagerDuty, Splunk, etc.), what is your reasoning for building a new system vs. a new feature of an existing operational product?
    • Over the past couple of years the industry has developed a growing number of agent patterns. What was your approach in evaluating and selecting a particular approach for your product?
    • One of the complications of building any platform that supports the operational needs of engineering teams is the complexity of integrating with their technology stack. This is doubly true when building an AI system that needs rich context. What are the core primitives that you are relying on to build a robust offering?
    • How are you managing the learning process for your systems to allow for iterative discovery and improvement?
    • What are your strategies for personalizing those discoveries to a given customer and operating environment?
    • One of the interesting challenges in agentic systems is managing the user experience for human-in-the-loop and machine-to-human handoffs in each direction. How are you thinking about that, especially given the criticality of the systems that you are interacting with?
    • As more of the code that is running in production environments is co-developed with AI, what impact do you anticipate on the overall operational resilience of the systems being monitored?
    • One of the challenges of working with LLMs is the cold-start problem, where every conversation starts from scratch. How are you approaching the overall problem of context engineering and ensuring that you are consistently providing the necessary information for the model to be effective in its role?
    • What are the most interesting, innovative, or unexpected ways that you have seen Resolve AI used?
    • What are the most interesting, unexpected, or challenging lessons that you have learned while working on Resolve AI?
    • When is Resolve AI the wrong choice?
    • What do you have planned for the future of Resolve AI?
    Contact Info
    • LinkedIn
    Parting Question
    • From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    Resolve AI, Splunk, OpenTelemetry, Splunk Observability, Context Engineering, Grafana, Kubernetes, PagerDuty
    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
    (A short, hypothetical context-assembly sketch follows this entry.)
    --------  
    51:01
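    The context-engineering thread in the episode above can be made concrete with a small, purely illustrative sketch. This is not Resolve AI's implementation; it is a minimal Python example of assembling operational signals (an alert, recent deploys, log excerpts, prior findings) into a bounded prompt for an LLM-backed investigation step. All names and fields are hypothetical.

    # Hypothetical context-assembly sketch; not any vendor's actual architecture.
    from dataclasses import dataclass

    @dataclass
    class IncidentContext:
        """Container for operational signals pulled from existing tools."""
        alert_summary: str          # e.g. from an alerting system
        recent_deploys: list[str]   # e.g. from the CI/CD system
        log_excerpts: list[str]     # e.g. from a log aggregator
        prior_findings: list[str]   # memory of earlier investigations

    def build_prompt(ctx: IncidentContext, budget_chars: int = 8000) -> str:
        """Assemble a bounded prompt with the most load-bearing context first."""
        sections = [
            "You are assisting an on-call engineer with a production incident.",
            f"Active alert: {ctx.alert_summary}",
            "Recent deploys:\n" + "\n".join(f"- {d}" for d in ctx.recent_deploys),
            "Log excerpts:\n" + "\n".join(f"- {line}" for line in ctx.log_excerpts),
            "Findings from earlier investigations:\n" + "\n".join(f"- {f}" for f in ctx.prior_findings),
            "Propose the most likely root cause and the next diagnostic step.",
        ]
        # Crude truncation to respect a context budget; real systems would
        # rank and summarize sources instead of cutting them off.
        return "\n\n".join(sections)[:budget_chars]

    if __name__ == "__main__":
        ctx = IncidentContext(
            alert_summary="checkout-service p99 latency above 2s for 10 minutes",
            recent_deploys=["checkout-service v1.42 deployed 30 minutes ago"],
            log_excerpts=["ERROR connection pool exhausted (db-primary)"],
            prior_findings=["A similar spike last month was traced to pool sizing"],
        )
        print(build_prompt(ctx))

    In a real agentic system this context would be refreshed on every step and augmented with the memory of prior incidents that the episode discusses.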
  • Designing Scalable AI Systems with FastMCP: Challenges and Innovations
    Summary
    In this episode of the AI Engineering Podcast, Jeremiah Lowin, founder and CEO of Prefect Technologies, talks about the FastMCP framework and the design of MCP servers. Jeremiah explains the evolution of FastMCP, from its initial creation as a simpler alternative to the MCP SDK to its current role in facilitating the deployment of AI tools. The discussion covers the complexities of designing MCP servers, the importance of context engineering, and the potential pitfalls of overwhelming AI agents with too many tools. Jeremiah also highlights the importance of simplicity and incremental adoption in software design, and shares insights into the future of MCP and the broader AI ecosystem. The episode concludes with a look at the challenges of authentication and authorization in AI applications and the exciting potential of MCP as a protocol for the future of AI-driven business logic.
    Announcements
    • Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems.
    • Your host is Tobias Macey and today I'm interviewing Jeremiah Lowin about the FastMCP framework and how to design and build your own MCP servers.
    Interview
    • Introduction
    • How did you get involved in machine learning?
    • Can you start by describing what MCP is and its purpose in the ecosystem of AI applications?
    • What is FastMCP and what motivated you to create it?
    • Recognizing that MCP is relatively young, how would you characterize the landscape of MCP frameworks?
    • What are some of the stumbling blocks on the path to building a well-engineered MCP server?
    • What are the potential ramifications of poorly designed and implemented MCP implementations?
    • In the overall context of an AI-powered/agentic application, what are the tradeoffs of investing in the MCP protocol? (e.g. engineering effort, process isolation, tool creation, auth(n|z), etc.)
    • In your experience, what are the architectural patterns that you see in MCP implementation and usage?
    • There are a multitude of MCP servers available for a variety of use cases. What are the key factors that someone should be using to evaluate their viability for a production use case?
    • Can you give an overview of the key characteristics of FastMCP and why someone might select it as their implementation target for a custom MCP server?
    • How have the design, scope, and goals of the project evolved since you first started working on it?
    • For someone who is using FastMCP as the framework for creating their own AI tools, what are some of the design considerations or best practices that they should be aware of?
    • What are some of the ways that someone might consider integrating FastMCP into their existing Python-powered web applications (e.g. FastAPI, Django, Flask, etc.)?
    • As you continue to invest your time and energy into FastMCP, what is your overall goal for the project?
    • What are the most interesting, innovative, or unexpected ways that you have seen FastMCP used?
    • What are the most interesting, unexpected, or challenging lessons that you have learned while working on FastMCP?
    • When is FastMCP the wrong choice?
    • What do you have planned for the future of FastMCP?
    Contact Info
    • LinkedIn
    • GitHub
    Parting Question
    • From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    FastMCP, FastMCP Cloud, Prefect, Model Context Protocol (MCP), AI Tools, FastAPI, Python Decorator, Websockets, SSE (Server-Sent Events), Streamable HTTP, OAuth, MCP Gateway, MCP Sampling, Flask, Django, ASGI, MCP Elicitation, AuthKit, Dynamic Client Registration, smolagents, Large Active Models, A2A
    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
    (A minimal FastMCP server sketch follows this entry.)
    --------  
    1:13:57
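    To ground the FastMCP discussion above, here is a minimal sketch of an MCP server using FastMCP's decorator-based API. The tool and resource names are invented for illustration, and the transport choice is an assumption rather than a recommendation from the episode.

    # Minimal MCP server sketch using FastMCP (illustrative; names are invented).
    from fastmcp import FastMCP

    mcp = FastMCP("demo-server")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two integers and return the sum."""
        return a + b

    @mcp.resource("config://app-version")
    def app_version() -> str:
        """Expose a static piece of context as an MCP resource."""
        return "1.0.0"

    if __name__ == "__main__":
        # Defaults to the stdio transport that local MCP clients typically expect.
        mcp.run()

    An MCP-capable client can then discover and invoke the add tool over the protocol; FastMCP also supports HTTP-based transports for remote deployment, which is where the authentication and authorization concerns discussed in the episode come into play.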
  • Proactive Monitoring in Heavy Industry: The Role of AI and Human Curiosity
    Summary
    In this episode of the AI Engineering Podcast, Dr. Tara Javidi, CTO of KavAI, talks about developing AI systems for proactive monitoring in heavy industry. Dr. Javidi shares her background in mathematics and information theory, influenced by Claude Shannon's work, and discusses her approach to curiosity-driven AI that mimics human curiosity to improve data collection and predictive analytics. She explains how KavAI's platform uses generative AI models to enhance industrial monitoring by addressing informational blind spots and reducing reliance on human oversight. The conversation covers the architecture of KavAI's systems, integrating AI with existing workflows, building trust with operators, and the societal impact of AI in preventing environmental catastrophes, ultimately highlighting the future potential of information-centric AI models.
    Announcements
    • Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems.
    • Your host is Tobias Macey and today I'm interviewing Dr. Tara Javidi about building AI systems for proactive monitoring of physical environments for heavy industry.
    Interview
    • Introduction
    • How did you get involved in machine learning?
    • Can you describe what KavAI is and the story behind it?
    • What are some of the current state-of-the-art applications of AI/ML for monitoring and accident prevention in industrial environments?
    • What are the shortcomings of those approaches?
    • What are some examples of the types of harm that you are focused on preventing or mitigating with your platform?
    • On your site it mentions that you have created a foundation model for physical awareness. What are some examples of the types of predictive/generative capabilities that your model provides?
    • A perennial challenge when building any digital model of a physical system is the lack of absolute fidelity. What are the key sources of information acquisition that you rely on for your platform?
    • In addition to your foundation model, what are the other systems that you incorporate to perform analysis and catalyze action?
    • Can you describe the overall system architecture of your platform?
    • What are some of the ways that you are able to integrate learnings across industries and environments to improve the overall capacity of your models?
    • What are the most interesting, innovative, or unexpected ways that you have seen KavAI used?
    • What are the most interesting, unexpected, or challenging lessons that you have learned while working on KavAI?
    • When is KavAI/Physical AI the wrong choice?
    • What do you have planned for the future of KavAI?
    Contact Info
    • LinkedIn
    Parting Question
    • From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
    Links
    KavAI, Information Theory, Claude Shannon
    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
    --------  
    40:57
  • Navigating the AI Landscape: Challenges and Innovations in Retail
    Summary
    In this episode of the AI Engineering Podcast, machine learning engineer Shashank Kapadia explores the transformative role of generative AI in retail. Shashank shares his journey from an engineering background to becoming a key player in ML, highlighting the excitement of understanding human behavior at scale through AI. He discusses the challenges and opportunities presented by generative AI in retail, where it complements traditional ML by enhancing explainability and personalization, predicting consumer needs, and driving autonomous shopping agents and emotional commerce. Shashank elaborates on the architectural and operational shifts required to integrate generative AI into existing systems, emphasizing orchestration, safety nets, and continuous learning loops, while also addressing the balance between building and buying AI solutions, considering factors like data privacy and customization.
    Announcements
    • Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems.
    • Your host is Tobias Macey and today I'm interviewing Shashank Kapadia about applications of generative AI in retail.
    Interview
    • Introduction
    • How did you get involved in machine learning?
    • Can you summarize the main applications of generative AI that you are seeing the most benefit from in retail/ecommerce?
    • What are the major architectural patterns that you are deploying for generative AI workloads?
    • Working at an organization like Walmart, you already had a substantial investment in ML/MLOps. What are the elements of that organizational capability that remain the same, and what are the catalyzed changes as a result of generative models?
    • When working at the scale of Walmart, what are the different types of bottlenecks that you encounter which can be ignored at smaller orders of magnitude?
    • Generative AI introduces new risks around brand reputation, accuracy, trustworthiness, etc. What are the architectural components that you find most effective in managing and monitoring the interactions that you provide to your customers?
    • Can you describe the architecture of the technical systems that you have built to enable the organization to take advantage of generative models?
    • What are the human elements that you rely on to ensure the safety of your AI products?
    • What are the most interesting, innovative, or unexpected ways that you have seen generative AI break at scale?
    • What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI?
    • When is generative AI the wrong choice?
    • What are you paying special attention to over the next 6-36 months in AI?
    Contact Info
    • LinkedIn
    Parting Question
    • From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    Walmart Labs
    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
    --------  
    52:09
  • The Anti-CRM CRM: How Spiro Uses AI to Transform Sales
    Summary
    In this episode of the AI Engineering Podcast, Adam Honig, founder of Spiro AI, talks about using AI to automate CRM systems, particularly in the manufacturing sector. Adam shares his journey from running a consulting company focused on Salesforce to founding Spiro, and discusses the challenges of traditional CRM systems where data entry is often neglected. He explains how Spiro addresses this issue by automating data collection from emails, phone calls, and other communications, providing a rich dataset for machine learning models to generate valuable insights. Adam highlights how Spiro's AI-driven CRM system is tailored to the manufacturing industry's unique needs, where sales are relationship-driven rather than funnel-based, and emphasizes the importance of understanding customer interactions and order histories to predict future business opportunities. The conversation also touches on the evolution of AI models, leveraging powerful third-party APIs, managing context windows, and platform dependencies, with Adam sharing insights into Spiro's future plans, including product recommendations and dynamic data modeling approaches.
    Announcements
    • Hello and welcome to the AI Engineering Podcast, your guide to the fast-moving world of building scalable and maintainable AI systems.
    • Your host is Tobias Macey and today I'm interviewing Adam Honig about using AI to automate CRM maintenance.
    Interview
    • Introduction
    • How did you get involved in machine learning?
    • Can you describe what Spiro is and the story behind it?
    • What are the specific challenges posed by the manufacturing industry with regards to sales and customer interactions?
    • How does the type of manufacturing and target customer influence the level of effort and communication involved in the sales and customer service cycles?
    • Before we discuss the opportunities for automation, can you describe the typical interaction patterns and workflows involved in the care and feeding of CRM systems?
    • Spiro has been around since 2014, long pre-dating the current era of generative models. What were your initial targets for improving efficiency and reducing toil for your customers with the aid of AI/ML?
    • How have the generational changes of deep learning and now generative AI changed the ways that you think about what is possible in your product?
    • Generative models reduce the level of effort to get a proof of concept for language-oriented workflows. How are you pairing them with the more narrow AI that you have built?
    • Can you describe the overall architecture of your platform and how it has evolved in recent years?
    • While generative models are powerful, they can also become expensive, and the costs are hard to predict. How are you thinking about vendor selection and platform risk in the application of those models?
    • What are the opportunities that you see for the adoption of more autonomous applications of language models in your product? (e.g. agents)
    • What are the confidence-building steps that you are focusing on as you investigate those opportunities?
    • What are the most interesting, innovative, or unexpected ways that you have seen Spiro used?
    • What are the most interesting, unexpected, or challenging lessons that you have learned while working on AI in the CRM space?
    • When is AI the wrong choice for CRM workflows?
    • What do you have planned for the future of Spiro?
    Contact Info
    • LinkedIn
    Parting Question
    • From your perspective, what are the biggest gaps in tooling, technology, or training for AI systems today?
    Closing Announcements
    Thank you for listening! Don't forget to check out our other shows. The Data Engineering Podcast covers the latest on modern data management. Podcast.__init__ covers the Python language, its community, and the innovative ways it is being used. Visit the site to subscribe to the show, sign up for the mailing list, and read the show notes. If you've learned something or tried out a project from the show then tell us about it! Email [email protected] with your story. To help other people find the show please leave a review on iTunes and tell your friends and co-workers.
    Links
    Spiro, Deepgram, Cognee Episode, Agentic Memory, GraphRAG, Podcast Episode, OpenAI Assistant API
    The intro and outro music is from Hitman's Lovesong feat. Paola Graziano by The Freak Fandango Orchestra / CC BY-SA 3.0
    --------  
    46:48

More Education podcasts

About AI Engineering Podcast

This show is your guidebook to building scalable and maintainable AI systems. You will learn how to architect AI applications, how to apply AI to your work, and what considerations are involved in building or customizing new models. Everything that you need to know to deliver real impact and value with machine learning and artificial intelligence.
Podcast website

Listen to AI Engineering Podcast, 6 Minute English, and many other podcasts from around the world with the radio.net app

Get the free radio.net app

  • Save your favorite stations and podcasts
  • Streaming via Wi-Fi or Bluetooth
  • CarPlay & Android Auto compatible
  • And many more features

AI Engineering Podcast: Podcasts in the group

Apps
Social