
Industry40.tv

Kudzai Manditereza
Latest episodes

85 episodes

  • Agentic AI Framework for Manufacturing Operations: Gilad Langer - Head of Digital Manufacturing Practice, Tulip Interfaces

    22/10/2025 | 1h 1min

    The promise of AI agents in manufacturing is about creating systems that can actually adapt when your supply chain gets disrupted, when a machine fails, or when customer demand shifts overnight. But here's the problem: without a clear framework, you end up with AI pilots across different parts of the plant, each solving local problems, none of them working together: a collection of disconnected bots, overlapping efforts, and a governance nightmare.

    Gilad Langer, Head of Digital Manufacturing Practice at Tulip Interfaces, has worked on this exact problem for 30 years, starting with his PhD research on multi-agent systems in the 1990s. His recent framework for Composable Agentic AI in Manufacturing Operations offers a fundamentally different approach to data architecture and governance. More importantly, it provides a practical path forward for organizations trapped between their legacy systems and the promise of AI-driven operations.

    Why Manufacturing Needs an Agentic AI Framework

    Manufacturing operations are what systems scientists call "complex adaptive systems": they have more in common with traffic patterns and weather systems than with customer service chatbots. These systems are inherently chaotic, but not in a bad way. They have patterns, and those patterns can be influenced.

    Think about the Toyota Production System. Toyota figured out decades ago that manufacturing behaves like a complex system. Their insight? Don't try to control everything from the top down. Instead, create simple rules that prevent the system from spiraling into bad patterns: pull instead of push to reduce bottlenecks, remove obstacles immediately through on-demand problem solving, and create flow rather than fighting the natural dynamics of the system.

    This matters because AI agents work the same way. Each agent is a discrete entity following its own goals, working autonomously but interacting with others. When you put multiple agents together, you get another complex adaptive system. And here's where it gets interesting: if you use a complex adaptive system (your AI agents) to manage a complex adaptive system (your manufacturing operations), you can get the best of both worlds: adaptability plus control. But only if you have the right framework.

    A Data Architecture for AI Agents in Manufacturing

    Before you can deploy agents effectively, you need to solve a fundamental data problem. Traditional manufacturing data models are too complicated. They try to capture everything (the physical objects, the transactions, the relationships, the history) in rigid database structures that require a data scientist to interpret.

    The Artifact Model takes a different approach. Walk into any manufacturing facility and ask: what do we actually have here? You'll get a surprisingly short list:

    • Physical artifacts: machines, tools, rooms, areas, materials, work-in-progress, finished products. Things you can touch.
    • Operational artifacts: orders, defects, tasks, events, schedules. Things you do with or to the physical stuff.

    That's it. Every manufacturing plant, regardless of industry, operates with roughly 10-12 types of artifacts. A CNC machine and a testing device? They're 80% the same from a data perspective: different specific attributes, sure, but the core structure is identical. When your operators, engineers, and agents can all look at the same data structure and immediately understand what they're seeing, you've solved the democratization problem.
    No more waiting weeks for someone to write a custom query or generate a report. The complexity of your data model should never exceed the complexity of what you're actually making. It also means your agents have a shared vocabulary: a machine agent knows how to find its maintenance history, a product agent can query its quality parameters, and a schedule agent understands which resources are available. They're all working from the same playbook.

    Here's why this matters:

    • Simplicity enables democratization. When your data model reflects the actual shop floor rather than abstract database optimization, engineers and operators can understand it. They can build agents. They can govern data quality. You're not the bottleneck anymore.
    • Templates enable scale. Yes, a CNC machine and a test stand are different, but 80% of their attributes are identical: location, status, maintenance history, performance metrics. You create a common template for "machines" with specific extensions, and your artifact model grows organically but stays manageable.
    • Relationships become intuitive. Instead of complex foreign key relationships, you have natural connections: this material is processed by this machine, this task is part of this order. Knowledge graphs build themselves, and AI agents understand context without complex joins.
    • History separates from structure. The artifact model defines what things are, not what happened to them. All the transactional data (your UNS streams, your historian data, your event logs) links to artifacts by ID. Agents can pull their entire history when needed without bloating the core model.

    This is fundamentally different from trying to make traditional MES or ERP data models work with AI. Those systems were designed when data storage was expensive and computing power was limited. The artifact model assumes modern capabilities: cheap storage, fast queries, and AI that can make sense of unstructured history. A minimal sketch of what such a model might look like follows below.
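    To make this concrete, here is a minimal Python sketch of what an artifact model in this spirit might look like. All names here (Artifact, MACHINE_TEMPLATE, record_event) are illustrative assumptions for this summary, not Tulip's actual schema.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Artifact:
    """Defines what a thing IS. Its history lives elsewhere, linked by id."""
    id: str
    kind: str                       # "machine", "order", "material", ...
    attributes: dict[str, Any] = field(default_factory=dict)

# A common template for all "machine" artifacts; specific machines extend it.
MACHINE_TEMPLATE: dict[str, Any] = {
    "location": None, "status": "idle", "maintenance_due": None, "oee": None,
}

def make_machine(artifact_id: str, **extensions: Any) -> Artifact:
    """Roughly 80% shared template, 20% machine-specific attributes."""
    return Artifact(artifact_id, "machine", {**MACHINE_TEMPLATE, **extensions})

# Transactional history (UNS streams, historian data, event logs) stays
# outside the core model and references artifacts by id only.
event_log: list[dict[str, Any]] = []

def record_event(artifact_id: str, event: str, **payload: Any) -> None:
    event_log.append({"artifact_id": artifact_id, "event": event, **payload})

cnc = make_machine("CNC-07", spindle_speed_max=12000)
record_event(cnc.id, "alarm", code="E42")
history = [e for e in event_log if e["artifact_id"] == cnc.id]  # an agent's view
```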
    Types of AI Agents in Manufacturing Operations

    With your data foundation in place, you can deploy agents strategically. The framework identifies four categories, each serving a specific purpose.

    Physical agents represent actual objects on your shop floor:
    • Machine agents monitor equipment health, track performance metrics like OEE, and predict failures before they happen
    • Product agents follow individual units through production, maintaining quality data and genealogy
    • Tote agents track material movement, making it trivial to find components and maintain traceability

    Operational agents manage workflow and respond to events:
    • Order agents oversee entire production orders from start to finish, tracking progress and material consumption
    • Deviation agents activate when something goes wrong, classifying issues and triggering appropriate responses
    • Schedule agents dynamically adjust production plans based on real-time conditions

    System agents handle integration with your existing infrastructure:
    • ERP agents manage the data flow between your production platform and enterprise systems
    • UNS agents enable real-time data exchange across your entire operational landscape
    • Data lake agents ensure production data flows to your analytics systems for model training and insights
    • Device agents connect sensors, scanners, and instruments seamlessly

    Staff agents augment human capabilities:
    • Quality research agents help operators find documentation and troubleshooting steps instantly
    • App builder agents generate templates and suggest structures, accelerating development for citizen developers

    The key insight: these agent types align with your Artifact Model. A machine agent isn't trying to understand everything about your plant; it's focused on one machine, represented consistently in your data layer. This bounded scope is what makes agents practical and safe, as the sketch below illustrates.
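    Here is an equally minimal sketch of that bounded scope, continuing the hypothetical artifact model above. The class name and the failure heuristic are invented for illustration; they are not Tulip's implementation.

```python
# Illustrative sketch: a machine agent that only sees its own artifact's history.

class MachineAgent:
    def __init__(self, artifact_id: str, events: list[dict]):
        self.artifact_id = artifact_id
        # Bounded scope: the agent filters the log down to its own artifact.
        self.events = [e for e in events if e["artifact_id"] == artifact_id]

    def health_status(self) -> str:
        """Toy heuristic: escalate when alarms cluster in recent history."""
        recent_alarms = [e for e in self.events[-20:] if e["event"] == "alarm"]
        if len(recent_alarms) >= 3:
            return "predicted-failure"   # would hand off to a deviation agent
        return "ok"

log = [{"artifact_id": "CNC-07", "event": "alarm", "code": "E42"}] * 3
print(MachineAgent("CNC-07", log).health_status())  # predicted-failure
```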
    Practical Implementation of Agentic AI in Manufacturing

    The biggest challenge isn't technical; it's cultural and organizational. Manufacturing leaders face a paradox: how do you govern a system designed for bottom-up emergence without crushing the adaptability that makes it valuable?

    The answer comes from understanding composability. True composable systems have five characteristics: bottom-up development, iterative improvement, lean operations, democratized creation, and human-centric design. Your governance framework needs to enable these characteristics, not fight them.

    Start absurdly small. Don't create a plant-wide governance framework before you've deployed a single agent. Pick one critical machine that causes frequent disruptions. Put sensors on it. Create an artifact record. Build one agent that helps operators understand when it's about to fail. This takes hours, not months. Learn what governance you actually need from this first implementation. Maybe it's "who can create agents for critical equipment?" Maybe it's "what data sources can agents access?" You don't know until you do it.

    Build governance capabilities as patterns emerge. After three or four agent deployments, you'll see patterns:
    • Certain types of agents are universally useful (create templates)
    • Some data sources need access controls (implement security)
    • Agent interactions need logging (add observability)
    • Some agents need human approval before action (build workflow)
    Each governance capability solves a real problem you've experienced, not a theoretical concern. This keeps governance lean and relevant.

    Focus on agent quality, not agent count. Traditional metrics ask "how many systems have we deployed?" Agentic systems need different measures:
    • How quickly can operators get answers from agents?
    • Do agents have access to the data they need?
    • Are agent recommendations being followed or ignored?
    • When agents fail, how fast do we detect and respond?
    You're governing a living system, not managing a project portfolio.

    Embrace the timeline: hours for impact, months for scale. If someone asks how long it takes to connect a machine and deploy an agent that delivers value, the answer should be "hours." If they ask how long it takes to transform plant-wide operations, the answer is "many months of iteration." This is the opposite of traditional implementations, where you spend months in design before seeing any value. The time investment shifts from up-front planning to continuous improvement.

    Mandate platform composability. Here's the hard truth: you cannot do this with traditional MES, QMS, or ERP systems. Those platforms were built for the opposite philosophy: centralized control, up-front design, change management. Trying to retrofit them for agentic AI is like trying to convert a mainframe into a cloud-native microservices platform. Use Denga's test: ask if your platform supports bottom-up development, truly democratizes content creation, enables lean iteration, and maintains human agency. If the answer to any of these is "well, with some customization..." you're fighting the wrong battle. The platform question isn't about vendor preference; it's about architectural compatibility. Your platform needs to be designed for agent-based operations from the ground up.

    Conclusion

    The shift to agentic AI in manufacturing isn't primarily a technology challenge; it's a data architecture and governance challenge. The hard questions aren't about which AI models to use, but about:
    • How do we structure data for agent autonomy while maintaining system coherence?
    • How do we govern bottom-up creation without losing control?
    • How do we measure system health when behavior is emergent rather than designed?
    • How do we train organizations to think in agents rather than applications?

    These are exactly the kinds of strategic questions data and analytics leaders need to answer. The frameworks exist. The technology is ready. The question is whether your data architecture can support it.

    Start with one machine, one agent, one use case. Learn what your organization actually needs rather than what you think it needs. Build your governance framework from real experience, not theoretical concerns. And most importantly, accept that the goal isn't to design the perfect system up front; it's to create a system that gets better every day through emergence and adaptation. That's how nature builds systems that survive and thrive through constant change. That's how manufacturing needs to work in an unpredictable world. And that's the opportunity for data leaders who are willing to rethink their fundamental assumptions about architecture and governance.

  • Building a Knowledge Graph Context Layer for Industrial AI: Bob van de Kuilen - Director, Thred

    17/9/2025 | 54min

    Context isn't static. It's a living layer of knowledge built through problem-solving, conversation, and understanding the complex relationships on the factory floor. This simple truth is often overlooked in industrial data strategies. We've been conditioned to believe that context can be predefined, baked into standards, taxonomies, and hierarchies. But in real-world manufacturing, things change, people think differently, and use cases evolve. So how can we build this dynamic layer of understanding for industrial AI?

    In our latest AI in Manufacturing episode, I spoke with Bob van de Kuilen, Director at Thred, about a more human-centric approach to industrial data contextualisation using Knowledge Graphs. Thred is a tool that plugs into the Ignition platform, enabling users to visualize their factory assets in a knowledge graph, link related data points, embed domain expertise, and deliver structured, contextualized data to AI and analytics tools (a minimal sketch of the idea follows below).

    We discuss:
    ✅ Why traditional approaches to data context often fail
    ✅ How Knowledge Graphs act as a mind map for data
    ✅ The practical steps to building context
    ✅ How this new context layer serves as the perfect foundation for AI agents
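    As a rough illustration of the "mind map" idea, here is a minimal knowledge-graph sketch in Python. The asset names, relationship types, and OPC UA tag are invented for illustration; this is not Thred's data model or Ignition's API.

```python
# A knowledge graph reduced to its essence: (subject, relation, object) triples.
triples = [
    ("Line-2",    "contains",      "Filler-01"),
    ("Filler-01", "has_tag",       "ns=2;s=Filler01/Temperature"),  # hypothetical tag
    ("Filler-01", "documented_by", "SOP-114"),
    ("Filler-01", "feeds",         "Capper-01"),
]

def context_for(node: str) -> list[tuple[str, str, str]]:
    """Everything directly linked to a node: the structured context an AI
    agent or analytics tool receives instead of a bare tag value."""
    return [t for t in triples if node in (t[0], t[2])]

print(context_for("Filler-01"))
```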

  • Standardizing Industrial Data Architecture with ISA-95: Jeroen Janssen - MES/MOM Consultant, Rhize

    10/9/2025 | 54min

    ISA-95 is a standard that's often misunderstood, but incredibly powerful. While many think ISA-95 is rigid or overly complex, it actually enables flexibility by:
    ⇨ Defining a shared vocabulary for manufacturing concepts, creating a true ontology for your data.
    ⇨ Creating scalable placeholders for every type of data, so you can start small and add new use cases later without rebuilding everything.
    ⇨ Providing the "why" behind events, not just the "what," giving crucial context to your analytics and AI models.

    But how do you move from theory to a practical, modern implementation? In our latest AI in Manufacturing podcast episode, we explore exactly that with ISA-95 expert Jeroen Janssen, an MES/MOM consultant at Rhize Manufacturing Data Hub.

    In the episode, you'll learn:
    ✅ How to overcome a data culture that creates silos.
    ✅ The "use case stacking" method for a phased, value-driven implementation.
    ✅ What a native ISA-95 data hub looks like and how a graph database can bring it to life.
    ✅ Why this standardized approach is the key to unlocking…

    (A small sketch of the shared-vocabulary idea follows below.)
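    To illustrate the "shared vocabulary" and "why behind events" points, here is a minimal Python sketch of ISA-95-flavoured objects. These classes are simplified stand-ins inspired by the standard's equipment hierarchy and operations models, not a faithful rendering of ISA-95 Part 2.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Equipment:
    """A node in the ISA-95 equipment hierarchy (enterprise/site/area/...)."""
    id: str
    level: str
    parent: Optional[str] = None

@dataclass
class OperationsEvent:
    """An event that carries its 'why': the request or order that caused it."""
    equipment_id: str
    value: float
    operations_request: str      # e.g. the work order behind the measurement

site = Equipment("Plant-A", "site")
mixer = Equipment("Mixer-3", "workUnit", parent="Plant-A")
evt = OperationsEvent(mixer.id, 72.5, operations_request="WO-1042")
```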

  • Information Management and AI in Modern Manufacturing: Jeff Knepper - President, Flow Software

    3/9/2025 | 1h 5min

    Is the Timebase free historian getting an AI-native DataOps component with knowledge graph capability? You'll hear it here first. In the latest episode of the AI in Manufacturing podcast, I sit down with Jeff Knepper, President at Flow Software Inc., to discuss the intersection of information management and AI in modern manufacturing, plus the announcement of the Timebase Atlas launch.

    Here's some of what we cover in this episode:
    ✅ Why manufacturers struggle to make use of their data
    ✅ Building reliable pipelines for AI-driven use cases
    ✅ AI agents in manufacturing: where they fit and what they need
    ✅ Unified Analytics Framework vs. Unified Namespace
    ✅ Historization strategies: best practices from edge to cloud
    ✅ Timebase Atlas launch announcement: data modeling, pipelines, knowledge graphs, and AI interfaces
    ✅ MCP and Flow AI Gateway: beyond APIs to context-aware agent interfaces

  • Time-Series Data Quality and Reliability for Manufacturing AI: Bert Baeck - Co-Founder and CEO, Timeseer.AI

    27/8/2025 | 52min

    Most data-quality initiatives focus on things like freshness or schema. That works for IT data, but not for sensor data. Sensor data is different: it reflects physics. To trust it, you need contextual, physics-aware checks. That means spotting (the first two are sketched in code below):
    → Impossible jumps
    → Flatlines (long quiet periods)
    → Oscillations
    → Broken causal patterns (e.g., valve opens → flow should increase)

    It's no surprise that poor data quality is one of the biggest reasons manufacturers struggle to scale AI initiatives. This isn't just data science; it's operations science. Think of data quality as infrastructure: a trust layer between your OT data sources and your AI tools. Making that real requires four building blocks:
    1. Scoring: physics-aware anomaly rules and baselines
    2. Monitoring: continuous validation at the right cadence (real-time or daily)
    3. Cleaning & validation: auto-fix what you can; escalate what you can't
    4. Uniformization & SLAs: define "good enough" and enforce it before data is consumed

    Why it matters:
    ✅ Data teams: less cleansing, faster delivery
    ✅ AI models: reliable inputs = repeatable results
    ✅ Ops teams: catch failing sensors before downtime
    ✅ Business: avoid safety incidents, billing errors, bad decisions

    In the latest episode of the AI in Manufacturing podcast, I sat down with Bert Baeck, Co-Founder and CEO of Timeseer.AI, to discuss time-series data quality and reliability strategies for AI in manufacturing applications.
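    As a rough illustration, here is a minimal Python sketch of two of those physics-aware checks. The thresholds and function names are assumptions made for this example, not Timeseer.AI's implementation.

```python
def impossible_jumps(values: list[float], max_rate: float) -> list[int]:
    """Indices where consecutive samples change faster than physics allows."""
    return [i for i in range(1, len(values))
            if abs(values[i] - values[i - 1]) > max_rate]

def flatlines(values: list[float], min_run: int = 10,
              tolerance: float = 1e-6) -> list[int]:
    """Start indices of runs where a 'live' sensor stops moving, which
    usually indicates a stuck or disconnected instrument."""
    runs, start = [], 0
    for i in range(1, len(values)):
        if abs(values[i] - values[i - 1]) > tolerance:
            if i - start >= min_run:
                runs.append(start)
            start = i
    if len(values) - start >= min_run:
        runs.append(start)
    return runs

signal = [10.0, 10.2, 55.0, 10.4] + [10.4] * 12
print(impossible_jumps(signal, max_rate=5.0))  # [2, 3]
print(flatlines(signal))                       # [3]
```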


About Industry40.tv

Each episode of the Industry40.tv podcast treats you to an in-depth interview with a leading AI practitioner, exploring the application of artificial intelligence in manufacturing and offering practical guidance for successful implementation.