
3530: Candy Crush Accessibility Lessons From a 200 Million Player Game
24/12/2025 | 24min
If you have ever opened Candy Crush over the holidays without thinking about the design decisions behind every swipe, this episode offers a rare look behind the curtain. I sit down with Abigail Rindo, Head of Creative at King, to unpack how accessibility has evolved from a well-meaning afterthought into a core creative and commercial practice inside one of the world's most recognizable gaming studios. With more than 200 million people playing King's games each month, Abigail explains why inclusive design cannot be treated as charity or compliance, but as a responsibility that directly shapes product quality, player loyalty, and long-term growth. One of the moments that really stayed with me in this conversation is the data. More than a quarter of King's global player base self-identifies as having an accessibility need. Even more players benefit from accessibility features without ever labeling themselves that way. Abigail shares how adjustments like customizable audio for tinnitus, reduced flashing to limit eye strain, and subtle interaction changes can quietly transform everyday play for millions of people. These are not edge cases. They are everyday realities for a massive audience that lives with these games as part of their daily routine. We also dig into how inclusive design sparks better creativity rather than limiting it. Abigail walks me through updates to Candy Crush Soda Saga, including the "hold and drag" mechanic that allows players to preview a move before committing. Inspired by the logic of holding a chess piece before placing it, this feature emerged directly from player research around visibility, dexterity, and comfort. It is a reminder that creative constraints, when grounded in real human needs, often lead to smarter and more elegant solutions. Beyond mechanics and metrics, this conversation goes deeper into storytelling, empathy, and team culture.
Abigail explains why inclusive design only works when inclusive teams are involved from the start, and how global storytelling choices help King design worlds that resonate everywhere from Stockholm to Antarctica. We also talk about live service realities, blending quantitative data about what players do with qualitative insight into why they do it, especially when a game has been evolving for more than a decade.

3529: How Ping Identity Sees the Next Chapter of Digital Identity
23/12/2025 | 27min
What does it actually mean to prove who we are online in 2025, and why does it still feel so fragile? In this episode of Tech Talks Daily, I sit down with Alex Laurie from Ping Identity to talk about why digital identity has reached a real moment of tension in the UK. As more of our lives move online, from banking and healthcare to social platforms and government services, the gap between how identity should work and how it actually works keeps widening. Alex shares why the UK now feels out of step with other regions when it comes to online identity schemes, and how heavy reliance on centralized models is slowing adoption while weakening public trust. We spend time unpacking the practical consequences of today's verification systems. Age checks are regularly bypassed, fraud continues to grow, and users are often asked to hand over far more personal data than feels reasonable just to access everyday services. At the same time, public pressure around online safety is rising fast. That creates an uncomfortable push and pull between tighter controls and the expectation of fast, low-friction access. Alex makes the case that this tension exists because the underlying approach is flawed, and that proving something simple, like age, should never require revealing an entire digital identity. From there, the conversation turns to decentralized identity and why it is gaining momentum globally. Instead of placing sensitive data into large centralized databases, decentralized models allow individuals to hold and present verified credentials on their own terms. For me, this reframes digital identity as a right rather than a feature, and opens the door to systems that feel more privacy-aware, inclusive, and resilient. We also explore how agentic AI could play a role here, helping people manage, present, and protect their credentials intelligently without adding complexity or new risks. 
With fresh consumer research from Ping Identity informing the discussion, this episode looks closely at where trust, privacy, and identity are heading next, and why the choices made now will shape how we prove who we are online for years to come. Are we finally ready to rethink digital identity, and if so, what does that mean for all of us?

3528: How Boomi Thinks About Scaling AI Without Losing Control
22/12/2025 | 26min
What does it really mean to keep humans at the center of AI when agentic systems are accelerating faster than most organizations can govern them? At AWS re:Invent, I sat down with Michael Bachman from Boomi for a wide-ranging conversation that cut through the hype and focused on the harder questions many leaders are quietly asking. Michael leads technical and market research at Boomi, spending his time looking five to ten years ahead and translating future signals into decisions companies need to make today. That long view shaped a thoughtful discussion on human-centric AI, trust versus autonomy, and why governance can no longer be treated as an afterthought. As businesses rush toward agentic AI, swarms of autonomous systems, and large-scale automation, Michael shared why this moment makes him both optimistic and cautious. He explained why security, legal, and governance teams must be involved early, not retrofitted later, and why observability and sovereignty will become non-negotiable as agents move from experimentation into production. With tens of thousands of agents already deployed through Boomi, the stakes are rising quickly, and organizations that ignore guardrails today may struggle to regain control tomorrow. We also explored one of the biggest paradoxes of the AI era. The more capable these systems become, the more important human judgment and critical thinking are. Michael unpacked what it means to stay in the loop or on the loop, how trust in agentic systems should scale gradually, and why replacing human workers outright is often a short-term mindset that creates long-term risk. Instead, he argued that the real opportunity lies in amplifying human capability, enabling smaller teams to achieve outcomes that were previously out of reach. 
Looking further ahead, the conversation turned to the limits of large language models, the likelihood of an AI research reset, and why future breakthroughs may come from hybrid approaches that combine probabilistic models, symbolic reasoning, and new hardware architectures. Michael also reflected on how AI is changing how we search, learn, and think, and why fact-checking, creativity, and cognitive discipline matter more than ever as AI assistants become embedded in daily life. This episode offers a grounded, future-facing perspective on where AI is heading, why integration platforms are becoming connective tissue for modern systems, and how leaders can approach the next few years with both ambition and responsibility.
Useful Links
Learn More About Boomi
Connect with Michael Bachman
Algorithms to Live By: The Computer Science of Human Decisions
Tech Talks Daily is sponsored by Denodo

3527: How AWS Is Building Trust Into Responsible AI Adoption
21/12/2025 | 27min
What does responsible AI really look like when it moves beyond policy papers and starts shaping who gets to build, create, and lead in the next phase of the digital economy? In this conversation recorded during AWS re:Invent, I'm joined by Diya Wynn, Principal for Responsible AI and Global AI Public Policy at Amazon Web Services. With more than 25 years of experience spanning the internet, e-commerce, mobile, cloud, and artificial intelligence, Diya brings a grounded and deeply human perspective to a topic that is often reduced to technical debates or regulatory headlines. Our discussion centers on trust as the real foundation for AI adoption. Diya explains why responsible AI is not about slowing innovation, but about making sure innovation reaches more people in meaningful ways. We talk about how standards and legislation can shape better outcomes when they are informed by real-world capabilities, and why education and skills development will matter just as much as model performance in the years ahead. We also explore how generative AI is changing access for underrepresented founders and creators. Drawing on examples from AWS programs, including work with accelerators, community organizations, and educational partners, Diya shares how tools like Amazon Bedrock and Amazon Q are lowering technical barriers so ideas can move faster from concept to execution. The conversation touches on why access without trust falls short, and why transparency, fairness, and diverse perspectives have to be part of how AI systems are designed and deployed. There's an honest look at the tension many leaders feel right now. AI promises efficiency and scale, but it also raises valid concerns around bias, accountability, and long-term impact. Diya doesn't shy away from those concerns. Instead, she explains how responsible AI practices inside AWS aim to address them through testing, documentation, and people-centered design, while still giving organizations the confidence to move forward. 
This episode is as much about the future of work and opportunity as it is about technology. It asks who gets to participate, who gets to benefit, and how today's decisions will shape tomorrow's innovation economy. As generative AI becomes part of everyday business life, how do we make sure responsibility, access, and trust grow alongside it, and what role do we each play in shaping that future?
Useful Links
Connect With Diya Wynn
AWS Responsible AI
Tech Talks Daily is sponsored by Denodo

3526: TinyMCE and the Human Side of Developer Experience
20/12/2025 | 31min
What does it really mean to support developers in a world where the tools are getting smarter, the expectations are higher, and the human side of technology is easier to forget? In this episode of Tech Talks Daily, I sit down with Frédéric Harper, Senior Developer Relations Manager at TinyMCE, for a thoughtful conversation about what it takes to serve developer communities with credibility, empathy, and long-term intent. With more than twenty years in the tech industry, Fred's career spans hands-on web development, open source advocacy, and senior DevRel roles at companies including Microsoft, Mozilla, Fitbit, and npm. That journey gives him a rare perspective on how developer needs have evolved, and where companies still get it wrong. We explore how starting out as a full-time developer shaped Fred's approach to advocacy, grounding his work in real-world frustration rather than abstract messaging. He reflects on earning trust during challenging periods, including advocating for open source during an era when some communities viewed large tech companies with deep skepticism. Along the way, Fred shares how studying Buddhist philosophy has influenced how he shows up for developers today, helping him keep ego in check and focus on service rather than status. The conversation also lifts the curtain on rich text editing, a capability most users take for granted but one that hides deep technical complexity. Fred explains why building a modern editing experience involves far more than formatting text, touching on collaboration, accessibility, security, and the growing expectations around AI-assisted workflows. It is a reminder that some of the most familiar parts of the web are also among the hardest to build well. We then turn to developer relations itself, a role that is often misunderstood or measured through the wrong lens. 
Fred shares why DevRel should never be treated as a short-term sales function, how trust and community take time, and why authenticity matters more than volume. From open source responsibility to personal branding for developers, including lessons from his book published with Apress, Fred offers grounded advice on visibility, communication, and staying human in an increasingly automated industry. As the episode closes, we reflect on burnout, boundaries, and inclusion, and why healthier communities lead to better products. For anyone building developer tools, managing technical communities, or trying to grow a career without losing themselves in the process, this conversation leaves a simple question hanging in the air: how do we build technology that supports people without forgetting the people behind the code?
Useful Links
Connect with Frédéric Harper
Learn More About TinyMCE
Tech Talks Daily is sponsored by Denodo


