Experiencing Data w/ Brian T. O’Neill (UX for AI Products, Analytics SAAS and Data Product Management)

Brian T. O’Neill from Designing for Analytics
If you’re a leader tasked with generating business and org. value through ML/AI and analytics, you’ve probably struggled with low user adoption. Making the tech gets easier, but getting users to use it, and buyers to buy it, remains difficult—but you’ve heard a “data product” approach can help. Can it?

Available Episodes

5 of 100
  • 163 - It’s Not a Math Problem: How to Quantify the Value of Your Enterprise Data Products or Your Data Product Management Function
    I keep hearing that data product, data strategy, and UX teams often struggle to quantify the value of their work. Whether it’s the team as a whole or a specific data product initiative, the underlying problem is the same: your contribution is indirect, so it’s harder to measure. Even worse, your stakeholders want to know if your work is creating impact and value, but because you can’t easily put numbers on it, valuation spirals into a messy problem.
    The messy part of this valuation problem is what today’s episode is all about—not math! Value is largely subjective, not objective, and I think this is partly why analytical teams may struggle with this. To improve at how you estimate the value of your data products, you need to leverage other skills—and stop approaching this as a math problem.
    As a consulting product designer, estimating value when it’s indirect is something I’ve dealt with my entire career. It’s not a skill learned overnight, and it’s one you will need to keep developing over time—but the basic concepts are simple. I hope you’ll find some value in applying these along with your other frameworks and tools.
    Highlights/ Skip to:
      • Value is subjective, not objective (5:01)
      • Measurable does not necessarily mean valuable (6:36)
      • Businesses are made up of humans. Most B2B stakeholders aren’t spending their own money when making business decisions—what does that mean for your work? (9:30)
      • Quantifying a data product’s value starts with understanding what is worth measuring in the eye of the beholder(s)—not with math calculations (13:44)
      • The more difficult it is to show the value of your product (or team) in numbers, the lower that value is to the stakeholder—initially (16:46)
      • By simply helping a stakeholder think through how value should be calculated on a data product, you’re likely already providing additional value (18:02)
      • Focus on expressing estimated value as a range rather than a single number (19:36)
      • Measurement of anything requires that we can observe the phenomenon first—but many stakeholders won’t be able to cite these phenomena without [your!] help (22:16)
      • When you are measuring quantitative aspects of value, remember that measurement is not the same as accuracy (precision)—and the precision game can become a trap (25:37)
      • How to measure anything—and why estimates often trump accuracy (31:19)
      • Why you may need to steer the conversation away from ROI calculations in the short term (35:00)
    Quotes from Today’s Episode
      • “Even when you can easily assign a dollar value to the data product you’re building, that does not necessarily reflect what your stakeholder actually feels about it—or your team’s contribution. So, why do they keep asking you to quantify the value of your work? By actually understanding what a stakeholder needs to observe in order to know progress has been made on their initiative or data product, you will be positioned to deliver results they actually care about. While most of the time you should be able to show some obvious economic value in the work you’re doing, you may be getting hounded about this because you’re not meeting the often unstated qualitative goals. If you can surface the qualitative goals of your stakeholder, then the perception of the value of your team and its work goes up, and you’ll spend less time trying to measure an indirect contribution in quant terms that only has a subjectively right answer.” (6:50)
      • “The more difficult it is for you to show the monetary value of your data product (or team), the lower that value likely is to the stakeholder. This does not mean the value of your work is ‘low.’ It means it’s perceived as low because it cannot be easily quantified in a way that is observable to the person whose judgment matters. By understanding the personal motivations and interests of your stakeholders, you can begin to collaboratively figure out what the correct success metrics should be—and how they’d be measured. By simply beginning to ask and uncover what they’re trying to measure, you can start to increase your contributions’ perceived value.” (17:01)
      • “Think about expressing ‘indirect value’ as a range, not a precise single value. It’s much easier to refine your estimate (if necessary) once a range has been defined, and you only need to get precise enough for your stakeholder to make a decision with the information. How much time should you spend refining your measurement of the value? Potentially little to none—if the ‘better math’ isn’t going to change anyone’s mind or decision. Spending more time measuring a data product’s value more accurately takes you away from doing actual product work—and if there isn’t much obvious value in the work, maybe the work—not the measurement of the work—needs to change.” (19:49)
      • “Smart leaders know that deriving a simple calculation of indirect contributions is complex—otherwise, the topic wouldn’t keep coming up. There is a ‘why’ behind why they’re asking, and when you understand the ‘why,’ you’ll be better positioned to deliver the value they actually seek, using valuation measurements that are ‘just enough’ in their precision. What do you think it says to a stakeholder if you’re spending an inordinate amount of time simply trying to calculate and explain the value of your data product?” (23:22)
      • “For years, many organizations have invested in things that don’t always have a short-term ROI. They know that ROI takes time, and they can’t really measure what it’s worth along the way. Examples include investments in company culture, innovation, brand reputation, and many others. If you’re constantly playing defense and having to justify your existence or methods by quantifying the financial value of your data products (or data product management team, or UX team, or any other indirect contributor/contribution), then either your work truly does lack value, or you haven’t surfaced what the actual success metrics and outcomes are—in the eyes of the stakeholder. As such, the perceived value is ‘low’ or opaque. They might be looking for a hard number to assign to it because they’re not seeing any of the other forms of value they care about that would indicate positive progress. It’s easier to write [you] a large check for a big, innovative, unproven initiative if your stakeholders know what you and your team can accomplish with a small check.” (35:16)
    Links
      • Experiencing Data: Episode 80 with Doug Hubbard
    --------  
    41:41
  • 162 - Beyond UI: Designing User Experiences for LLM and GenAI-Based Products
    I’m doing things a bit differently for this episode of Experiencing Data. For the first time on the show, I’m hosting a panel discussion. I’m joined by Thomson Reuters’s Simon Landry, Sumo Logic’s Greg Nudelman, and Google’s Paz Perez to chat about how we design user experiences that improve people’s lives and create business impact when we expose LLM capabilities to our users.
    With the rise of AI, there are a lot of opportunities for innovation, but there are also many challenges—and frankly, my feeling is that a lot of these capabilities right now are making things worse for users, not better. We look at a range of topics, such as the pros and cons of AI-first thinking, collaboration between UX designers and ML engineers, and the necessity of diversifying design teams when integrating AI and LLMs into B2B products.
    Highlights/ Skip to:
      • Thoughts on the current state of LLM implementations and their impact on user experience (1:51)
      • The problems that can come with the “AI-first” design philosophy (7:58)
      • Should a company’s design resources go toward AI development? (17:20)
      • How designers can navigate “fuzzy experiences” (21:28)
      • Why you need to narrow and clearly define the problems you’re trying to solve when building LLM products (27:35)
      • Why diversity matters in your design and research teams when building LLMs (31:56)
      • Where you can find more from Paz, Greg, and Simon (40:43)
    Quotes from Today’s Episode
      • “[AI] will connect the dots. It will argue pro, it will argue against, it will create evidence supporting and refuting, so it’s really up to us to kind of drive this. If we understand the capabilities, then it is an almost limitless field of possibility. And these things are taught, and it’s a fundamentally different approach to how we build user interfaces. They’re no longer completely deterministic. They’re also extremely personalized to the point where it’s ridiculous.” - Greg Nudelman (12:47)
      • “To put an LLM into a product means that there’s a non-zero chance your user is going to have a [negative] experience and no longer be your customer. That is a giant reputational risk, and there’s also a financial cost associated with running these models. I think we need to take more of a service design lens when it comes to [designing our products with AI] and ask what is the thing somebody wants to do… not on my website, but in their lives? What brings them to my [product]? How can I imagine a different world that leverages these capabilities to help them do their job? Because what [designers] are competing against is [a customer workflow] that probably worked well enough.” - Simon Landry (15:41)
      • “When we go general availability (GA) with a product, that traditionally means [designers] have done all the research, got everything perfect, and it’s all great, right? Today, GA is a starting gun. We don’t know [if the product is working] unless we [seek out user feedback]. A massive research method is needed. [We need qualitative research] like sitting down with the customer and watching them use the product to really understand what is happening[…] but you also need to collect data. What are they typing in? What are they getting back? Is somebody who’s typing in this type of question always having a short interaction? Let’s dig into it with rapid, iterative testing and evaluation, so that we can update our model and then move forward. Launching a product these days means the starting guns have been fired. Put the research to work to figure out the next step.” - Greg Nudelman (23:29)
      • “I think that having diversity on your design team (i.e. gender, level of experience, etc.) is critical. We’ve already seen some terrible outcomes. Multiple examples where an LLM is crafting horrendous emails, introductions, and so on. This is exactly why UXers need to get involved [with building LLMs]. This is why diversity in UX and on your tech team that deals with AI is so valuable. Number one piece of advice: get some researchers. Number two: make sure your team is diverse.” - Greg Nudelman (32:39)
      • “It’s extremely important to have UX talks with researchers, content designers, and data teams. It’s important to understand what a user is trying to do, the context [of their decisions], and the intention. [Designers] need to help [the data team] understand the types of data and prompts being used to train models. Those things are better when they’re written and thought of by [designers] who understand where the user is coming from. [Design teams working with data teams] are getting much better results than the [teams] that are working in a vacuum.” - Paz Perez (35:19)
    Links
      • Milly Barker’s LinkedIn post
      • Greg Nudelman’s Value Matrix article
      • Greg Nudelman’s website
      • Paz Perez on Medium
      • Paz Perez on LinkedIn
      • Simon Landry on LinkedIn
    --------  
    42:07
  • 161 - Designing and Selling Enterprise AI Products [Worth Paying For]
    With GenAI and LLMs comes great potential to delight and damage customer relationships—both during the sale and in the UI/UX. However, are B2B AI product teams actually producing real outcomes, on the business side and the UX side, such that customers find these products easy to buy, trustworthy, and indispensable?
    What is changing with customer problems as a result of LLM and GenAI technologies becoming more readily available to implement into B2B software? Anything?
    Is your current product or feature development being driven by the fact that you might now be able to solve it with AI? The “AI-first” team sounds like it’s cutting edge, but is that really determining what a customer will actually buy from you?
    Today I want to talk to you about the interplay of GenAI, customer trust (both user and buyer trust), and the role of UX in products using probabilistic technology.
    These thoughts are based on my own perceptions as a “user” of AI “solutions” (quotes intentional!), conversations with prospects and clients at my company (Designing for Analytics), as well as the bright minds I mentor over at the MIT Sandbox innovation fund. I also wrote an article about this subject if you’d rather read an abridged version of my thoughts.
    Highlights/ Skip to:
      • AI and LLM-Powered Products Do Not Turn Customer Problems into “Now” and “Expensive” Problems (4:03)
      • Trust and Transparency in the Sale and the Product UX: Handling LLM Hallucinations (Confabulations) and Designing for Model Interpretability (9:44)
      • Selling AI Products to Customers Who Aren’t Users (13:28)
      • How LLM Hallucinations and Model Interpretability Impact User Trust of Your Product (16:10)
      • Probabilistic UIs and LLMs Don’t Negate the Need to Design for Outcomes (22:48)
      • How AI Changes (or Doesn’t) Our Benchmark Use Cases and UX Outcomes (28:41)
      • Closing Thoughts (32:36)
    Quotes from Today’s Episode
      • “Putting AI or GenAI into a product does not change the urgency or the depth of a particular customer problem; it just changes the solution space. Technology shifts in the last ten years have enabled founders to come up with all sorts of novel ways to leverage traditional machine learning, symbolic AI, and LLMs to create new products and disrupt established products; it would be foolish to ignore these developments as a product leader. But all this technology does is change the possible solutions you can create. It does not change your customer’s situation, problem, or pain—in depth, severity, or frequency. In fact, it might actually cause some new problems. I feel like most teams spend a lot more time living in the solution space than they do in the problem space. Fall in love with the problem, and love that problem regardless of how the solution space may continue to change.” (4:51)
      • “Narrowly targeted, specialized AI products are going to beat solutions trying to solve problems for multiple buyers and customers. If you’re building a narrow, specific product for a narrow, specific audience, one of the things you have on your side is a solution focused on a specific domain used by people who have specific domain experience. You may not need a trillion-parameter LLM to provide significant value to your customer. I believe AI products that have a more specific focus and address a very narrow ICP are more likely to succeed than those trying to serve too many use cases—especially when GenAI is being leveraged to deliver the value. I think this can be true even for platform products. Narrowing the audience you want to serve also narrows the scope of the product, which in turn should increase the value that you bring to that audience—in part because you probably will have fewer trust, usability, and utility problems resulting from trying to leverage a model for a wide range of use cases.” (17:18)
      • “Probabilistic UIs and LLMs are going to create big problems for product teams, particularly if they lack a set of guiding benchmark use cases. I talk a lot about benchmark use cases as a core design principle for data-rich enterprise products. Why? Because a lot of B2B and enterprise products fall into the game of ‘adding more stuff over time.’ ‘Add it so you can sell it.’ As products and software companies begin to mature, you start having product owners and PMs attached to specific technologies or parts of a product. Figuring out how to improve the customer’s experience over time against the most critical problems and needs they have is a harder game to play than simply adding more stuff—especially if you have no benchmark use cases to hold you accountable. It’s hard to make the product indispensable if it’s trying to do 100 things for 100 people.” (22:48)
      • “Product is a hard game, and design and UX are by far not the only aspects of product that we need to get right. A lot of designers don’t understand this, and they think if they just nail design and UX, then everything else solves itself. The reason the design and experience part is hard is that it’s tied to behavior change—especially if you are ‘disrupting’ an industry, incumbent tool, application, or product. You are in the behavior-change game, and it’s really hard to get it right. But when you get it right, it can be really amazing and transformative.” (28:01)
      • “If your AI product is trying to do a wide variety of things for a wide variety of personas, it’s going to be harder to determine appropriate benchmarks and UX outcomes to measure and design against. Given LLM hallucinations, the increased problem of trust, model drift problems, etc., your AI product has to actually innovate in a way that is both meaningful and observable to the customer. It doesn’t matter what your AI is trying to ‘fix.’ If they can’t see what the benefit is to them personally, it doesn’t really matter if technically you’ve done something in a new and novel way. They’re just not going to care, because the question of ‘what’s in it for me?’ is always sitting in the back of their brain, whether it’s stated out loud or not.” (29:32)
    Links
      • Designing for Analytics mailing list
    --------  
    34:00
  • 160 - Leading Product Through a Merger/Acquisition: Lessons from The Predictive Index’s CPO Adam Berke
    Today, I’m chatting with Adam Berke, the Chief Product Officer at The Predictive Index. For 70 years, The Predictive Index has helped customers hire the right employees, and after the merger with Charma, its products now also nurture the employee/manager relationship. This is right up Adam’s alley: he co-founded Charma, the employee and workflow performance management software company, before the two organizations merged in 2023.
    You’ll hear Adam talk about the first-time challenges (and successes) that come with integrating two products and two product teams, and why squashing ambiguity by overindexing on communication (i.e., coming prepared with new org charts ASAP) is essential during the process.
    Integrating behavioral science into the world of data is what has allowed The Predictive Index to thrive since the 1950s. While this is the company’s main selling point, Adam explains how the science-forward approach can still create some disagreements—and learning opportunities—with The Predictive Index’s legacy customers.
    Highlights/ Skip to:
      • What The Predictive Index is and how its product team conducts its work (1:24)
      • Why Charma merged with The Predictive Index (5:11)
      • The challenges Adam has faced as a CPO since the Charma/Predictive Index merger (9:21)
      • How The Predictive Index has utilized behavioral science to remove the guesswork from hiring (14:22)
      • The makeup of the product team that designs and delivers The Predictive Index’s products (20:24)
      • Navigating clashes between changing science and The Predictive Index’s legacy customers (22:37)
      • How The Predictive Index analyzes the quality of its products with multiple user data metrics (27:21)
      • What Adam would do differently if he had to redo the merger (37:52)
      • Where you can find more from Adam and The Predictive Index (41:22)
    Quotes from Today’s Episode
      • “Acquisitions are complicated. Outside of a few select companies, there are very few that have mergers and acquisitions as a repeatable discipline. More often than not, neither [company in the merger] has an established playbook for how to do this. You’re [acquiring a company] because of its product, team, or maybe even one feature. You have different theories on how the integration might look, but experiencing it firsthand is a whole different thing. My initial role didn’t exist in [The Predictive Index] before. The rest of the whole PI organization knew how to get their work done before this, and now there’s this new executive. There’s just tons of [questions and confusion] if you don’t go in assuming good faith and a willingness to work through the bumps. It’s going to get messy.” - Adam Berke (9:41)
      • “We integrated the teams and relaunched the product. Charma became [a part of the product called] PI Perform, and right away there was re-skinning, redesign, and some back-end architecture that needed to happen to make it its own module. From a product perspective, we’re trying to deliver [Charma’s] unique value prop. That’s when we can start [figuring out how to] infuse PI’s behavioral science into these workflows. We have this foundation. We got the thing organized. We got the teams organized. We were 12 people when we were acquired… and here we are a year later. 150+ new customers have been added to PI Perform, because it’s accelerating now that we’re figuring out the product.” - Adam Berke (12:18)
      • “Our product team has the roles that you would expect: a PM, a researcher, a UX designer, and then one atypical role: a PhD behavioral scientist. [Our product already had] suggested topics and templates [for manager/IC one-on-one meetings], but now we want to make those templates and suggested topics more dynamic. There might be different questions to draw out a better discussion, and our behavioral scientists help us determine [those questions]… [Our behavioral scientists] look at the science, other research, and calibrate [the one-on-one questions] before we implement them into the product.” - Adam Berke (21:04)
      • “We’ve adapted the technology and science over time as they move forward. We want to update the product with the most recent science, but there are customers who have used this product in a certain way for decades in some cases. Our desire is to follow the science… but you can’t necessarily stop people from using the stuff in a way that they used it 20 years ago. We sometimes end up with disagreements [with customers over product changes based on scientific findings], and those are tricky conversations. But even in that debate… it comes down to all the best practices you would follow in product development in general—listening to your customers, asking that additional ‘why’ question, and trying to get to root causes.” - Adam Berke (23:36)
      • “We’re doing an upgrade to our platform right now, trying to figure out how to manage user permissions in the new version of the product. The way that we did it in the old version had a lot of problems associated [with it]… and we put out a survey. ‘Hey, do you use this to do X?’ We got hundreds of responses and found that half of them were not using it for the reason that we thought they were. At first, we thought thousands of people were going to have deep, deep sensitivities to tweaks in how this works, and now we realize that it might be half that, at best. A simple one-question survey, asked about the right problem in the right way, can help you avoid a lot of unnecessary thrashing on a product problem that might not have even existed in the first place.” - Adam Berke (35:22)
    Links Referenced
      • The Predictive Index: https://www.predictiveindex.com/
      • LinkedIn: https://www.linkedin.com/in/adamberke/
    --------  
    42:10
  • 159 - Uncorking Customer Insights: How Data Products Revealed Hidden Gems in Liquor & Hospitality Retail
    Today, I’m talking to Andy Sutton, GM of Data and AI at Endeavour Group, Australia’s largest liquor and hospitality company. In this episode, Andy—who is also a member of the Data Product Leadership Community (DPLC)—shares his journey from traditional, functional analytics to a product-led approach that drives their mission to leverage data and personalization to build the “Spotify for wines.” This shift has greatly transformed how Endeavour’s digital and data teams work together, and Andy explains how their advanced analytics work has paid off in terms of the company’s value and profitability.
    You’ll learn about the often overlooked importance of relationships in a data-driven world, and how Andy sees the importance of understanding how users do their jobs in the wild (with and without your product(s) in hand). Earlier this year, Andy also gave the DPLC community a deeper look at how they brew data products at EDG, and that recording is available to our members in the archive.
    We covered:
      • What it was like at EDG before Andy started adopting a producty approach to data products, and how things have now changed (1:52)
      • The moment that caused Andy to change how his team was building analytics solutions (3:42)
      • The financial value Andy’s scaling team has added as a result of their data product work (5:19)
      • How Andy and Endeavour use personalization to help build “the Spotify of wine” (9:15)
      • What the team under Andy required in order to make the transition to being product-led (10:27)
      • The successes seen by Endeavour through the digital and data teams’ working relationship (14:04)
      • What data product management looks like for Andy’s team (18:45)
      • How Andy and his team find solutions for bridging the adoption gap (20:53)
      • The importance of exposure time to end users for the adoption of a data product (23:43)
      • How talking to the pub staff at EDG’s bars and restaurants helps his team build better data products (27:04)
      • What Andy loves about working for Endeavour Group (32:25)
      • What Andy would change if he could rewind back to 2022 and do it all over (34:55)
      • Final thoughts (38:25)
    Quotes from Today’s Episode
      • “I think the biggest thing is the value we unlock in terms of incremental dollars, right? I’ve not worked in an analytics team before where we’ve been able to deliver a measurable value… So, we’re actually—in theory—becoming a profit center for the organization, not just a cost center. And so, there’s kind of one key metric. The second one: we do measure the voice of the team and how engaged our team is, and that’s on an upward trend since we moved to the new operating model, too. We also measure [a type of] ‘voice of partner’ score [and] get something like a 4.1 out of 5 on that scale. Those are probably the three biggest ones: we’re putting value in, we’re delivering products our internal team wants to use, and we are building an enthused team at the same time.” - Andy Sutton (16:18)
      • “You can put an [unfinished] product in front of an end customer, and they will give you quality feedback that you can then iterate on quickly. You can do that with an internal team, but you’ll lose credibility. Internal teams hold their analytics colleagues to a higher standard than external customers. We’re trying to change how people do their roles. People feel very passionate about the roles they do, and how they do them, and what they bring to those roles. We’re trying to build some of that into products. It requires probably more design consideration than I’d anticipated, and we’re still bringing in more designers to help us move closer to the start line.” - Andy Sutton (19:25)
      • “[Customer research] is becoming critical in terms of the products we’re building. You’re building a product, a set of products, or a process for an operations team. In our context, an operations team can mean a team of people who run a pub. It’s not just about convincing me, my product managers, or my data scientists that you need research; we want to take some of the resources out of running that bar for a period of time because we want to spend time with [the pub staff] watching, understanding, and researching. We’ve learned some of these things along the way… we’ve earned the trust, we’ve earned that seat at the table, and so we can have those conversations. It’s not trivial to get people to say, ‘I’ll give you a day-long workshop, or give you my team off of running a restaurant and a bar for the day, so that they can spend time with you and you can understand our processes.’” - Andy Sutton (24:42)
      • “I think what is very particular to pubs is the importance of the interaction between the customer and the person serving the customer. [Pubs] are about the connections between the staff and the customer, and you don’t get any of that if you’re just looking at things from a pure data perspective… You don’t see the [relationships between pub staff and customer] in the [data], so how do you capture some of that in your product? It’s about understanding the context of the data, not just the data itself.” - Andy Sutton (28:15)
      • “Every winery, every wine grower, every wine has got a story. These conversations [and relationships] are almost natural in our business. Our CEO started work on the shop floor in one of our stores 30 years ago. That kind of relationship stuff percolates through the organization. Having these conversations around the customer and internal stakeholders in the context of data feels a lot easier, because storytelling and relationships are the way we get things done. An analytics team may get frustrated with people who can’t understand data, but it’s [the analytics team’s job] to help bridge that gap.” - Andy Sutton (32:34)
    Links Referenced
      • LinkedIn: https://www.linkedin.com/in/andysutton/
      • Endeavour Group: https://www.endeavourgroup.com.au/
      • Data Product Leadership Community: https://designingforanalytics.com/community
    --------  
    40:47

More Technology podcasts

About Experiencing Data w/ Brian T. O’Neill (UX for AI Products, Analytics SAAS and Data Product Management)

If you’re a leader tasked with generating business and org. value through ML/AI and analytics, you’ve probably struggled with low user adoption. Making the tech gets easier, but getting users to use it, and buyers to buy it, remains difficult—but you’ve heard a “data product” approach can help. Can it? My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I offer you a consulting designer’s perspective on why creating ML and analytics outputs isn’t enough to create business and UX outcomes. How can UX design and product management help you create innovative ML/AI and analytical data products? What exactly are data products—and how can data product management help you increase user adoption of ML/analytics—so that stakeholders can finally see the business value of your data? Every 2 weeks, I answer these questions via solo episodes and interviews with innovative chief data officers, data product management leaders, and top UX professionals. Hashtag: #ExperiencingData. PODCAST HOMEPAGE: Get 1-page summaries, text transcripts, and join my Insights mailing list: https://designingforanalytics.com/ed ABOUT THE HOST, BRIAN T. O’NEILL: https://designingforanalytics.com/bio/
Podcast website
