Faithful or Traitor? The Right of Explanation in a Generative AI World: CIPIL Evening Seminar
Speaker: Professor Lilian Edwards, Emerita Professor of Law, Innovation & Society, Newcastle Law School

Biography: Lilian Edwards is a leading academic in the field of Internet law. She has taught information technology law, e-commerce law, privacy law and Internet law at undergraduate and postgraduate level since 1996 and has been involved with law and artificial intelligence (AI) since 1985. She is now Emerita Professor at Newcastle and Honorary Professor at CREATe, University of Glasgow, which she helped co-found. She is the editor and major author of Law, Policy and the Internet, one of the leading textbooks in the field of Internet law (Hart, 2018, new edition forthcoming with Urquhart and Goanta, 2026). She won the Future of Privacy Forum award in 2019 for best paper ("Slave to the Algorithm", with Michael Veale) and the award for best non-technical paper at FAccT in 2020, on automated hiring. In 2004 she won the Barbara Wellberry Memorial Prize for work on online privacy in which she invented the notion of data trusts, a concept that was proposed in EU legislation ten years later. She is a former fellow of the Alan Turing Institute on Law and AI, and of the Institute for the Future of Work. Edwards has consulted for, inter alia, the EU Commission, the OECD, and WIPO.

Abstract: The right to an explanation is having another moment. Well after the heyday of 2016-2018, when scholars tussled over whether the GDPR (in either art 22 or arts 13-15) conferred a right to explanation, the CJEU case of Dun & Bradstreet has finally confirmed its existence, and the Platform Work Directive has wholesale revamped art 22 in its Algorithmic Management chapter. Most recently, the EU AI Act added its own Frankenstein-like right to an explanation (art 86) of AI systems. None of these provisions, however, pins down what the essence of the explanation should be, given the many notions that can be invoked here: a faithful description of source code or training data; an account that enables challenge or contestation; a "plausible" description that may be appealing in a behaviouralist sense but might actually be misleading when operationalised, e.g. to generate a medical course of treatment. Agarwal et al. argue that the tendency of UI designers, regulators and judges alike to lean towards the plausibility end may be unsuited to large language models, which represent far more of a black box in size and optimisation than conventional machine learning, and which are trained to present encouraging but not always accurate accounts of their workings. Yet this is also the direction of travel taken by the CJEU in Dun & Bradstreet, above. This paper argues that explanations of large model outputs may present novel challenges needing thoughtful legal mandates.

For more information (and to download slides) see: https://www.cipil.law.cam.ac.uk/seminars-and-events/cipil-seminars