We usually see AI deployed as a sub-system or process-optimisation tool within an aggregated collection of other IT and OT systems. Some of these systems, particularly in industrial settings, are intended to remain operational for several decades. The decision to deploy (or not deploy) AI therefore needs to be informed not only by state-of-the-art security requirements but also by the system's capacity for long-term security evolution. There have been efforts to advance the state of the art in privacy engineering with respect to legal requirements stemming from the General Data Protection Regulation (GDPR), for example in projects such as PDP4E and PRiSE. However, many of the existing approaches rest on the premise that commercial and consumer software typically has a short lifecycle. In other words, the evolution of a system, or of an AI sub-system for that matter, and its compliance with legal regulation over a long time span are not prioritised in the development process. Indeed, the GDPR requires in Art. 35(11) that “the controller shall carry out a review to assess if processing is performed in accordance with the data protection impact assessment at least when there is a change of the risk represented by processing operations”. How can this be implemented in practice, though, and how can it be reconciled with the fact that planning for ‘long-term’ data processing may be at odds with the principle of storage limitation?
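
To make the Art. 35 review obligation a little more concrete, the following is a minimal sketch, in Python, of one way a long-lived system might record when its data protection impact assessment falls due for re-review. All names here (RiskProfile, DpiaRecord, the one-year interval) are hypothetical illustrations, not terms from the GDPR or from any of the projects mentioned above; this is a thought experiment, not a compliance mechanism.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical risk factors for a long-lived AI sub-system. Any change to
# this profile is treated here as "a change of the risk represented by
# processing operations" (GDPR Art. 35(11)) and triggers a DPIA review.
@dataclass(frozen=True)
class RiskProfile:
    data_categories: frozenset[str]
    model_version: str
    deployment_context: str

@dataclass
class DpiaRecord:
    last_review: date
    reviewed_profile: RiskProfile
    # Periodic re-review interval: a self-imposed policy choice,
    # not a period mandated by the GDPR.
    max_review_interval: timedelta = timedelta(days=365)

    def review_due(self, current_profile: RiskProfile, today: date) -> bool:
        # Re-review when the risk profile has changed since the last
        # assessment, or when the periodic interval has elapsed.
        return (current_profile != self.reviewed_profile
                or today - self.last_review > self.max_review_interval)

if __name__ == "__main__":
    baseline = RiskProfile(frozenset({"telemetry"}), "v1.0",
                           "plant-floor anomaly detection")
    record = DpiaRecord(last_review=date(2024, 1, 15),
                        reviewed_profile=baseline)

    # A model upgrade that starts processing operator IDs changes the
    # risk profile, so a DPIA review falls due.
    upgraded = RiskProfile(frozenset({"telemetry", "operator IDs"}), "v2.0",
                           "plant-floor anomaly detection")
    print(record.review_due(upgraded, date(2024, 6, 1)))  # True
```

Even a toy model like this exposes the tension named above: versioned risk profiles must be retained for decades to make reviews auditable, which has to be squared with the storage-limitation principle.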

This session will discuss these questions of the long-term security evolution of AI from the angle of EU data protection law.
