Joint DSI Submission to UN Advisory Body on AI

October 23, 2023 | ESMT Berlin
Three DSI researchers responded to the call for papers on global AI governance issued by the Office of the UN Secretary-General's Envoy on Technology ahead of the first meeting of the Multistakeholder Advisory Body on AI.

In their submission titled "Objective assessment of reasonable machines? Role and limitations of risk management in the European AI regulation efforts", DSI researchers Nils Brinker, Richard Skalt and Helene Pleil provide a critique of AI regulatory policy relying on risk management processes.

Delving deeper into the EU AI Act, the researchers argue that although the Act does not formulate a fully articulated and formalized risk management process for generally prohibited AI systems, its requirements for the use of otherwise prohibited AI systems, namely real-time biometric identification in public spaces, do contain criteria for weighing the affected rights and interests as well as criteria for minimizing the impact of their use (see Art. 5 (2) AI-Act-P). As all of the listed potential use cases involve the use of AI by authorities, the means to evaluate its proportionality are (or at least resemble) methods well established in public law.

While the use of some AI systems may be allowed for specific use cases deemed acceptable, this approach neglects the character of AI as a universal tool. Once the technical infrastructure is in place, its use for other purposes becomes a legal question, not one of technical possibility. Technically, there is no difference between using biometric identification to surveil political dissidents and using it to search for missing children. A technical implementation with only legal barriers against human-rights-violating use cases therefore requires a functioning constitutional state.

While they see the merits and the logic underpinning regulatory approaches based on risk management, the authors stress that risk management is not a process with a clear and deterministic outcome. Each step in the process leaves room for error, interpretation, and individual judgment and preference. This begins with the risk identification step, which enumerates the potential harms of an AI system.

Furthermore, they argue that while risk management generally requires an objective inventory, the subjective view of the entity performing the process cannot be eliminated. The same holds for the subsequent steps: risk assessment and the implementation of mitigation measures. The AI Act attempts to minimize such subjective factors by explicitly stating the factors to be considered in risk management (Art. 9 AI-Act-P).

This subjectivity is especially pronounced for intangible damages, which cannot be definitively quantified. Qualitative assessments should be as objective as possible, but they cannot be mathematically precise; they rest on verbal arguments. Even where these verbal arguments are assigned numerical values, the assignment is based on subjective associations.

The result of a risk assessment can therefore be a matter not of facts but of rhetoric. While acknowledging that the subjectivity of the risk management process is not a new phenomenon, the authors stress that it must be taken into account when risk management is used as a regulatory tool. It helps to return to the roots of risk management: not a definitive tool for algorithmically calculating future steps, but (only) a means of facilitating decision-making.
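To make the subjectivity argument concrete, here is a minimal, purely hypothetical sketch in Python (not drawn from the submission; all names, scales, and thresholds are invented for illustration): two assessors record identical verbal judgments about the same AI system, but because each maps the shared vocabulary to numbers differently, they reach different risk classifications.

```python
# Hypothetical illustration (not from the submission): identical verbal
# judgments diverge once subjective numeric mappings are applied.

# Shared verbal assessment of an imagined biometric identification system.
verbal_assessment = {"severity": "high", "likelihood": "possible"}

# Each assessor's subjective mapping of the same vocabulary to numbers.
assessor_scales = {
    "Assessor A": {"severity": {"low": 1, "medium": 2, "high": 3},
                   "likelihood": {"unlikely": 1, "possible": 2, "likely": 3}},
    "Assessor B": {"severity": {"low": 1, "medium": 3, "high": 5},
                   "likelihood": {"unlikely": 1, "possible": 3, "likely": 5}},
}

def classify(score: int) -> str:
    """Map a risk score to a verdict; the threshold is itself a subjective choice."""
    return "unacceptable" if score >= 9 else "acceptable with mitigation"

for name, scales in assessor_scales.items():
    # Risk score = severity x likelihood, under this assessor's own scale.
    score = (scales["severity"][verbal_assessment["severity"]]
             * scales["likelihood"][verbal_assessment["likelihood"]])
    print(f"{name}: score {score} -> {classify(score)}")

# Output:
# Assessor A: score 6 -> acceptable with mitigation
# Assessor B: score 15 -> unacceptable
```

The verbal findings are identical; the verdicts diverge purely through the chosen numeric mapping and acceptability threshold, which is exactly the kind of subjective latitude the authors describe.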

You can download the submission below.