Artificial intelligence (AI) applications are developing at a rapid pace, and as they do, they are influencing ever more areas of our lives. Once they reach the market, people often no longer think of them as AI applications at all. AI has long since taken root in our personal lives in the form of search engines and navigation systems, Siri and Alexa – in other words, tools that are sometimes helpful and sometimes simply entertaining.
To some of us, they are full of promise; to others, a dystopian nightmare. Predictions about AI engender at least as much scepticism, or even fear, as they do euphoria. AI offers ways of tackling many challenges of our times quickly and efficiently, benefiting both people and the environment. But let’s not fool ourselves: AI applications also involve the risk of unintentional bias – or intentional abuse. While AI may be regarded as non-critical when it recommends which product to buy or which music to listen to, the decisions it takes in autonomous vehicles or on medical issues are of far greater relevance and involve far more serious consequences.
At present, it is unclear where the journey will take us. True, that is hardly helpful when it comes to drawing up a standard – but we will also get nowhere by doing nothing. What we can, and intend to, do is draw boundaries and provide guidance for the use of AI in ways that benefit both people and the environment. And people must define frameworks for innovative technologies in full awareness of the need to maintain a human-centred perspective.
The importance of such a framework is borne out by more than 155 years of experience, throughout which TÜV SÜD has accompanied every industrial revolution and thousands of technical innovations. Only innovations that are safe and benefit people evolve into market-relevant technological progress. And to unfold their full potential, AI applications must be accepted and widely used by people.
Minimising risks, enabling opportunities, protecting fundamental rights
While some AI applications are safe and straightforward to use, other areas call for far greater sensitivity. Clear-cut legal and normative requirements for AI applications are needed wherever people’s health and safety or fundamental rights are at risk. Given this, we advocate a five-level risk-based system, rather than only the two risk classes (high and low) proposed by the EU in its publication of 21 April 2021. The most expedient solution is a risk-based approach that assesses AI applications on the basis of their potential consequences and their probability of occurrence.
And for high-risk applications at least, compliance with normative and regulatory requirements must be verified by impartial third parties if we are to build a stronger, vendor-independent safety net. This, in turn, requires a cohesive and clear body of laws and standards. Building on the EU’s Global Approach and New Legislative Framework (NLF), which already govern higher-level regulatory aspects, specific addenda will ensure meaningful regulation tailored to the area in which the AI applications will be used.
Our demands and the demands of the market players are the same: in a study conducted by VdTÜV, the umbrella organisation of the German TÜV organisations (only in German: VdTÜV-Unternehmer-Studie 2020), 90% of the companies surveyed called for legal regulations to provide clarification of liability issues, and 87% of respondents think that AI applications ought to be regulated in accordance with their risk.
Consumers, too, prefer verifiable AI applications: 85% think that only AI products that have been tested and found to be safe by an impartial third party should be launched on the market, and 78% of those surveyed believe the government should regulate AI through laws and regulatory acts. This is the result of a representative survey by VdTÜV (only in German: VdTÜV-Verbraucher-Studie).
Regulatory systems are lagging behind
To date, these requirements have not been given adequate consideration in the existing regulatory framework. This is not surprising: in most cases, regulation follows technological innovation only with some delay. However, given the fast pace and significance of AI applications, we will need to map out the fundamental course very soon. This covers technological and regulatory frameworks as well as legal liability issues and ethical requirements for AI applications.
A lot remains to be done. Safety standards must be drawn up and ethical concerns clarified; test scenarios need to be developed and organisations designated to implement them. As a TIC organisation, we are likewise called upon to advance our practices and methods. To cater to these needs, we have been working with VdTÜV to establish an AI Lab, which has already started to address these issues.
Regulation as a catalyst and competitive edge
Clear and reliable regulatory oversight of AI applications, applied with a sense of proportion, will generate a competitive edge for Germany and Europe. It will provide companies with a stable and certain regulatory framework for their business operations. Furthermore, a consistent network of standards and legal regulations can serve as a catalyst: by generating transparency and inspiring trust in these new technologies, it will speed up their market penetration. AI will only be able to unfold its full potential if people can trust that AI applications will not disrupt our societal and economic principles – including legal compliance, interoperability, IT security and data protection, but also the ethical principles of the European Union.
To safely steer this fast-paced strand of technological development towards the common and greater good, regulatory oversight must provide a reliable framework for both users and companies, while keeping a sense of proportion. I firmly believe that an assured regulatory framework enhances rather than impairs innovation and economic opportunities.