New European Commission rules for Artificial Intelligence – Questions and Answers


Why do we need to regulate the use of Artificial Intelligence technology?

The potential benefits of AI for our societies are manifold, from improved medical care to better education. Faced with the rapid technological development of AI, the EU must act as one to harness these opportunities. While most AI systems will pose low to no risk, certain AI systems create risks that need to be addressed to avoid undesirable outcomes.


For example, the opacity of many algorithms may create uncertainty and hamper the effective enforcement of the existing legislation on safety and fundamental rights. Responding to these challenges, legislative action is needed to ensure a well-functioning internal market for AI systems where both benefits and risks are adequately addressed. This includes applications such as biometric identification systems or AI decisions touching on important personal interests, such as in the areas of recruitment, education, healthcare or law enforcement. The Commission’s proposal for a regulatory framework on AI aims to ensure the protection of fundamental rights and user safety, as well as trust in the development and uptake of AI.

Which risks will the new AI rules address?

The uptake of AI systems has a strong potential to bring societal benefits and economic growth and to enhance EU innovation and global competitiveness. However, the specific characteristics of certain AI systems may create new risks related to user safety and fundamental rights. These risks lead to legal uncertainty for companies and, due to the lack of trust, potentially slower uptake of AI technologies by businesses and citizens. Disparate regulatory responses by national authorities would risk fragmenting the internal market.

To whom does the proposal apply?

The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market or its use affects people located in the EU. It can concern both providers (e.g. a developer of a CV-screening tool) and users of high-risk AI systems (e.g. a bank buying this resume screening tool). It does not apply to private, non-professional uses.

What are the risk categories?

The Commission proposes a risk-based approach, with four levels of risk:

Unacceptable risk: A very limited set of particularly harmful uses of AI that contravene EU values because they violate fundamental rights (e.g. social scoring by governments, exploitation of vulnerabilities of children, use of subliminal techniques, and – subject to narrow exceptions – live remote biometric identification systems in publicly accessible spaces used for law enforcement purposes) will be banned.

High-risk: A limited number of AI systems defined in the proposal, which create an adverse impact on people’s safety or their fundamental rights (as protected by the EU Charter of Fundamental Rights), are considered high-risk. Annexed to the proposal is the list of high-risk AI systems, which can be reviewed to align with the evolution of AI use cases (future-proofing).

These also include safety components of products covered by sectorial Union legislation. They will always be high-risk when subject to third-party conformity assessment under that sectorial legislation.

In order to ensure trust and a consistent and high level of protection of safety and fundamental rights, mandatory requirements for all high-risk AI systems are proposed. Those requirements cover the quality of data sets used; technical documentation and record keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity. In case of a breach, the requirements will allow national authorities to have access to the information needed to investigate whether the use of the AI system complied with the law.

The proposed framework is consistent with the Charter of Fundamental Rights of the European Union and in line with the EU’s international trade commitments.

Limited risk: For certain AI systems specific transparency requirements are imposed, for example where there is a clear risk of manipulation (e.g. via the use of chatbots). Users should be aware that they are interacting with a machine. 

Minimal risk: All other AI systems can be developed and used subject to the existing legislation without additional legal obligations. The vast majority of AI systems currently used in the EU fall into this category. Voluntarily, providers of those systems may choose to apply the requirements for trustworthy AI and adhere to voluntary codes of conduct.
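
Purely as an illustrative summary (the tiers and consequences below are paraphrased from this Q&A; the mapping is a sketch, not a legal classification tool), the four levels can be pictured as a simple lookup from risk tier to regulatory consequence:

```python
# Illustrative sketch only: the four risk tiers described above, mapped to
# the regulatory consequence this Q&A associates with each. Paraphrased
# from the text; not a legal classification tool.
RISK_TIERS = {
    "unacceptable": "banned (e.g. social scoring by governments)",
    "high": "mandatory requirements and conformity assessment before market entry",
    "limited": "specific transparency obligations (e.g. disclosing chatbots)",
    "minimal": "existing legislation only; voluntary codes of conduct",
}

def obligation(tier: str) -> str:
    """Look up the consequence attached to a given risk tier."""
    return RISK_TIERS[tier.lower()]

print(obligation("limited"))
```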

How did you select the list of stand-alone high-risk AI systems (not embedded in products)? Will you update it?

Together with a clear definition of ‘high-risk’, the Commission puts forward a solid methodology that helps identify high-risk AI systems within the legal framework. This aims to provide legal certainty for businesses and other operators.

The risk classification is based on the intended purpose of the AI system, in line with the existing EU product safety legislation. It means that the classification of the risk depends on the function performed by the AI system and on the specific purpose and modalities for which the system is used.

The criteria for this classification include the extent of the use of the AI application and its intended purpose, the number of potentially affected persons, the dependency on the outcome and the irreversibility of harms, as well as the extent to which existing Union legislation provides for effective measures to prevent or substantially minimise those risks.

A list of critical fields makes the classification clearer by identifying applications in the areas of biometric identification and categorisation, critical infrastructure, education, recruitment and employment, provision of important public and private services, as well as law enforcement, asylum and migration, and justice.

Annexed to the proposal is a list of use cases which the Commission currently considers to be high-risk. The Commission will ensure that this list is kept up to date and relevant, based on the above-mentioned criteria, evidence and expert opinions, in broad consultation with stakeholders.

How does the proposal address remote biometric identification?

Under the new rules, all AI systems intended to be used for remote biometric identification of persons will be considered high-risk and subject to an ex-ante third-party conformity assessment, including documentation and human oversight requirements by design. High-quality data sets and testing will help to make sure such systems are accurate and do not have discriminatory impacts on the affected population.

The use of real-time remote biometric identification in publicly accessible spaces for law enforcement purposes poses particular risks for fundamental rights, notably human dignity, respect for private and family life, protection of personal data and non-discrimination. It is therefore prohibited in principle with a few, narrow exceptions that are strictly defined, limited and regulated. They include the use for law enforcement purposes for the targeted search for specific potential victims of crime, including missing children; the response to the imminent threat of a terror attack; or the detection and identification of perpetrators of serious crimes.

Finally, all emotion recognition and biometric categorisation systems will always be subject to specific transparency requirements. They will also be considered high-risk applications if they fall under the use cases identified as such, for example in the areas of employment, education, law enforcement, migration and border control.

Why are particular rules needed for remote biometric identification? 

Biometric identification can take different forms. It can be used for user authentication, i.e. to unlock a smartphone, or for verification/authentication at border crossings to check a person’s identity against his/her travel documents (one-to-one matching). Biometric identification can also be used remotely, to identify people in a crowd, where for example an image of a person is checked against a database (one-to-many matching).
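
To make the distinction concrete, here is a minimal sketch of the two matching modes. The embeddings, names and threshold are hypothetical assumptions; real biometric systems are far more involved:

```python
# Minimal sketch of one-to-one vs one-to-many matching, using hypothetical
# face-embedding vectors and an assumed similarity threshold.
import numpy as np

THRESHOLD = 0.8  # assumed decision threshold; tuned per system in practice

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, claimed_template: np.ndarray) -> bool:
    """One-to-one matching: does the probe match one claimed identity?"""
    return cosine_similarity(probe, claimed_template) >= THRESHOLD

def identify(probe: np.ndarray, database: dict) -> list:
    """One-to-many matching: which of many database entries match the probe?"""
    return [name for name, template in database.items()
            if cosine_similarity(probe, template) >= THRESHOLD]

# e.g. identify(probe_vector, {"person_1": tmpl_1, "person_2": tmpl_2, ...})
```

Note that in the one-to-many case every database entry is a fresh opportunity for a false match, which is one reason remote identification is treated more strictly.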

The accuracy of facial recognition systems can vary significantly based on a wide range of factors, such as camera quality, light, distance, database, algorithm, and the subject’s ethnicity, age or gender. The same applies to gait and voice recognition and other biometric systems. Highly advanced systems are continuously reducing their false acceptance rates. While a 99% accuracy rate may sound good in general, it is considerably risky when the result leads to the suspicion of an innocent person. Even a 0.1% error rate adds up quickly when tens of thousands of people are concerned, as the calculation below illustrates.
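
A back-of-the-envelope calculation of that last point (both figures are illustrative assumptions, not drawn from the Regulation):

```python
# Even a small false acceptance rate produces many false matches at
# crowd scale. Both figures below are illustrative assumptions.
false_acceptance_rate = 0.001   # the 0.1% error rate mentioned above
people_scanned = 50_000         # e.g. people passing a scanned public space

expected_false_matches = false_acceptance_rate * people_scanned
print(f"Expected false matches: {expected_false_matches:.0f}")  # -> 50
```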

What are the obligations for providers of high-risk AI systems?

Before placing a high-risk AI system on the EU market or otherwise putting it into service, providers must subject it to a conformity assessment. This will allow them to demonstrate that their system complies with the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy and robustness). If the system or its purpose is substantially modified, the assessment will have to be repeated. For certain AI systems, an independent notified body will also have to be involved in this process. AI systems that are safety components of products covered by sectorial Union legislation will always be deemed high-risk when subject to third-party conformity assessment under that sectorial legislation. A third-party conformity assessment is also always required for biometric identification systems.

Providers of high-risk AI systems will also have to implement quality and risk management systems to ensure their compliance with the new requirements and minimise risks for users and affected persons, even after a product is placed on the market. Market surveillance authorities will support post-market monitoring through audits and by offering providers the possibility to report on serious incidents or breaches of fundamental rights obligations of which they have become aware.

How will compliance be enforced?

Member States hold a key role in the application and enforcement of this Regulation. In this respect, each Member State should designate one or more national competent authorities to supervise the application and implementation, as well as carry out market surveillance activities. In order to increase efficiency and to set an official point of contact with the public and other counterparts, each Member State should designate one national supervisory authority, which will also represent the country in the European Artificial Intelligence Board.

What are the penalties for infringement?

  • When AI systems are put on the market or in use that do not respect the requirements of the Regulation, Member States will have to lay down effective, proportionate and dissuasive penalties, including administrative fines, in relation to infringements and communicate them to the Commission.
  • The Regulation sets out thresholds that need to be taken into account (a short worked example follows this list):
    • Up to €30m or 6% of the total worldwide annual turnover of the preceding financial year (whichever is higher) for infringements on prohibited practices or non-compliance related to requirements on data;
    • Up to €20m or 4% of the total worldwide annual turnover of the preceding financial year for non-compliance with any of the other requirements or obligations of the Regulation;
    • Up to €10m or 2% of the total worldwide annual turnover of the preceding financial year for the supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request.
  • In order to harmonise national rules and practices in setting administrative fines, the Commission, drawing on the advice of the Board, will issue guidelines.
  • As EU institutions, agencies or bodies should lead by example, they will also be subject to the rules and to possible penalties; the European Data Protection Supervisor will have the power to impose fines on them.
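
As a worked example of the top tier’s “whichever is higher” rule (the turnover figures below are hypothetical):

```python
# Worked example of the top penalty tier described above: up to €30m or 6%
# of total worldwide annual turnover, whichever is higher. Figures are
# hypothetical, not real cases.
def max_fine_top_tier(annual_turnover_eur: float) -> float:
    """Ceiling for prohibited-practice or data-requirement infringements."""
    return max(30_000_000, 0.06 * annual_turnover_eur)

print(max_fine_top_tier(100_000_000))    # smaller firm: the €30m floor applies
print(max_fine_top_tier(2_000_000_000))  # large firm: 6% rule applies -> €120m
```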

What is the European Artificial Intelligence Board?

The European Artificial Intelligence Board would comprise high-level representatives of competent national supervisory authorities, the European Data Protection Supervisor, and the Commission. Its role will be to facilitate a smooth, effective and harmonised implementation of the new AI Regulation. The Board will issue recommendations and opinions to the Commission regarding high-risk AI systems and on other aspects relevant for the effective and uniform implementation of the new rules. It will also help build up expertise and act as a competence centre that national authorities can consult. Finally, it will support standardisation activities in the area.

How do the rules protect fundamental rights?

There is already strong protection for fundamental rights and for non-discrimination in place at EU and Member State level, but the complexity and opacity of certain AI applications (‘black boxes’) pose a problem. A human-centric approach to AI means ensuring that AI applications comply with fundamental rights legislation. Accountability and transparency requirements for the use of high-risk AI systems, combined with improved enforcement capacities, will ensure that legal compliance is factored in at the development stage. Where breaches occur, such requirements will allow national authorities to have access to the information needed to investigate whether the use of AI complied with EU law.

How does this regulation address racial and gender bias in AI?

  • It is very important that AI systems do not create or reproduce bias. Rather, when properly designed and used, AI systems can contribute to reducing bias and existing structural discrimination, and thus lead to more equitable and non-discriminatory decisions (e.g. in recruitment).
  • The new mandatory requirements for all high-risk AI systems will serve this purpose. AI systems must be technically robust to guarantee that the technology is fit for purpose and that false positive/negative results do not disproportionately affect protected groups (e.g. on grounds of racial or ethnic origin, sex or age); a brief sketch of such a check follows this answer.
  • High-risk systems will also need to be trained and tested with sufficiently representative datasets to minimise the risk of unfair biases embedded in the model and to ensure that these can be addressed through appropriate bias detection, correction and other mitigating measures.
  • They must also be traceable and auditable, ensuring that appropriate documentation is kept, including documentation of the data used to train the algorithm, which will be key in ex-post investigations.

Compliance systems, applied both before and after these systems are placed on the market, will have to ensure that they are regularly monitored and that potential risks are promptly addressed.
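
To illustrate the kind of disparity check the robustness requirement implies (a sketch under simplifying assumptions, with hypothetical data; not a method prescribed by the Regulation), false positive rates can be compared across groups:

```python
# Sketch: compare false positive rates across demographic groups.
# Data and group labels are hypothetical; not a prescribed method.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, predicted_positive, actually_positive)."""
    fp = defaultdict(int)   # false positives per group
    neg = defaultdict(int)  # actual negatives per group
    for group, predicted, actual in records:
        if not actual:
            neg[group] += 1
            if predicted:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg}

# Hypothetical screening outcomes: (group, flagged, truly relevant)
sample = [("A", True, False), ("A", False, False), ("A", False, False),
          ("B", True, False), ("B", True, False), ("B", False, False)]
print(false_positive_rates(sample))  # -> A: 1/3, B: 2/3 — a disparity to probe
```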

What are voluntary codes of conduct?

Providers of non-high-risk applications can ensure that their AI system is trustworthy by developing their own voluntary codes of conduct or adhering to codes of conduct adopted by other representative associations. These codes will apply alongside the transparency obligations for certain AI systems. The Commission will encourage industry associations and other representative organisations to adopt voluntary codes of conduct.

Will imports of AI systems and applications need to comply with the framework?

Yes. Importers of AI systems will have to ensure that the foreign provider has already carried out the appropriate conformity assessment procedure and has the technical documentation required by the Regulation. Additionally, importers should ensure that their system bears a European Conformity (CE) marking and is accompanied by the required documentation and instructions for use.

How can the new rules support innovation?

The regulatory framework can enhance the uptake of AI in two ways. On the one hand, increasing users’ trust will increase the demand for AI used by companies and public authorities. On the other hand, by increasing legal certainty and harmonising rules, AI providers will access bigger markets, with products that users and consumers appreciate and purchase.

Rules will apply only where strictly needed and in a way that minimises the burden for economic operators, with a light governance structure. In addition, an ecosystem of excellence, including regulatory sandboxes establishing a controlled environment to test innovative technologies for a limited time, access to Digital Innovation Hubs and access to Testing and Experimentation Facilities, will help innovative companies, SMEs and start-ups to continue innovating in compliance with the new rules for AI and the other applicable legal rules. These measures, together with others such as the additional Networks of AI Excellence Centres and the Public-Private Partnership on Artificial Intelligence, Data and Robotics, will help build the right framework conditions for companies to develop and deploy AI.

What is the international dimension of the EU’s approach?

The proposal for a regulatory framework and the Coordinated Plan on AI are part of the European Union’s efforts to be a global leader in the promotion of trustworthy AI at international level. AI has become an area of strategic importance at the crossroads of geopolitics, commercial stakes and security concerns. Given its utility and potential, countries around the world are embracing AI as a marker of technological advancement. AI regulation is only emerging, and the EU will take action to foster the setting of global AI standards in close collaboration with international partners, in line with the rules-based multilateral system and the values it upholds. The EU intends to deepen partnerships, coalitions and alliances with partners (e.g. Japan, the US or India) as well as multilateral (e.g. OECD and G20) and regional organisations (e.g. Council of Europe).
