A leaked draft of the European Commission's (Commission) proposed rules on the use of artificial intelligence (AI) in the European Union (EU) sets out tough new "human-centric" requirements for "high-risk" AI. The proposed rules also seek to ban certain types of AI applications - including those used for mass surveillance and social credit scores - and to regulate the use of others.
Similar to the GDPR, the proposed rules would have extra-territorial effect, with a potentially significant impact on the global AI ecosystem. New Zealand companies that develop or sell AI in the EU could be fined up to 4 per cent of their annual turnover for non-compliance.
The rules' objectives are to ensure that AI in the EU is transparent, has appropriate human oversight, and meets the EU's high standards of privacy. While an official version of the proposed rules is yet to be released, the leaked draft has already been heavily criticised for taking a simplistic approach to determining what counts as "high-risk" AI, with the potential to stifle innovation.
Key regulations
Some key rules that have been reported ahead of their official release are:[1] [2]
- Ban on certain types of AI applications: AI used for purposes such as "indiscriminate surveillance" and social credit scoring would be banned in the EU.
- Use of recognition technology: Special authorisation would be required to use remote biometric identification systems like facial recognition in public spaces.
- Notifications: Individuals would need to be notified if they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.
- Assessments for high-risk AI: EU member states would be required to set up assessment boards to test and validate high-risk AI systems before they are implemented. High-risk AI systems are likely to include AI applications used to scan CVs, make creditworthiness assessments, or help judges make decisions.[3]
- Kill-switch: High-risk AI systems would be required to have a "kill-switch" built in, so that the system can be turned off instantly if needed.
- European Artificial Intelligence Board: The rules would establish a new board, made up of representatives from every EU member state, to help the Commission decide which AI systems count as "high-risk" and to recommend changes to the prohibitions.
An official version of the rules is expected to be released on 21 April. Following this, feedback is likely to be sought from the European Parliament and EU member states.
The Russell McVeagh team will be monitoring the developments and will provide a further update when the rules have been formally released. In the meantime, if you have any questions relating to the new rules, or how they may relate to you, please contact us on the details below.