This proposal would introduce restrictions and transparency obligations on the use of AI systems, including chatbots such as ChatGPT. If the proposal becomes law, all AI systems placed on the EU market would have to comply with these rules.
The regulation would apply uniformly across all member states and takes a risk-based approach: AI systems are classified into four main categories, ranging from unacceptable risk to minimal risk.
Possible prohibitions on the use of AI
AI systems that pose a clear threat to people's safety, livelihoods, and rights fall into the unacceptable-risk category, and their use would be prohibited outright. Systems that subvert people's free will, manipulate human behavior, or perform social scoring would also be banned.
The high-risk category covers areas such as critical infrastructure, education, credit scoring, exam assessment, migration, asylum and border management, travel document verification, biometric identification systems, and judicial and democratic processes. Strict requirements would apply before these systems could be placed on the market: they must be non-discriminatory, their results must be traceable, and they must operate under adequate human oversight.
Law enforcement authorities could use biometric identification systems, such as facial recognition, in public spaces only in narrowly defined cases such as terrorism and serious crime. Even then, such uses would be limited and subject to judicial authorization.
Chatbots such as ChatGPT fall into the limited-risk category. Users must be informed that they are interacting with a machine rather than a real person. This transparency rule is intended to protect users while still allowing this type of technology to be used.
In conclusion, the European Union is working to establish a framework of clear rules for the use of AI, including chatbots such as ChatGPT. These rules aim to safeguard safety and fundamental rights while promoting technological innovation.