AI is not limited to generative AI; it can be applied to a multitude of use cases. In this context, the EU is the first to attempt to comprehensively regulate AI as part of its digital strategy (EU Digital Strategy). The EU AI Act (AIA) is a regulation that proposes harmonized rules for AI systems, aiming to regulate AI technologies. The AIA is expected to pass into law in early 2024.
The AIA introduces the concept of risk in AI systems, classifying them into four possible levels: Unacceptable, High, Limited or Minimal.
Unacceptable risk covers applications that pose a potential threat to people, for instance behavioural manipulation, social scoring or remote biometric identification systems (e.g. facial recognition).
High-risk AI systems are the ones that can potentially affect safety and fundamental rights, and can be classified into two categories:
- AI systems that are used in products falling under the EU’s product safety legislation which includes toys, aviation, cars, medical devices and lifts;
- AI systems that fall into one of eight specific areas and will have to be registered in an EU database:
  - biometric identification and categorization of natural persons;
  - management and operation of critical infrastructure;
  - education and vocational training;
  - employment, worker management and access to self-employment;
  - access to and enjoyment of essential private services and public services and benefits;
  - law enforcement;
  - migration, asylum and border control management;
  - assistance in legal interpretation and application of the law.
The AIA will require providers of high-risk AI systems to create and maintain a risk management process covering the entire lifecycle of the AI system, determining appropriate mitigation measures. AI systems will need to be validated for their intended use, tested against metrics defined a priori and validated against probabilistic thresholds. Data governance controls must be established, ensuring that training, validation and testing datasets are complete, free of errors and representative of the problem being solved. Detailed technical documentation, including system architecture, algorithmic design and model specifications, must be provided. Events must be logged automatically while the system is running, and sufficient transparency must be enabled by design so that users can interpret the output. Human oversight must be guaranteed by design at all times, minimizing risks to health, safety and fundamental rights.
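As an illustration of the automatic-logging and traceability requirements, the sketch below wraps a prediction function so that every call is recorded with a timestamp and the model version. This is a minimal, hypothetical example: the class and function names (`AuditedModel`, `predict_fn`) are ours, not prescribed by the AIA, and a real system would write to an append-only, tamper-evident store rather than an in-memory list.

```python
from datetime import datetime, timezone

class AuditedModel:
    """Wraps a prediction function and logs every call for later review,
    supporting traceability of an AI system's outputs."""

    def __init__(self, predict_fn, model_version="0.1"):
        self.predict_fn = predict_fn
        self.model_version = model_version
        self.events = []  # in a real deployment: an append-only audit store

    def predict(self, inputs):
        output = self.predict_fn(inputs)
        # Record what went in and what came out, with provenance metadata.
        self.events.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": self.model_version,
            "inputs": inputs,
            "output": output,
        })
        return output

# Usage: a toy classifier standing in for any AI system.
model = AuditedModel(lambda x: "high-risk" if x["score"] > 0.7 else "low-risk")
result = model.predict({"score": 0.9})  # the call is logged automatically
```

Keeping the audit trail outside the model itself is one way to retrofit logging onto systems that were not built with it, though logging by design inside the system is closer to the regulation's intent.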
These concepts are deeply familiar to PQE Group, which has 25 years of experience in risk-based quality assurance in the highly regulated Life Sciences sector. PQE Group provides compliance solutions to help customers achieve their goals, and has mastered risk-based quality assurance approaches for complex digital systems. The AIA will act as a powerful stimulus, as new sectors can also leverage PQE Group's expertise in assuring quality by design, managing and mitigating risks in digital systems, and providing digital governance.