The EU AI Act and the New Scenario for a Global AI Regulation: AI from Values to Business Growth from a PQE Group Perspective

by: Luca Zanotti Fragonara - Advanced Technologies Competence Centre Lead @PQE Group; Gaurav Walia - Principal SME of Computer Systems Validation, Computer Software Assurance and Data Integrity and Sr. Associate Partner @PQE Group

We live in the era of Artificial Intelligence, and we are now familiar with various aspects of it: ChatGPT, Copilot, avatars, algorithms, and many more. Artificial Intelligence has become pervasive in many of the daily processes that businesses engage in (e.g., social media, Research & Development, decision-making, manufacturing, cybersecurity, etc.). Its use and development raise numerous ethical and professional questions.


How does PQE Group use Artificial Intelligence? 

PQE Group primarily deals with Artificial Intelligence (AI) in the context of Life Sciences, which includes pharmaceutical companies as well as companies that produce medical devices; both of these industries are highly regulated. In this field, Artificial Intelligence, Machine Learning (ML), and data usage in general have been rapidly accelerating in recent years and are focused on various utility areas. 

There are more advanced areas, such as Research & Development, where companies are using Artificial Intelligence techniques and models to discover new drug formulas. Other AI systems support drug production itself, for example by optimizing production lines through algorithms that analyze data from the lines' information systems. Another typical use of AI in Life Sciences, given the high volume of documentation these companies produce, is AI and ML software for information retrieval (e.g., literature or procedural review, in the style of “agents” answering questions about a specific document). A further use case is the detection of cyber-threats and the prevention of cyber-attacks.
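To make the retrieval idea concrete, here is a minimal illustration (not a production system, and not PQE Group's actual tooling): ranking document passages for a question by simple word overlap, the most basic form of the retrieval step behind “agent”-style document Q&A. The passages and question are invented examples.

```python
# Minimal sketch: rank candidate passages by how many words they share
# with the question. Real systems use embeddings, but the retrieval
# principle (score, then sort) is the same.
def rank_passages(question, passages):
    """Return passages sorted by shared-word count with the question."""
    q_words = set(question.lower().split())
    scored = [(len(q_words & set(p.lower().split())), p) for p in passages]
    return [p for score, p in sorted(scored, key=lambda s: -s[0])]

passages = [
    "Cleaning validation must be repeated after major equipment changes.",
    "Annex 11 covers computerised systems in GMP environments.",
]
best = rank_passages("Which annex covers computerised systems?", passages)[0]
print(best)  # the Annex 11 passage ranks first
```

A production pipeline would replace the word-overlap score with vector similarity and pass the top-ranked passages to a language model for answering.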

In a recent use case, carried out in collaboration with a Pharma manufacturing facility, PQE Group automated the monitoring of water-system quality and the prediction of future anomalies. Monitoring systems are one of the major industrial applications of Artificial Intelligence, as they can track multiple variables in real time, enabling more complex process control. This project was particularly significant because it was a GxP use case; the model therefore had to be validated according to a risk-based framework that we developed from scratch. The system has been deployed, validated, and in operation for nearly two years, and continues to perform as designed and intended with positive quality attributes.
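As a hedged illustration of the underlying idea (not the validated system described above), anomaly detection on a monitored signal can be sketched as flagging readings that deviate strongly from a trailing window's statistics. The conductivity trace and thresholds below are invented for the example.

```python
# Illustrative sketch: flag anomalies in a water-quality signal using a
# rolling mean and standard deviation over a trailing window.
from statistics import mean, stdev

def detect_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings deviating more than `threshold`
    standard deviations from the trailing window's mean."""
    anomalies = []
    for i in range(window, len(readings)):
        ref = readings[i - window:i]
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Example: a stable conductivity trace with one spike at index 8.
trace = [5.0, 5.1, 4.9, 5.0, 5.1, 5.0, 4.9, 5.1, 9.5, 5.0]
print(detect_anomalies(trace))  # → [8]
```

A GxP deployment would of course add model documentation, acceptance criteria, and change control on top of any such detection logic.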

We also use AI internally; for example, PQE Group has developed an AI-based system to automatically process the management of all internal expenses, such as receipts and invoices. As you can imagine, a consulting company like PQE Group must manage a significant volume of expenses, given the number of both clients and employees. We use AI both for the computer vision component, which automatically reads receipts, and for complex classification tasks. In this case, PQE used a Large Language Model (i.e., a ChatGPT-style solution) to classify expense types, such as differentiating between a supermarket receipt, a dinner expense, and a lunch expense. It works quite well, and PQE will be extending the system to other business areas.
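The classification step can be sketched as follows. This is a hypothetical example, not PQE Group's implementation: it shows only the prompt-building stage for single-label expense classification, since the model call itself is provider-specific. The category names and receipt text are invented.

```python
# Hypothetical sketch of the prompt-building step for LLM-based expense
# classification; the actual model call is omitted as provider-specific.
CATEGORIES = ["supermarket", "restaurant_dinner", "restaurant_lunch", "other"]

def build_classification_prompt(receipt_text):
    """Assemble a single-label classification prompt from OCR'd receipt text."""
    return (
        "Classify the expense below into exactly one of these categories: "
        + ", ".join(CATEGORIES) + ".\n"
        "Answer with the category name only.\n\n"
        "Receipt text:\n" + receipt_text
    )

prompt = build_classification_prompt("COOP Milano - bread, milk, pasta - EUR 23.40")
print(prompt)
```

Constraining the model to answer with one category name keeps the output machine-parseable, which matters when the result feeds an automated expense workflow.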

With respect to the future prevalence of Artificial Intelligence in our lives, how much more might we see in the near future? 

Artificial Intelligence is being introduced at the research level, but it has not yet fully reached the production level because this is a highly regulated field. The European Union has introduced the AI Act, the first example of AI regulation worldwide. This means that applications classified as higher risk may be affected; for example, the use of AI for facial recognition may well be prohibited under the AI Act. While many use cases for AI have been presented thus far, some may still be allowed, such as simple algorithms (e.g., Alexa) for tracking a person inside a home, where individual privacy and security are maintained. It is clear that a whole new world will emerge in the coming years.

Will the European regulation, which is the first example of a worldwide AI regulation, limit or promote the development of AI? 

There is significant debate on this issue, with two clear schools of thought. Some are in favor of introducing certain boundaries: Artificial Intelligence is a tool with great potential that can certainly improve and positively impact humanity in many ways, and its benefits are visible to basically anyone, but there are also serious risks involved. Prof. Stuart Russell, one of the leading experts in AI, often cites this example: think of how our lives have changed over the last 20 years since the introduction of recommendation systems on social networks, which used very simple machine learning algorithms to propose ad hoc advertisements and content.

On social networks, for example, individuals’ feeds are populated with news selected by these algorithms. Over the last 10-15 years, during which social networks have greatly impacted society, we have seen a change in human behavior. This is scientifically documented: these algorithms work by exploiting individuals’ interests, since people are more likely to click on content of a type that interests them. It has thus been shown that these algorithms not only optimize the probability of a click but, as a result, also shape users’ choices. This effect is already in motion, yet it was never specifically designed for. We must therefore learn to anticipate such outcomes and to set boundaries when taking design and engineering decisions.

PQE Group is working to create best practices that bring greater robustness to the development of AI algorithms. PQE consultants are highly qualified system engineers with a deep understanding of AI/ML and of cybersecurity protocols and systems. We are able to design tools, integrate components, instill good practices, and test product functionality, which is particularly challenging for AI algorithms. This has been our mission for years across all computerized systems and, now that the technical challenge is becoming much larger, PQE Group brings specialized knowledge in AI/ML, as we are used to assessing and managing risks in these systems. This risk-based approach will need to be applied to all types of AI/ML systems within Life Sciences and beyond, and it positions PQE Group well in this area of expertise.

Is the development of AI potentially dangerous for humanity in the future? 

To answer this question, we must first define what is meant by “Artificial General Intelligence.” First of all, it is very difficult to define what intelligence is and what it is not. Even the famous Turing test is nowadays challenged by tools such as ChatGPT, which might hence be considered intelligent in some sense; in practical terms, however, such tools are not very robust and can provide completely wrong answers to simple questions. The danger lies in creating black boxes whose inner workings are unclear and which can therefore interact with humans in harmful ways. This does not necessarily mean they could take control of humanity, but, as noted above, even a simple algorithm managing a social network feed has already had a significant impact on our society; imagine what a more complex algorithm could do. The problem, then, is not so much the models or algorithms themselves but their use in an uncontrolled manner, often extending beyond their original intended purpose. It is therefore of paramount importance that these algorithms be controlled in some fashion for critical applications.

For those who are keen to delve deeper into AI/ML and its interplay with cybersecurity in regulated environments, PQE Group's TEQ Talk, in collaboration with the US Chamber of Commerce, is hosting an insightful webinar entitled AI and Cybersecurity: the future is now! This is an opportunity not only to understand the current landscape but also to envision the future of AI and cybersecurity in such critical sectors.  

Join us on November 16, 2023, at 11:00 AM Eastern Time. Click below and register today!

Secure your seat!

 

Want to know more?

PQE Group’s staff comprises experienced and skilled experts organized in multidisciplinary teams, available to support your company in achieving the highest levels of safety for your systems. Visit our Digital Governance services page to learn more or to contact us and find the most suitable solution for your company.

Connect with us