The Current State of AI and Governmental Initiatives to Address Its Risks

by: Luca Zanotti Fragonara - Advanced Technologies Competence Centre Lead @PQE Group; Gaurav Walia - Principal SME of Computer Systems Validation, Computer Software Assurance, and Data Integrity, and Sr. Associate Partner @PQE Group

In our last article on AI, The EU AI Act and the New Scenario for a Global AI Regulation, PQE Group discussed the dramatic growth of AI and how the world is responding. Shortly before that article was written, the European Union introduced the AI Act, the first worldwide example of comprehensive AI regulation. On November 1-2, 2023, an important summit – the AI Safety Summit – was held in the UK. Heads of state and other leaders from around the world, along with CEOs of major organizations, attended the Summit to discuss AI and how to maintain safety and manage the risks associated with it. Just days earlier, on October 30, 2023, US President Joe Biden had issued a landmark Executive Order “to ensure that America leads the way in seizing the promise and managing the risks of artificial intelligence (AI).” This Executive Order tasks the federal government with developing standards for AI safety and security, and it requires developers of the most powerful AI systems to share their safety test results and other critical information with the U.S. Government. It also requires the development of standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy before companies make them public.

The National Institute of Standards and Technology (NIST) has issued a document, the AI Risk Management Framework, that is linked to President Biden’s Executive Order. Released earlier this year and mandated by Congress, the Framework explains, in detail, how to manage the risks associated with AI. NIST will set rigorous standards for testing, and the Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. Homeland Security and the Department of Energy will also address the threats AI systems pose to critical infrastructure. A Chief AI Officer will be designated at every federal agency to protect human rights and safety, which, of course, includes pharmaceutical products and medical devices. For example, to advance the responsible use of AI in healthcare and the development of affordable, life-saving drugs, the Department of Health and Human Services will establish a safety program to receive reports of – and act to remedy – harmful or unsafe healthcare practices involving AI. And on November 2, 2023, the Department of Defense, which will be a major player in this arena, issued its AI adoption strategy.

The Executive Order cites the AI Bill of Rights, explicitly naming it – along with the AI Risk Management Framework – as source material for the Federal Government to look to. Federal agencies will be required to develop risk management procedures to protect civil rights and similar safeguards. The Order also specifically mentions reducing barriers to the effective deployment of AI, which means looking at factors such as the highly skilled talent the Federal Government will need to fully leverage artificial intelligence.

In a recent PQE TEQ Talk that included PQE Group CSV experts and US Chamber of Commerce AI and Cybersecurity experts, the topics discussed encompassed the current state of AI and Cybersecurity (and cyber threats), as well as current initiatives being undertaken by US and other countries’ regulators. These experts noted that organizations have long used AI to help them detect anomalous behavior on networks, defend those networks, and defend their clients. But ChatGPT and other similar platforms have generated a huge amount of new content, and this is likely one of the issues we will be dealing with over the next few years: as AI tools like ChatGPT become more widely available, they add another dimension to the opportunities but also bring tremendous potential for risk. The Executive Order places more limitations and more requirements around what are known as large or foundation models of artificial intelligence. It also introduces new reporting requirements, especially in two areas: the nuclear space within the energy sector, and companies and developers using the kinds of large language models that the White House has indicated could become a threat – for example, could AI be used to develop biological weapons? The Executive Order also asks independent agencies, and tasks executive branch agencies, with developing guidelines and rules to protect consumers and individuals against privacy violations, which the White House considers a harm, and to prevent discrimination.

With the attention AI is receiving from leaders around the world, it is clear that mandatory regulation to ensure safe, secure, and trustworthy AI practices will continue to intensify. PQE Group’s knowledgeable SMEs have expertise in the GxP and CSV sectors, with strong capabilities in managing the risks that arise in technology systems and significant experience in helping companies ensure their systems are secure. PQE Group can support your organization’s efforts to comply with these requirements and ensure your systems are safeguarded and protected.


This article contains content discussed at the recent PQE Group TEQ Talk, AI and Cybersecurity: the future is now! The TEQ Talk featured speakers Jordan Crenshaw, Senior Vice President of the Chamber Technology Engagement Center, and Matthew Eggers, Vice President of Cybersecurity Policy, Cyber, Intelligence, and Security Division, both with the US Chamber of Commerce, and Gaurav Walia, Sr. Associate Partner and Principal SME of Computer Systems Validation, Computer Software Assurance, and Data Integrity with PQE Group. Moderating the TEQ Talk was Robert Perks, Senior Director of Business Development with PQE Group.

Want to know more?

PQE Group’s staff comprises experienced and skilled experts working in multidisciplinary teams, available to support your company in achieving the highest levels of safety for your systems. Visit our Digital Governance services page to learn more or to contact us, and find the most suitable solution for your company.
