The National Institute of Standards and Technology (NIST) has issued the AI Risk Management Framework, a document linked to President Biden’s Executive Order. Released earlier this year and mandated by Congress, the Risk Management Framework explains in detail how to manage risks associated with AI. NIST will set rigorous standards for testing, and the Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. Homeland Security and the Department of Energy will address AI systems’ threats to critical infrastructure. A Chief AI Officer will be designated at every US Federal agency to protect human rights and safety, which of course includes pharmaceutical products and medical devices. For example, to advance the responsible use of AI in healthcare and the development of affordable, life-saving drugs, the Department of Health and Human Services will establish a safety program to receive reports of – and act to remedy – harmful or unsafe healthcare practices involving AI. And on November 2 of this year, the Department of Defense, which will be a major player in this arena, issued its AI adoption strategy.
The Executive Order cites the AI Bill of Rights, explicitly mentioning it as source material for the Federal Government to look to, alongside the Risk Management Framework. Federal agencies will be required to develop risk management procedures to protect things like civil rights. The Order also specifically calls for reducing barriers to effective deployment of AI, which means looking at resources such as high-skilled talent that the Federal Government could draw on to fully leverage artificial intelligence.
In a recent PQE TEQ Talk that brought together PQE Group CSV experts and US Chamber of Commerce AI and cybersecurity experts, the topics discussed encompassed the current state of AI and cybersecurity (and cyber threats), as well as current initiatives being undertaken by US and other countries’ regulators. These experts noted that organizations have been using AI to detect anomalous behavior on networks, defend their networks, and defend their clients. But ChatGPT and similar platforms have generated a huge amount of new content, and this is likely one of the issues we will be dealing with over the next few years: AI tools like ChatGPT have become more widely available, which adds another dimension of opportunity but also brings tremendous potential for risk. The Executive Order places more limitations and requirements on what are known as large, or foundation, models of artificial intelligence. The Executive Order also introduces new reporting requirements, especially for AI in two other areas: the nuclear space within the energy sector, and companies and developers using the kinds of large language models that the White House has indicated could potentially become a threat – for instance, could AI be used to develop biological weapons? The Executive Order also asks independent agencies, and tasks executive branch agencies, to develop guidelines and rules to protect consumers and individuals against privacy violations, which the White House considers a harm, and to prevent discrimination.
With the attention AI is receiving from leaders around the world, it is clear that mandatory regulations to ensure safe, secure, and trustworthy practices for AI will continue to intensify. PQE Group’s knowledgeable SMEs have expertise in the GxP and CSV sectors, with strong capabilities in managing risks that arise in technology systems and significant experience helping companies ensure their systems are secure. PQE Group can support your organization’s efforts to comply with these requirements and ensure your systems are safeguarded and protected.
This article contains content discussed at the recent PQE Group TEQ Talk, AI and Cybersecurity: the future is now! The TEQ Talk featured speakers Jordan Crenshaw, Senior Vice President of the Chamber Technology Engagement Center, and Matthew Eggers, Vice President of Cybersecurity Policy, Cyber, Intelligence, and Security Division, both with the US Chamber of Commerce, and Gaurav Walia, Sr. Associate Partner and Principal SME of Computer Systems Validation, Computer Software Assurance, and Data Integrity with PQE Group. The TEQ Talk was moderated by Robert Perks, Senior Director of Business Development with PQE Group.