By CorpGov Editorial Staff
Over the last year, artificial intelligence (“AI”) has become the hot topic of conversation. Recent breakthroughs in generative AI are driving this surge in popularity, giving AI systems the ability to generate human language, images, and even music. These new abilities herald a wave of exciting applications, including automated customer support, document drafting, tax planning, real-time translation, and even AI-assisted computer programming. Such applications would have been dismissed as pure science fiction only a few years ago.
As companies race to incorporate AI systems into their businesses, many are grappling with the technology’s benefits and risks. Just as companies realized decades ago that they faced IT risks even though technology was not their core business, many are about to realize that they now face considerable risks from AI. And with the endless possibilities of AI come new and unpredictable challenges. NIST’s AI Risk Management Framework (AI RMF) provides a roadmap for understanding, evaluating, and managing the risks of AI systems.
AI Risk Management Is Unique
While the opportunities created by AI systems are extraordinary, the risks they present differ from those of other software systems. For instance, many AI systems do not leave a clear audit trail. An AI system can behave like a black box: without careful management, it is difficult to understand how a given output was produced. These systems rely on hundreds of billions (if not trillions) of data points, far more than humans or even traditional software applications can process.