How a Professional Risk Manager Views Threats Posed by AI

Runaway artificial intelligence has been a science fiction staple since the 1909 publication of E. M. Forster’s The Machine Stops, and it rose to widespread, serious attention in 2023. The National Institute of Standards and Technology released its AI Risk Management Framework in January 2023. Other documents followed, including the Biden administration’s Oct. 30 executive order on Safe, Secure, and Trustworthy Artificial Intelligence and, the next day, the Bletchley Declaration on AI safety, signed by 28 countries and the European Union.

As a professional risk manager, I found all these documents lacking. I see more appreciation for risk principles in fiction. In 1939, author Isaac Asimov got tired of reading stories about intelligent machines turning on their creators. He insisted that people smart enough to build intelligent robots wouldn’t be stupid enough to omit moral controls: basic overrides built deep into the fundamental circuitry of all intelligent machines. Asimov’s First Law is: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” Regardless of an AI’s goals, it is forbidden to violate this law.
