The UK government has released a report claiming that AI could enhance the ability of threat actors to carry out cyberattacks and terrorist attacks.
The report, released in advance of the UK AI Safety Summit on November 1 and 2, highlights the need for greater regulation of AI development. But unless governance is put in place on an international scale, there is little to stop the malicious use of AI, a top expert has told TechRadar Pro.
“There are existential risks where AI stops taking instructions from human beings and it starts doing what it wants, and we become dispensable, so there is no guarantee we will survive because we would just be a puppet. And that’s the fear of it all,” Avivah Litan, VP and Distinguished Analyst at Gartner, told us.
The case for greater regulation
AI has become increasingly accessible over the past few years following the public release of ChatGPT and other generative AI software, and many of us now use AI tools to boost productivity and efficiency at work.
As generative AI becomes more advanced and further democratized through open-source development, there is an increasing risk that the lack of safeguards could…