Risk managers are increasingly concerned about the growing use of artificial intelligence, and believe governments worldwide, including the UK's, need to come together to create a regulatory system that will mitigate many of the risks they face.
The concerns come as the consultation period for the UK Department for Science, Innovation and Technology's (DSIT) draft Code of Practice on AI cyber security draws to a close, just a fortnight after the European Union's AI Act became law across the single market.
Given how rapidly AI has evolved and how widely it has been embedded, DSIT said it developed the code based on the National Cyber Security Centre's (NCSC) guidelines for secure AI system development, to ensure that cyber security underpins AI safety.
Governments worldwide are also concerned by the threat posed by AI-driven disinformation during elections, and many countries are looking to introduce new laws in an effort to control its use.
When launching the code, Viscount Camrose, former UK Minister for AI and Intellectual Property, explained: “Artificial Intelligence (AI) is a vital technology for the UK economy and for supporting people’s…