Almost 60% do not treat AI ethical risks separately: Airmic survey


Nearly 60% of organisations are not treating ethical risks related to artificial intelligence (AI) separately from other ethical concerns, according to a recent survey conducted by Airmic, a UK association for risk and insurance professionals.

In another poll asking whether these organisations believe AI’s ethical risks should be treated separately, respondents were almost evenly divided.

As organisations rapidly integrate AI applications into their frameworks, concerns about associated ethical risks remain largely uncharted territory. Consequently, respondents deemed it sensible to give these risks extra visibility and attention within their risk management frameworks and processes.

This trend coincides with increasing calls for organisations to establish AI ethics committees and develop separate AI risk assessment frameworks to navigate contentious ethical situations.

Julia Graham, CEO of Airmic, emphasises, “The ethical risks of AI are not yet well understood and additional attention could be spent understanding them, although in time, our members expect these risks to be considered alongside other ethical risks.”


Hoe-Yeong Loke, Head of Research at Airmic, explains, “There is a…
