Balancing AI with insider risk management


AI has officially taken off, yet organizations are squarely divided on its use in the workplace. Organizations that encourage the use of AI and Large Language Model (LLM) tools for productivity are willing to accept the associated risk, but they often lack the policies, education, and controls required to mitigate it, including the risk posed by insiders.

On the other hand, companies that take a hard line by prohibiting any installation or use of AI/LLM tools may leave their employees less productive. Fortunately, there is a middle ground that balances productivity with security and, importantly, with insider risk management.

Workforce education is key

It pays for organizations to be proactive about their cybersecurity posture, as insider risk incidents are becoming increasingly costly. In fact, the 2023 Ponemon Cost of Insider Risks Global Report found that the total average annual cost of an insider risk incident rose from $15.4 million in 2022 to $16.2 million in 2023. Interestingly, the report found that 75 percent of insider incidents are non-malicious, arising either from negligence or from being outsmarted by an…
