
How to help your business avoid AI pitfalls

June 9, 2025

AI can introduce data leaks and cyber risks that aren't always easy to spot. Getting tough on new tools, use cases and shadow data helps protect your business while you take advantage of AI.

As AI and generative AI become more integrated into business processes, the pressure is on risk and cybersecurity professionals to shield organizations from data compromise or damage to operations.

In this insight, the first of our How to… cybersecurity series, our cyber risk analytics consultants share practical steps and the strategic focus your organization needs to improve cybersecurity and respond effectively to AI-related breaches, while still taking advantage of AI opportunities.

Get tough on new AI tools and uses

Your IT staff may be facing daily requests to enable new AI tools. Some of these may have weak security controls or an unclear use case. Like any new technology you’re thinking of introducing into your IT environment, you’ll need to vet each new instance and use case, setting out and documenting clearly defined usage parameters.

This is an area where cybersecurity work can build valuable collaboration between key players such as the CIO, CISO, data officer, risk manager and legal colleagues. Together, these business functions can review each new AI tool, set usage guidelines and disseminate information to the rest of your organization.

Industry-specific analytics can help you assess and quantify the risks associated with third-party AI tools and their potential impact on your business. By quantifying the likelihood and impact of AI-driven cyber risks, you can inform more effective and efficient decision-making on which tools to use and how to mitigate associated risks. Periodic technology rationalization assessments can also help determine if each AI tool in use is still necessary.
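One common way to put numbers on likelihood and impact is annualized loss expectancy (ALE = annual rate of occurrence × single loss expectancy). The sketch below ranks hypothetical AI tools by expected yearly loss; every tool name and figure is illustrative, not data from any real assessment:

```python
# Illustrative annualized loss expectancy (ALE) ranking for third-party
# AI tools. All tool names, rates and loss figures are hypothetical.

def ale(annual_rate_of_occurrence: float, single_loss_expectancy: float) -> float:
    """ALE = ARO x SLE: expected yearly loss from a given risk."""
    return annual_rate_of_occurrence * single_loss_expectancy

# Hypothetical tools: (name, estimated breaches per year, estimated loss per breach)
tools = [
    ("chat-assistant", 0.10, 250_000),
    ("code-copilot", 0.05, 400_000),
    ("doc-summarizer", 0.30, 50_000),
]

# Rank by expected annual loss to prioritize mitigation spend.
ranked = sorted(tools, key=lambda t: ale(t[1], t[2]), reverse=True)
for name, aro, sle in ranked:
    print(f"{name}: expected annual loss ${ale(aro, sle):,.0f}")
```

Even this crude ranking makes trade-offs visible: a low-severity tool used to excess can out-risk a rarely breached but expensive one.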

Check your data (and shadow data)

Data is the lifeblood of AI. But while data makes AI run, it also represents the greatest potential vulnerability. So-called ‘shadow data’ is data that falls outside of defined repositories in your environment. It's an endemic problem in many organizations, contributing to data leakage and associated regulatory compliance risk.

As you introduce AI to your business, you need to know the answers to the following questions on data:

  • Do you have a named data officer?
  • Do you have a published data strategy?
  • Do you know where all your data is?
  • Who, or what, has access to your data, including APIs and AI tools?
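Answering "do you know where all your data is?" can start with a lightweight sweep that flags data files sitting outside your approved repositories. A minimal sketch: the approved paths and file extensions are assumptions you would replace with your own inventory:

```python
# Minimal shadow-data sweep: flag data files found outside approved
# repositories. Roots and extensions below are illustrative assumptions.
from pathlib import Path

APPROVED_ROOTS = [Path("/srv/data-lake"), Path("/srv/warehouse")]  # hypothetical
DATA_EXTENSIONS = {".csv", ".xlsx", ".db", ".parquet", ".json"}

def is_approved(path: Path) -> bool:
    """True if the file sits under a sanctioned data repository."""
    return any(path.is_relative_to(root) for root in APPROVED_ROOTS)

def find_shadow_data(root: Path) -> list[Path]:
    """Walk a directory tree and return data files outside approved locations."""
    return [
        p for p in root.rglob("*")
        if p.is_file() and p.suffix.lower() in DATA_EXTENSIONS and not is_approved(p)
    ]
```

A real program would cover cloud object stores and SaaS exports as well as file shares, but the principle is the same: every data file should trace back to a sanctioned repository.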

There are many useful frameworks covering data and AI governance you can turn to for initial guidance, including the US National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework (NIST AI RMF), ISO/IEC 42001, the European Union’s AI Act and the UK government’s AI Regulatory Principles.

Secure system prompts and critical components of AI models

Recent cyber incidents, such as the DeepSeek data exposure, show that securely managing system prompts helps prevent unauthorized access. Your business needs to implement strong access controls, ensuring only authorized personnel have access to system prompts and critical components. Use multi-factor authentication (MFA) and role-based access control (RBAC) to minimize the risks.

By encrypting all sensitive data, including system prompts, you can add a further layer of security, making it more difficult for attackers to exploit vulnerabilities.
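At its core, an RBAC gate on system prompts is an explicit allow-list consulted before the prompt is ever served, combined with an MFA check. The sketch below illustrates the idea; the roles, users and MFA flag are invented for illustration, and a production system would delegate both to an identity provider rather than hard-coding policy:

```python
# Minimal RBAC + MFA gate for system-prompt access. The policy, roles
# and mfa_verified flag are illustrative; real systems would use an IdP.
from dataclasses import dataclass, field

# Which roles may read which resource classes (illustrative policy).
POLICY = {
    "system_prompt": {"ml-engineer", "security-admin"},
    "model_weights": {"security-admin"},
}

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)
    mfa_verified: bool = False  # set True only after a second factor succeeds

def can_access(user: User, resource: str) -> bool:
    """Allow access only with MFA plus a role on the resource allow-list."""
    allowed_roles = POLICY.get(resource, set())
    return user.mfa_verified and bool(user.roles & allowed_roles)
```

Denying by default (an unknown resource has an empty allow-list) means a newly added AI component is inaccessible until someone deliberately grants a role, which is the safer failure mode.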

Commit to continuous monitoring and rapid response

Regularly auditing your AI systems and reviewing artifacts such as access logs for unusual activity puts you in a good position to identify and address security issues early.
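A log review can start with simple heuristics: flag off-hours access and unusually high request volumes, then investigate the flagged users. A minimal sketch, where the log format, thresholds and usernames are all illustrative assumptions:

```python
# Simple access-log review: flag off-hours access and high request
# volumes. Log format, thresholds and names are illustrative assumptions.
from collections import Counter
from datetime import datetime

OFF_HOURS = range(0, 6)          # 00:00-05:59 counts as unusual
MAX_REQUESTS_PER_USER = 100      # per reviewed window

def flag_anomalies(entries: list) -> set:
    """entries are (user, ISO timestamp) pairs; return users worth a closer look."""
    flagged = set()
    # Flag anyone active during off-hours.
    for user, ts in entries:
        if datetime.fromisoformat(ts).hour in OFF_HOURS:
            flagged.add(user)
    # Flag anyone exceeding the request-volume threshold.
    counts = Counter(user for user, _ in entries)
    flagged.update(u for u, n in counts.items() if n > MAX_REQUESTS_PER_USER)
    return flagged
```

Heuristics like these generate leads, not verdicts; the point is to make "monitor for unusual activity" a routine, scripted step rather than an occasional manual scan.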

Create and empower a dedicated team to respond to security incidents, ensuring they have clear triggers for taking action. Train this team to quickly identify, isolate and address vulnerabilities, and give them clear protocols for communicating with other departments and external stakeholders.

You should also conduct regular security assessments, audits and penetration testing to make sure your systems are resilient against emerging AI threats.

Review and update your insurance to cover AI-driven threats

Emerging cyber threats mean you need to take a fresh look at your insurance coverage. Check you are covered for security failures, privacy breaches and business interruption losses by conducting gap analyses to identify and close any shortfalls.

And if you are hit by an AI-related breach, be transparent

Thinking back to the DeepSeek incident, there’s one further lesson to bear in mind: if an AI-related breach impacts you, it’s better to be transparent and acknowledge the impact of any incident on your customers and supply chain partners.

Be ready to publicly communicate the steps you’re taking to mitigate the impact to maintain stakeholder trust. In the event of a security incident, communicate quickly and clearly with your users, providing them with detailed information about the incident, the steps you’re taking to address it and any actions they should take to protect themselves.

You should also be ready to provide regular updates throughout the incident response process, which reduces the risk of misinformation.

You can check the adequacy of your cybersecurity, AI governance and insurance coverage with Cyber Quantified.

For a deeper dive into your cyber risks and controls, get in touch with our cyber risk consulting specialists.

Contact


Seb Benford
Risk & Analytics