Why should organisations rethink cyber risk in the age of artificial intelligence?
Human error has long been one of the biggest contributors to cyber security incidents. Mis-sent emails, weak passwords and poor judgement under pressure continue to play a major role in data breaches across all sectors.
However, as artificial intelligence (AI) becomes embedded into everyday business tools, from email assistants and chatbots to coding platforms and decision-support systems, the nature of that risk is changing.
AI doesn’t remove human error. It accelerates it, scales it, and often hides it behind a false sense of confidence.
According to IBM’s Cost of a Data Breach Report 2025, human error is responsible for 26% of breaches, with IT failures accounting for a further 23%. As generative AI adoption rapidly increases, organisations now face a new reality: AI is amplifying the impact of everyday mistakes, and attackers are exploiting it faster than many businesses can respond.
This article explores how AI magnifies human risk, what this means for cyber security and compliance, and learn how ISO 27001 and ISO 42001 help organisations manage AI security and compliance with IMSM.
Human error in the AI era: what’s really changed?
Shadow AI and uncontrolled data exposure
One of the most significant emerging risks is the rise of “shadow AI”, where employees are using generative AI tools without formal approval, oversight or security controls.
Research shows a dramatic increase in sensitive business data being pasted into AI tools via personal accounts. This includes customer information, contracts, internal reports and even source code. Once data enters a public AI platform, control is effectively lost.
For organisations subject to GDPR, ISO 27001 or sector-specific regulations, this creates serious compliance and reputational risks. In many cases, employees are not acting maliciously, they simply don’t understand the implications of sharing data with AI tools.
Without clear AI governance, shadow AI quickly becomes a blind spot in your information security management system (ISMS).
Over-reliance on AI and misplaced confidence
Another growing concern is over-trust in AI outputs.
Multiple studies show that while AI can increase productivity, it often introduces hidden risks. Developers using AI coding assistants, for example, may produce code more quickly but not necessarily more securely. In fact, a significant proportion of AI-generated code samples contain security vulnerabilities.
The most dangerous factor is confidence. When AI presents information fluently and authoritatively, users are more likely to accept it without verification. This “automation bias” means errors can move faster through systems, remain undetected for longer, and become far more costly to fix.
From a cyber security perspective, speed without scrutiny is a risk multiplier. IMSM can talk you through how ISO 27001 improves and validates your cyber security standards, making your operations safe, secure, and compliant.
AI-powered phishing, deepfakes and social engineering
Phishing attacks have evolved and AI is accelerating that evolution.
AI-generated phishing emails are now:
- Perfectly written
- Highly personalised
- Free from the spelling and grammar errors employees were trained to spot
Attackers are also increasingly using deepfake audio and video to impersonate executives, suppliers or colleagues. AI-driven Business Email Compromise (BEC) attacks are becoming more convincing and harder to detect, especially in fast-moving or high-pressure environments.
When employees are overloaded and time-poor, even well-trained staff can be caught out. Protect yourself with the only global AI standard, ISO 42001.
Prompt injection is the “new phishing”
AI introduces entirely new attack surfaces, one of the most important being prompt injection.
Prompt injection occurs when hidden or malicious instructions are embedded in content that an AI system processes, causing it to leak data, override safeguards or produce harmful outputs. The user may not even realise anything has gone wrong.
The OWASP Top 10 for Large Language Models ranks prompt injection as the number one AI security risk, describing it as the next evolution of social engineering.
Just as organisations once had to train staff not to click suspicious links, they must now teach them to question AI outputs and understand how AI can be manipulated. Our ISO Specialists can help answer any questions about how ISO 42001 can help protect organisations from these growing concerns.
Can you reduce AI-driven cyber risk with a standards-based approach?
The good news is that these risks are manageable, provided organisations act early and take a structured approach.
International standards such as ISO 27001 (Information Security Management) and ISO 42001 (AI Management Systems), alongside the UK’s AI Cyber Security Code of Practice, provide clear guidance for controlling AI-related risk.
Key priorities for organisations
1. AI governance and policy
- Define approved AI tools and acceptable use cases
- Assign clear roles and responsibilities
- Integrate AI risk into existing ISMS and risk registers
2. Data loss prevention (DLP)
- Prevent sensitive data from being entered into AI tools
- Apply redaction, masking and watermarking by default
- Monitor and log AI-related data flows
3. Secure AI development and deployment
- Apply secure-by-design principles to AI systems
- Conduct threat modelling and risk assessments
- Scan AI-generated code and outputs before use
4. Human-centric training
- Update security awareness training to include:
- AI-enabled phishing
- Deepfakes
- Prompt injection
- Reinforce a “trust but verify” culture
- Get ISO 27001 and ISO 42001 certified
5. Oversight and verification
- Require additional approval for high-risk, AI-influenced decisions
- Maintain human accountability for AI-assisted actions
Why ISO standards matter more than ever
AI security is more than a technical issue, it now demonstrates as a governance, risk and compliance challenge.
ISO standards can offer essential stability:
- ISO 27001 helps organisations systematically manage information security risks, including those introduced by AI.
- ISO 42001, the world’s first AI management system standard, provides a framework for responsible, secure and ethical AI use.
Together, they enable organisations to:
- Demonstrate due diligence
- Reduce regulatory and reputational risk
- Build trust with customers, partners and regulators
- Scale AI safely and responsibly
At IMSM, we support organisations at every stage of their ISO journey, from gap analysis and implementation to certification and ongoing improvement.
Conclusion: AI doesn’t replace human error – it multiplies it
AI is reshaping the cyber threat landscape at speed. Shadow AI, insecure code, deepfake scams and prompt injection are no longer future risks but are happening now.
The organisations that succeed will be those that:
- Recognise AI as a business-wide risk, not just an IT issue
- Embed governance early
- Align with recognised standards such as ISO 27001 and ISO 42001
- Invest in both technology and people
Next step
If your organisation is already using AI (officially or unofficially) start by mapping where and how it is being used. From there, align your controls with ISO 27001, ISO 42001 and the UK AI Cyber Security Code of Practice to close the most critical gaps.
Frequently Asked Questions: AI, Human Error & Cyber Security
How does AI increase cyber security risk?
AI increases cyber security risk by accelerating human decision-making and amplifying mistakes. When employees over-trust AI outputs or use AI tools without governance, errors can spread faster, become harder to detect and lead to data breaches, insecure code or fraud.
What is “shadow AI” and why is it dangerous?
Shadow AI refers to the use of AI tools by employees without organisational approval or security controls. It is dangerous because sensitive or regulated data may be shared with public AI platforms, creating compliance, privacy and information security risks.
Can ISO 27001 help manage AI-related cyber risks?
Yes. ISO 27001 helps organisations identify, assess and manage information security risks, including those introduced by AI. It supports governance, access control, data protection and incident response for AI-enabled systems.
What is ISO 42001 and why is it important for AI governance?
ISO 42001 is the international standard for AI management systems. It helps organisations govern the safe, ethical and secure use of AI by addressing risk management, accountability, transparency and compliance throughout the AI lifecycle.
How does AI make phishing and fraud more effective?
AI enables attackers to create highly realistic phishing emails, deepfake voices and impersonation scams. These attacks are more convincing than traditional phishing, making it harder for employees to identify malicious activity.
What is prompt injection and why should businesses care?
Prompt injection is a technique where malicious instructions are hidden in content processed by AI systems. It can cause AI tools to leak sensitive data or bypass safeguards. OWASP ranks it as the top risk for large language models.
Is AI a replacement for human judgement in cyber security?
No. AI should support, not replace, human judgement. Over-reliance on AI can increase risk if outputs are not verified. Organisations should adopt a “trust but verify” approach and maintain human oversight for critical decisions.
What steps can organisations take to reduce AI-driven human error?
Key steps include:
- Defining approved AI use policies
- Implementing data loss prevention (DLP) controls
- Applying secure-by-design principles
- Training staff on AI-specific threats
- Using ISO 27001 and ISO 42001 as governance frameworks
Do small and medium-sized businesses need AI security governance?
Yes. AI-related risks affect organisations of all sizes. SMEs are often targeted because they lack formal controls. ISO standards provide scalable, practical frameworks suitable for small and medium-sized businesses.
How can IMSM help with AI and cyber security compliance?
IMSM supports organisations with ISO 27001 and ISO 42001 implementation, helping them manage AI-related cyber risks, meet regulatory expectations and demonstrate trust to customers and stakeholders.
Sources & further reading
- IBM Security – Cost of a Data Breach Report 2025
https://www.ibm.com/security/data-breach - Netskope – Cloud Threat Report
https://www.netskope.com/reports/cloud-threat-report - Veracode – State of Software Security
https://www.veracode.com/state-of-software-security - Apiiro – AI-Assisted Development Risk Research
https://www.apiiro.com/resources - UK National Cyber Security Centre (NCSC) – AI and Cyber Security
https://www.ncsc.gov.uk/collection/ai-and-cyber-security - Proofpoint – Threat Reports
https://www.proofpoint.com/uk/resources/threat-reports - OWASP – Top 10 for Large Language Models
https://genai.owasp.org/ - UK Government – AI Cyber Security Code of Practice
https://www.gov.uk/government/publications/ai-cyber-security-code-of-practice



