Artificial intelligence and cybersecurity: strategic convergence or a new high-risk grey zone?

Artificial intelligence is now embedded in cybersecurity systems—an engine of efficiency, but also a growing source of instability. As CISOs are called on to anticipate, arbitrate, and govern, this convergence demands constant vigilance. Here’s a look at the key challenges, and how to regain control of transformation rather than be overwhelmed by it.
In less than five years, AI has become inseparable from cybersecurity. Artificial intelligence (AI) has rapidly become a cornerstone of modern cybersecurity. A powerful detection tool and an emerging threat vector, AI is redefining the role of the CISO—who must now balance technological promises with new blind spots.
AI at the core of security operations: toward predictive, automated detection
Modern monitoring solutions—XDR, SIEM, NDR—now incorporate machine learning models that can prioritize alerts, identify anomalous behaviors, and accelerate root cause analysis.
In the most advanced Security Operations Centers (SOCs), certain platforms even trigger automated response scenarios (dynamic playbooks) based on predictive analytics: machine isolation, network quarantine, or enriched ticket generation for CERT teams.
This level of automation enables organizations to scale their response to threats—provided it rests on solid foundations:
- High-quality data collection (logs, netflow, EDR, etc.)
- Robust, documented, and interpretable models (supervised, unsupervised, hybrid)
- Seamless integration with operational and human workflows
Absent these safeguards, AI can produce excessive operational noise (false positives), or worse, obscure critical weak signals—especially in slow-burning attacks like living-off-the-land or data exfiltration operations.
Cybercriminals are leveraging AI, too
APT groups and other cybercriminal actors have already embraced AI at scale. From generating hyper-personalized phishing content to deploying polymorphic malware and scraping data for social engineering attacks, generative models are now part of their arsenal.
AI models themselves are also becoming targets.
Some attackers attempt to inject malicious prompts into natural language processing pipelines. Others poison training datasets—particularly within MLOps pipelines—or exploit inference mechanisms to extract sensitive information by reverse-engineering model behavior.
The MITRE ATLAS framework now documents these threats, classifying them by impact: evasion, exfiltration, influence, or performance degradation.
Meanwhile, regulatory scrutiny is tightening. The upcoming EU AI Act will impose heightened transparency requirements on “high-risk systems,” including mandates for logging, auditability, and explainability of automated decisions.
A new mandate for the CISO
In this rapidly evolving landscape, AI governance is becoming a core CISO responsibility. It’s no longer just about securing a tool, but about understanding and managing the organization’s full dependency on AI technologies:
- Mapping dependencies on third-party models (external LLM APIs, embedded cloud services)
- Managing model review processes (validation, performance metrics, concept drift)
- Documenting automated decision-making in incident response workflows
This expanded scope also demands a new skill set for security operations teams: the ability to interpret model outputs, detect anomalous behavior in AI, and correlate AI-generated alerts with traditional threat-hunting techniques.
Securing inference pipelines is becoming mission-critical—just as managing the indirect exposure of models to internal sensitive data is gaining urgency.
Security by design: a prerequisite for responsible AI
To ensure AI doesn’t become a chaos multiplier, organizations must adopt a secure-by-design approach. That means never placing blind trust in model outputs, enforcing human oversight on sensitive decisions, and implementing continuous monitoring for model performance and behavioral drift.
Crucially, systems must offer explainability and rollback mechanisms for when models go wrong. Yet such transparency remains elusive with certain deep learning architectures, whose internal logic is still largely opaque.
Several frameworks provide useful foundations for a secure approach to AI:
- The NIST AI Risk Management Framework
- The ENISA recommendations on AI and cybersecurity
- Internal governance guidelines aligned with sector-specific risks
However, adoption of these frameworks remains highly uneven across organizations.
Restoring control to security leaders
AI is both a powerful ally in threat detection and a driver of growing complexity. For CISOs, the goal is not to take sides “for or against” AI—but to regain control over how it is integrated.
By establishing clear governance principles, enhancing system auditability, and upskilling teams to collaborate effectively with AI, organizations can turn a volatile risk into a long-term competitive edge.
Further Reading
- MITRE ATLAS
- ENISA Threat Landscape 2024 – AI & Cybersecurity section
- NIST AI Risk Management Framework
- Gartner – “Market Guide for AI in Security Operations” (2024)
- Statista – “AI in Cybersecurity: Market Size 2024–2030”
