[Gartner SRM Summit 2025] Future of Security: Generative AI, a Double-Edged Sword of Opportunity and Threat
- Kenneth Nam
- Jun 14
- 5 min read

Generative AI: Innovation and Risk Shaping Security’s Future
Generative AI (Gen AI) is rapidly gaining traction in security. About 16% of organizations have deployed Gen AI in production security use cases, and another 20% are in pilot phases. Adoption is still early, but the technology's promise is immense, and so are the new risks it introduces. This article explores Gen AI's security innovations, key dangers, and technical countermeasures.
1. Innovative Uses of Generative AI
Gen AI is already applied across multiple security domains, with growing potential:
Code Analysis: used by 22% of companies
Vulnerability Detection & Remediation: used by 21%
User Behavior Analytics: used by 20%
Threat Hunting & Modeling: used by 18%
Incident Response: used by 16%

Emerging technologies: According to Gartner's Tech Radar, three stand out: Multimodal Gen AI, Agentic Gen AI, and Intelligent Simulation.
Multimodal Generative AI:
Processes inputs in multiple formats such as video, datasets, and code in addition to text, and produces outputs in various media.
Attackers already use multimodal Gen AI toolkits. On the defensive side, the same technology can generate diverse security artifacts, from interactive data visualizations and attack-timeline videos to expanded security dashboards and concise text summaries.
Agentic Generative AI:
Represents a new form of security orchestration and automation: autonomous AI “agents” that accept varied inputs and carry out tasks without fixed scripts.
Use cases include coordinating multiple agents for threat hunting and streamlining new-hire onboarding.
In security operations (a minimal triage sketch follows this list):
Incident Analysis: Automates Level 1 and 2 tasks such as code scanning, log parsing, and dynamic query generation, and recommends action plans so human analysts can focus on higher-value work.
Incident Response: Automatically reissues credentials, blacklists compromised accounts, and forces session terminations.
Exposure Management: Identifies and validates vulnerabilities, then provides remediation advice.
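
To make the incident-analysis pattern concrete, here is a minimal sketch assuming a generic chat-completion endpoint; the `llm` stub, the `parse_auth_log` helper, and the alert fields are illustrative, not any specific vendor's API.

```python
import json

def llm(prompt: str) -> str:
    """Stub for any chat-completion call (OpenAI-compatible, local model, etc.)."""
    raise NotImplementedError("wire this to your LLM endpoint")

def parse_auth_log(path: str) -> list[dict]:
    """Illustrative Level-1 task: pull failed-login events out of an auth log."""
    events = []
    with open(path) as f:
        for line in f:
            if "Failed password" in line:
                events.append({"raw": line.strip()})
    return events

def triage(alert: dict) -> dict:
    """Level-1/2 triage: gather evidence, ask the model for a plan, and
    return a recommendation for a human analyst to approve, not execute."""
    evidence = parse_auth_log(alert["log_path"])
    plan = llm(
        "You are a SOC triage assistant. Classify severity (low/medium/high) "
        "and propose next steps as JSON with keys 'severity' and 'actions'.\n"
        + json.dumps(evidence[:50])
    )
    return {"alert_id": alert["id"], "recommendation": json.loads(plan)}
```

The key design choice is that the agent produces a recommendation for an analyst to approve rather than acting unilaterally, which matters given the agentic risks discussed in the next section.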
Intelligent Simulation:
Derived from the manufacturing “digital twin” concept, this technology creates a virtual environment mirroring your real infrastructure to run penetration tests, perform vulnerability assessments, and validate the effectiveness of new security tools.
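
As a rough sketch of the digital-twin idea (not a reference implementation), the following assumes Docker and the `docker` Python SDK: it spins up a disposable replica of a web tier from a copied config so that scans run against the twin instead of production. The image tag and paths are illustrative.

```python
import docker  # pip install docker

client = docker.from_env()

# Spin up a disposable replica ("twin") of a production web tier.
# The mounted config is a *copy* of the real one; paths are illustrative.
twin = client.containers.run(
    "nginx:1.25",
    volumes={"/tmp/twin/nginx.conf": {"bind": "/etc/nginx/nginx.conf",
                                      "mode": "ro"}},
    ports={"80/tcp": 8080},
    detach=True,
)
try:
    # Point scanners, fuzzers, or candidate security tools at
    # http://localhost:8080 here instead of at production.
    pass
finally:
    twin.remove(force=True)  # tear the twin down after the assessment
```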
2. Risks of Generative AI and Attacker Use
Generative AI offers powerful capabilities for security, but it also introduces new dangers. The most significant risks today are data loss and faulty or malicious outputs.
Data Loss Risks
Prompt-based leakage: Employees may inadvertently feed sensitive code samples or project details into public large language models (LLMs); a minimal outbound-scan sketch follows this list.
Exfiltration of information: If an attacker gains access to an internal Gen AI tool (e.g., Copilot), it may be treated as an insider and used to harvest accessible data or leverage non-sensitive information to escalate privileges.
Unauthorized retrieval: Users might accidentally query and retrieve sensitive data they shouldn’t see, driving organizations to invest heavily in data-store protections and access controls before deploying Gen AI.
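
A common first control against prompt-based leakage is an outbound scan of prompts before they leave for a public LLM. The sketch below is a minimal regex-based example; the secret patterns and the hard-block policy are illustrative assumptions, not a complete DLP product.

```python
import re

# Illustrative patterns for secrets that should never leave the network.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key":    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "internal_host":  re.compile(r"\b[\w-]+\.corp\.example\.com\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in the prompt."""
    return [name for name, rx in SECRET_PATTERNS.items() if rx.search(prompt)]

def send_prompt(prompt: str) -> str:
    hits = scan_prompt(prompt)
    if hits:
        # Block (or redact) instead of forwarding to the public model.
        raise PermissionError("prompt blocked, matched: " + ", ".join(hits))
    raise NotImplementedError("forward the clean prompt to the LLM provider here")
```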
Bypassing Guardrails
Guardrails are the automated policy-enforcement barriers that keep an AI model within preset rules. Jailbreak tools can evade public LLM safety filters about 60% of the time, enabling attackers to generate malicious code or phishing content.
Attacker Use of Generative AI:
Malware development: LLMs can write sophisticated malicious code, and the built-in safety filters of commercial models have repeatedly been shown to be bypassable.
Phishing and misinformation: Tools like WormGPT use LLMs to craft highly persuasive phishing messages, deepfakes, fake news articles, or modified websites that damage corporate reputation.
Automated vulnerability scanning and exploitation: In white-hat tests, autonomous LLM agents found and exploited vulnerabilities on 9 of 10 target websites, demonstrating AI’s potential to dramatically boost attacker productivity.
New risks from agentic AI: Autonomous AI agents can reconfigure security environments or issue unauthorized credentials. If their training data is breached, they can leak vast amounts of sensitive information, making risk management exponentially harder as architectures and processes grow more complex.

3. Technical Countermeasures for Generative AI Risks
To address these dangers, Gartner highlights two complementary strategies: AI TRiSM (Trust, Risk and Security Management) and Cyber Deterrence.
AI TRiSM Framework
Infrastructure Hardening
Secure the compute, storage, and network layers that host LLMs, since these models run on your existing infrastructure stack
Data Governance
Synthetic Data: Train models on artificially generated datasets that mimic real data without exposing sensitive information
Homomorphic Encryption: Enable computation on encrypted data so you can process inputs while keeping them confidential
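
As a toy illustration of computing on encrypted data, the sketch below uses the open-source `phe` (python-paillier) library, which supports additively homomorphic operations. Production workloads on model inputs require heavier fully homomorphic schemes, so treat this as a sketch of the concept only.

```python
from phe import paillier  # pip install phe (python-paillier)

# The key pair stays with the data owner; only the public key and
# ciphertexts are shared with the processing side.
public_key, private_key = paillier.generate_paillier_keypair()

# Sensitive inputs are encrypted before leaving the owner's boundary.
readings = [12.5, 7.25, 30.0]
encrypted = [public_key.encrypt(x) for x in readings]

# The processor computes on ciphertexts without ever seeing plaintext:
# Paillier supports addition and multiplication by a plain scalar.
encrypted_sum = sum(encrypted[1:], encrypted[0])
encrypted_scaled = encrypted_sum * 0.5

# Only the key owner can decrypt the result.
print(private_key.decrypt(encrypted_scaled))  # 24.875
```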
AI Runtime Inspection
Place input filters on LLM prompt interfaces, acting like a firewall or web application firewall, to detect and block malicious or out-of-policy requests such as guardrail-bypass or denial-of-service attempts
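
To make the firewall analogy concrete, here is a minimal inline prompt inspector. Real runtime-inspection products use trained classifiers and rate limiting rather than the illustrative keyword rules and size cap shown here.

```python
import re

# Illustrative deny rules only; production inspectors rely on trained
# classifiers, anomaly scoring, and rate limiting, not keyword lists.
DENY_RULES = [
    (re.compile(r"ignore (all )?previous instructions", re.I), "guardrail bypass"),
    (re.compile(r"\bjailbreak\b", re.I), "jailbreak attempt"),
]
MAX_PROMPT_CHARS = 8_000  # crude guard against oversized / DoS prompts

def inspect(prompt: str) -> None:
    """Raise if the prompt violates policy; otherwise let it through."""
    if len(prompt) > MAX_PROMPT_CHARS:
        raise ValueError("rejected: oversized prompt (possible DoS)")
    for rx, reason in DENY_RULES:
        if rx.search(prompt):
            raise ValueError(f"rejected: {reason}")
```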
AI Governance Tools
Discover, test, and continuously monitor every LLM deployment to ensure consistent patching, version control, and policy enforcement
Application and Data Security and Privacy
Apply traditional security best practices, including access controls, encryption at rest and in transit, and audit logging, across all AI components and data flows

Cyber Deterrence and Preemptive Cybersecurity Capabilities:
Traditional cybersecurity assumes attacks will occur and focuses on responding to alerts. Cyber deterrence aims instead to change attacker behavior before an attack even begins.
The goal shifts from handling alerts to reducing the number of alerts by preventing malicious activity in the first place.
Undermining attacker profitability, exposing their tools and tactics, disrupting their operations, or denying them access makes it far more difficult and costly for them to carry out an attack.
Key use cases include:
Automated moving-target defense: dynamically rotate IP addresses or alter user-interface elements so attackers cannot reliably map your environment
Automated exposure management and attack simulation: use automated tools to continuously assess your environment and predict likely attacker paths
Predictive threat intelligence: monitor activities such as LinkedIn scraping or targeted messages to development teams in order to spot reconnaissance before exploitation
Advanced cyber deception: deploy decoys, honeypots, fake files, and dummy user identities to confuse adversaries and trigger early detection (a tiny decoy-listener sketch appears after this list)
Effective defense: this preemptive security approach is considered the only strategy capable of keeping pace with AI-driven attacks, which outstrip traditional human-centric reactive methods.
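
As one concrete flavor of the cyber-deception item above, the sketch below runs a tiny TCP decoy that nothing legitimate should ever touch, so any connection is a high-signal early-warning event. The port, banner, and print-based alert are illustrative stand-ins for a real SIEM integration.

```python
import datetime
import socket

DECOY_PORT = 2222  # unused port dressed up as SSH; illustrative

def run_decoy() -> None:
    """Tiny TCP decoy: no legitimate client should ever connect here,
    so every connection is treated as reconnaissance."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", DECOY_PORT))
    srv.listen()
    while True:
        conn, (ip, port) = srv.accept()
        conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # plausible-looking banner
        conn.close()
        # Replace this print with a SIEM or webhook alert in practice.
        print(f"{datetime.datetime.now().isoformat()} decoy hit from {ip}:{port}")

if __name__ == "__main__":
    run_decoy()
```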

Recommendations
Generative AI offers groundbreaking opportunities in security but also introduces new types of risk. To respond effectively, organizations should begin these preparations:
Form a specialist team: Establish a dedicated group of experts to assess and manage the risks associated with Generative AI.
Strengthen existing security posture: Before deploying any Gen AI tools, bolster internal defenses, secure data repositories and enforce strict access controls.
Advocate for preemptive security features: Engage vendors early to ensure their AI products include proactive security capabilities by design.
Ultimately, defending against the velocity of Gen AI-driven attacks requires a comprehensive security strategy. Continuous research and up-to-date intelligence are essential to deepen understanding of the evolving threat landscape.
Conclusion: Generative AI as a Double-Edged Sword
Generative AI (Gen AI) brings revolutionary potential to security, yet it carries equal measures of opportunity and peril. Recognizing both its bright and dark sides, and building an all-encompassing security framework to harness its benefits while mitigating its risks, is absolutely critical.
Author - Kenneth Nam, Threat Analyst | PAGO Networks