Cybersecurity and AI: Enabling Security While Managing Risk

Like many disruptive technologies that came before it, AI’s hype and promise is balanced by trepidation and risk. 

Cut through the hype to cut down on buyer’s remorse

Like any highly desirable high-ticket item, AI is not always what it’s built up to be. For example, although there is a proliferation of AI agents, providers tend to proclaim every new automation feature or tool an “AI agent.” Though some agents do use generative AI to plan and take autonomous actions, many simply create more confusion for cybersecurity leaders. The more you know, the better equipped you’ll be to avoid confusion while reaping the benefits of your purchase.

For stronger, faster AI outcomes, use Gartner’s proprietary AI Use Case Insights tool to explore, evaluate and prioritize over 500 proven AI use cases tailored to your industry.

AI in Cybersecurity: Minimize Risks and Maximize Impact

Discover strategies to balance innovation and risk, ensuring a secure, AI-powered future for your organization.

By clicking the "Continue" button, you are agreeing to the Gartner Terms of Use and Privacy Policy.

Contact Information

All fields are required.

Company/Organization Information

All fields are required.

Optional

With great risk comes great reward ─ and regulation

The EU AI Act will require cybersecurity leaders to have a deep understanding of the AI systems in use in their enterprise. This may be difficult because the number of AI systems embedded in enterprises have increased by an order of magnitude. Cybersecurity leaders should immediately start discovering and cataloging AI-enabled capabilities ahead of mandatory risk assessments.

Don’t expect an AI grail

GenAI’s initial hype tantalized many organizations into rushing in without much forethought. Such lack of planning is risky, because aside from a few odd cases, the reward never matches the hype. Often what follows are months of trial and error followed by a retroactive assessment, financial write-off and sometimes a sacrificial executive departure (depending on the size of the write-off). The larger impact comes later in the form of opportunities lost with the delayed rollout of generative capabilities.

GenAI hype is bound to bring on disillusions in the short term as external pressure to increase security operation productivity collide with low-maturity features and fragmented workflows.

Symptoms of ill-prepared GenAI integration will include:

  • A lack of metrics to measure GenAI benefits, along with premium prices for GenAI add-ons

  • Difficulty integrating AI assistants in collaboration workflows in the security operation teams or with a security operation provider

  • “Prompt fatigue”: Too many tools offering an interactive interface to query about threats and incidents

To counteract the distortion that comes with overblown AI claims, conduct roadmap planning. Factor in all possibilities, balancing cybersecurity realities with GenAI hopes:

  • Take a multiyear approach: Start with application security and security operations, then progressively integrate GenAI offerings when they augment security workflows.

  • Evaluate efficiency gains in tandem with the cost of GenAI implementations. Refine detection and productivity metrics to account for new GenAI cybersecurity features.

  • Prioritize AI augmentation of the workforce (not just task automation). Plan for changes in long-term skill requirements due to GenAI.

  • Account for privacy challenges and balance the expected benefits with the risks of adopting GenAI in security.

Face the risks by preparing your best defense

GenAI is just the latest in a series of technologies that have promised huge boosts in productivity fueled by automation of tasks. Past attempts to fully automate complex security activities have rarely been successful and can be a wasteful distraction.

Although there are benefits to using GenAI models and third-party large language models (LLM), there are also unique user risks that require new security practices. These fall into three categories:

  1. Content anomaly detection

    • Unacceptable or malicious use

    • Unmanaged enterprise content transmitted through prompts, compromising confidential data inputs

    • Hallucinations or inaccurate, illegal, copyright-infringing and unwanted or unintended outputs that compromise decision making or cause brand damage

  2. Data protection

    • Data leakage, integrity and compromised confidentiality of content and user data in hosted vendor environments

    • Inability to govern privacy and data protection policies in externally hosted environments

    • Difficulty conducting privacy impact assessments and complying with regional regulations due to the black box nature of third-party models

  3. AI application security

    • Adversarial prompting attacks, including business logic abuses and direct and indirect prompt injections

    • Vector database attacks

    • Hacker access to model states and parameters

Externally-hosted LLM and other GenAI models increase these risks, as enterprises cannot directly control their application processes and data handling and storage. However, there is also risk in on-premises models hosted by the enterprise — especially when security and risk controls are lacking. These three categories of risk confront users during runtime of AI applications and models.

Explore the possibilities with AI agents

The popularity of custom-built AI agents is introducing new attack surfaces and risks that demand enterprises adopt secure development and runtime security practices. As AI agents’ actions are based on a probabilistic model, they are, by nature, less predictable, making risk management less straightforward. To reap the benefits of AI agents while heading off the uncertainties, cybersecurity leaders should:

  • Perform AI agent discovery. Without knowing what agents are running, it is impossible to secure them. Identify AI agents that are either active but unused or built without permission and in conflict with enterprise security policies. If the organization already has a centralized AI governance platform that can inventory and manage AI projects or models, gain access for visibility into the AI initiatives already identified in the organization.

  • Enforce access control. Most agentic security failures will likely be caused by access control issues. Cybersecurity leaders must govern agents’ access to enterprise and third-party systems and resources by adopting certain RPA security principles (e.g., credential management). Provide AI agents with their own credentials for accessing systems and resources to ensure that credentials may be rotated often without affecting the human user experience.

  • Adapt your development life cycle. It’s not necessary to redesign the development life cycle for AI agents, but certain best practices are key. First, ensure version control for AI agent code and assign ownership for AI agent codes to ensure accountability. Second, never store secrets in code or source code repositories. Third, in addition to tracking vulnerabilities in the AI agent framework, evaluate whether the framework provides security for agent orchestration and security measures that make it easier for developers to build security logic into the app. Fourth, use secure, well-known coding best practices.

  • Enforce runtime controls. AI-agent-specific runtime security controls provide near-real-time controls against new forms of attacks targeting AI applications (e.g., prompt injections, jailbreaks).

Like it or not, regulation is coming

With new technologies — especially those like AI that represent high potential for risk — come new regulations designed to rein in potential damage and consequences. By 2030, 50% of the world’s population will be covered under modern AI regulations in one form or another.

Although the level of regulation varies by region, all enterprises with a global footprint must understand and comply with the laws of each land they do business in. The EU has been the most proactive and comprehensive in establishing AI regulations. The EU AI Act is a purpose-based (vs. principles-based) approach to regulating AI that attempts to balance AI value creation with a tiered risk approach to mitigating AI risk.

In the context of the EU AI Act:

  • The first tier (minimal risk) applies to uses of AI that pose little to no threat to fundamental rights or safety and are therefore mostly unregulated.

  • The second tier applies to AI that poses limited risk and is subject to transparency requirements.

  • The third tier addresses high AI risks that require stricter compliance measures, including mandatory risk assessments. The Act introduces requirements for cybersecurity that focus on adding resilience to high-risk AI systems.

  • Under the fourth risk tier, certain AI models are prohibited (e.g., AI systems for social scoring, behavioral manipulation or detecting emotions in schools). Deploying such systems is prohibited and triggers penalties of up to €35m or 7% of annual global revenue, whichever is higher.

Unlike the General Data Protection Regulation (GDPR), which passed in 2016 and came into effect all at once in 2018, the staggered enforcement of the Act encourages organizations to start working toward compliance in phases rather than racing to a single deadline. This also allows the market surveillance authorities that will regulate the Act to ramp up their support gradually.

Cybersecurity FAQs

What is an AI agent?

Gartner defines AI agents as “autonomous or semiautonomous software entities that use AI techniques to perceive, make decisions, take actions and achieve goals in their digital or physical environments.”


What is the EU AI Act?

The EU AI Act attempts to balance AI value creation with a tiered risk approach to AI risk mitigation. This approach follows a simple yet comprehensive formula. The Act excludes AI systems used for scientific research and in the military sector. It also imposes certain mandatory actions such as AI literacy training and labeling generative AI output. The AI Act categorizes AI systems into four risk levels, each subject to increasing degrees of scrutiny aligned with the impact on fundamental rights and safety of EU residents.

Attend a Conference

Accelerate growth with Gartner conferences

Gain exclusive insight on the latest trends, receive one-on-one guidance from a Gartner expert, network with a community of your peers and leave ready to tackle your mission-critical priorities.

Drive stronger performance on your mission-critical priorities.