Protect Your Organization From Disinformation Campaigns

Prevent, detect and respond to disinformation campaigns with disinformation security techniques and technologies.

Disinformation campaigns lie at the crossroads of GenAI and viral media

AI and machine learning tools don’t only empower organizations to automate their processes. They also equip malicious actors with powerful tools to create content that feeds disinformation campaigns — focused attacks aimed at deceiving, misleading or confusing a group of people.

Already a top global threat, disinformation campaigns have the potential to go viral on social media and lead to direct corporate losses from fraud, boycotts and reputational damage.

Download the Top 10 Strategic Technology Trends for 2025

Learn how disinformation security and other technology trends align with your digital ambitions. Plus, how to integrate them into your strategic planning for long-term success.


Fight disinformation campaigns with dedicated techniques and security

Organizations are already suffering the effects of disinformation campaigns and are taking steps to combat them with dedicated techniques and technologies. Demand is growing so quickly that by 2028, 50% of enterprises will adopt products, services or features specifically to address disinformation security use cases, up from less than 5% in 2024.

The whys and hows of disinformation

Malicious actors have different goals for their disinformation campaigns. Some want to polarize a target audience. Others are hoping to steal customer information or disrupt business operations. Common tactics include:

  • Pushing out deepfakes

  • Spreading misinformation through social media and fake news websites

  • Using GenAI to create disinformation at scale — and spread it before organizations can respond

  • Crafting convincing phishing emails

  • Exploiting vulnerabilities in workforce collaboration tools and call centers

  • Leveraging malware to steal credentials

  • Initiating account takeovers

To fight these efforts, organizations need a holistic approach to lower risk, increase transparency and expand assurance capabilities.

Techniques and technologies to enable disinformation security

Disinformation security requires a cross-functional effort that unites technology, people and processes across executive leadership, security teams, public relations, marketing, finance, human resources, legal counsel and sales.

Alignment is critical because there are no silver-bullet technologies to fully secure any system or process. Instead, organizations should evaluate existing systems, workflows and controls for vulnerabilities related to disinformation attacks and then incorporate relevant mitigation features.

Examples of vulnerable processes and potential mitigations include the following:

Real-time communications. Protect workforce collaboration tools, call centers and mobile phones against synthetic media, like deepfakes.

Third-party platform content. Evaluate content originating outside of the organization for authenticity before taking action.

Claims validation. Monitor content submitted as evidence to support an application or claim for signs that it was artificially generated or manipulated using software tools.
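At the simplest end, a claims-intake pipeline can screen submitted images for traces that editing or provenance tools leave in file metadata. The Python sketch below uses an illustrative, deliberately non-exhaustive signature list; a match only flags a file for human review, and the absence of markers proves nothing.

```python
def manipulation_markers(image_bytes: bytes) -> list[str]:
    """Crude screen for claim-evidence images: look for byte signatures
    that common editing tools and provenance standards leave in
    JPEG/PNG metadata. Presence of a marker warrants manual review;
    absence of markers proves nothing."""
    # Illustrative signature list, not exhaustive.
    signatures = {
        b"Adobe Photoshop": "edited with Photoshop",
        b"GIMP": "edited with GIMP",
        b"xmp:CreatorTool": "XMP editing metadata present",
        b"c2pa": "C2PA provenance manifest present",
    }
    return [label for sig, label in signatures.items() if sig in image_bytes]
```

Dedicated disinformation security products go much further (pixel-level forensics, provenance verification), but the principle is the same: inspect the evidence artifact itself, not just the claim it supports.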

Identity verification. Protect against attempts to bypass biometric authentication using synthetic media, and against combinations of presentation attacks and injection attacks.

Phishing mitigation. Monitor for convincing GenAI-crafted emails that accurately imitate a brand’s identity or the tone of key personnel.
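One concrete control behind such monitoring is lookalike-domain detection: flag sender domains that closely resemble, but do not exactly match, a trusted domain. A minimal sketch using stdlib string similarity; the domain list and the 0.85 threshold are illustrative, and a production filter would also check authentication results (SPF/DKIM/DMARC) and homoglyph substitutions.

```python
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example.com", "example-bank.com"}  # illustrative allowlist

def lookalike_score(sender_domain: str) -> float:
    """Return similarity (0..1) to the closest trusted domain."""
    return max(
        SequenceMatcher(None, sender_domain, trusted).ratio()
        for trusted in TRUSTED_DOMAINS
    )

def is_suspicious(sender_domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely imitate, but do not match, a trusted domain."""
    if sender_domain in TRUSTED_DOMAINS:
        return False  # exact match: legitimate sender
    return lookalike_score(sender_domain) >= threshold
```

For instance, "examp1e.com" (digit 1 for the letter l) scores well above the threshold against "example.com" and would be flagged, while an unrelated domain would not.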

Account takeover prevention. Prevent adversaries from using malware to steal credentials and then bypass authentication controls.

Brand impersonation. Scan for attackers impersonating brands to trick customers into performing harmful actions that damage goodwill and reputation.

Social/mass media monitoring. Look for influence operations aimed at swaying public sentiment using harmful narratives.

Deep/dark web monitoring. Monitor for bad actors discussing targets and tactics or selling sensitive stolen data, such as credentials and identities.

Sentiment manipulation. Protect automated tools from being used to deliver fake engagement.
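A basic detector for this kind of fake engagement looks for many distinct accounts posting identical text inside a short time window. The sketch below assumes a simple (timestamp, account, text) post format; the 300-second window and 20-account minimum are illustrative thresholds, not recommended values.

```python
from collections import defaultdict

def coordinated_bursts(posts, window_seconds=300, min_accounts=20):
    """posts: iterable of (timestamp, account_id, text) tuples.
    Return the set of texts posted by at least `min_accounts` distinct
    accounts within any window of `window_seconds` -- a crude signal
    of coordinated inauthentic amplification."""
    by_text = defaultdict(list)  # text -> time-ordered (timestamp, account)
    for ts, account, text in sorted(posts):
        by_text[text].append((ts, account))

    flagged = set()
    for text, events in by_text.items():
        start = 0
        for end in range(len(events)):
            # shrink the window from the left until it spans <= window_seconds
            while events[end][0] - events[start][0] > window_seconds:
                start += 1
            accounts = {a for _, a in events[start:end + 1]}
            if len(accounts) >= min_accounts:
                flagged.add(text)
                break
    return flagged
```

Real narrative-intelligence tools cluster near-duplicate and paraphrased text rather than exact matches, but the underlying signal (volume from many accounts in little time) is the same.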

Take action against disinformation campaigns

GenAI provides opportunities for attackers to establish a persistent presence in the enterprise and evade detection. These attacks usually combine fake identities and data with polymorphic techniques to constantly mutate and hide the original malicious algorithm. To take action:

  • Assess the new GenAI threat landscape continuously, and prepare for GenAI-borne attacks on IAM trust infrastructure (such as identity impersonation) that can potentially bypass existing IAM controls. Ensure that current identity verification tools can mitigate new threats from fake identities and fake data.

  • Differentiate between vendors by focusing on:

    • Features outside of the core identity verification process, such as modern data verification including verifiable credentials

    • Connectivity to data affirmation sources, such as identity graphs and government issuing authorities

  • Seek assurance from vendors on their defense against deepfake attacks. Request information on their current experience with the issue, their detection capabilities, and their investment in a forward-looking roadmap to stay abreast of the challenges that deepfakes pose. Be suspicious of any vendors that do not proactively provide you with this information.

Learn more about how Gartner works with technical teams to execute efficiently and drive business results.

Disinformation campaigns FAQs

What is an example of a disinformation campaign?

An example of a disinformation campaign is malicious actors attempting to manipulate public opinion at critical times through social media influence operations or fake news sites. For example, malicious actors could launch a disinformation campaign with the goal of influencing the outcomes of an election.


How can emerging technologies help enable disinformation security?

Enterprises use emerging disinformation security technologies for content authenticity, narrative intelligence, identity assurance, fraud prevention, fact checking and brand reputation management.


How much money can businesses lose from disinformation campaigns?

Account takeovers are a common type of disinformation campaign. The strategy involves bad actors impersonating trusted users. Though takeovers of business email accounts accounted for only 2.4% of internet crime cases in 2023, the practice resulted in $2.9 billion in net business losses.
