Prevent, detect and respond to disinformation campaigns with disinformation security techniques and technologies.
By Dan Ayoub | September 25, 2024
AI and machine learning tools don’t only empower organizations to automate their processes. They also equip malicious actors with powerful tools to create content that feeds disinformation campaigns — focused attacks aimed at deceiving, misleading or confusing a group of people.
Already a top global threat, disinformation campaigns have the potential to go viral on social media and lead to direct corporate losses from fraud, boycotts and reputational damage.
Organizations are already suffering the effects of, and taking steps to combat, disinformation campaigns with dedicated techniques and technologies. Demand is growing so quickly that by 2028, 50% of enterprises will adopt products, services or features specifically to address disinformation security use cases, up from less than 5% in 2024.
Malicious actors have different goals for their disinformation campaigns. Some want to polarize a target audience. Others are hoping to steal customer information or disrupt business operations. Common tactics include:
Pushing out deepfakes
Spreading misinformation through social media and fake news websites
Using GenAI to create disinformation at scale — and spread it before organizations can respond
Crafting convincing phishing emails
Exploiting vulnerabilities in workforce collaboration tools and call centers
Leveraging malware to steal credentials
Initiating account takeovers
To fight these efforts, organizations need a holistic approach to lower risk, increase transparency and expand assurance capabilities.
Disinformation security requires a cross-functional effort that unites technology, people and processes across executive leadership, security teams, public relations, marketing, finance, human resources, legal counsel and sales.
Alignment is critical because there are no silver-bullet technologies to fully secure any system or process. Instead, organizations should evaluate existing systems, workflows and controls for vulnerabilities related to disinformation attacks and then incorporate relevant mitigation features.
Examples of vulnerable processes and potential mitigations include the following:
Real-time communications. Protect workforce collaboration tools, call centers and mobile phones against synthetic media, like deepfakes.
Third-party platform content. Evaluate content originating outside of the organization for authenticity before taking action.
Claims validation. Monitor content submitted as evidence to support an application or claim for signs that it was artificially generated or manipulated using software tools.
Identity verification. Protect against attempts to bypass biometric authentication using synthetic media, and against combinations of presentation attacks and injection attacks.
Phishing mitigation. Monitor for convincing GenAI-crafted emails that accurately imitate a brand’s identity or the tone of key personnel.
Account takeover prevention. Prevent adversaries from leveraging malware to steal credentials, then bypassing authentication controls.
Brand impersonation. Scan for attackers impersonating brands to trick customers into performing harmful actions impacting goodwill and reputation.
Social/mass media monitoring. Look for influence operations aimed at swaying public sentiment using harmful narratives.
Deep/dark web monitoring. Monitor bad actors discussing targets and tactics and selling sensitive stolen data such as credentials and identities.
Sentiment manipulation. Prevent automated tools from being used to deliver fake engagement that distorts sentiment signals.
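As one concrete illustration of the brand impersonation item above, here is a minimal sketch of lookalike-domain flagging. The protected domain, the observed domains and the similarity threshold are hypothetical assumptions, not Gartner guidance:

```python
from difflib import SequenceMatcher

# Hypothetical brand domain to protect; a placeholder, not a real target.
PROTECTED = "example-bank.com"

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two domain strings."""
    return SequenceMatcher(None, a, b).ratio()

def flag_lookalikes(observed_domains, protected=PROTECTED, threshold=0.8):
    """Return domains similar to, but not identical with, the brand domain."""
    return [
        d for d in observed_domains
        if d != protected and similarity(d, protected) >= threshold
    ]

suspects = flag_lookalikes([
    "example-bank.com",       # the legitimate domain, ignored
    "examp1e-bank.com",       # digit-for-letter substitution
    "example-banc.com",       # near-homophone swap
    "totally-unrelated.org",  # dissimilar, ignored
])
```

Production-grade brand-protection tools go well beyond string similarity, combining homoglyph tables, newly registered domain feeds and certificate transparency logs; this sketch only shows the core matching idea.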
GenAI provides opportunities for attackers to establish a persistent presence in the enterprise and evade detection. These attacks usually combine fake identities and data with polymorphic techniques to constantly mutate and hide the original malicious algorithm. To take action:
Assess the new GenAI threat landscape continuously, and prepare for GenAI-borne attacks on IAM trust infrastructure (such as identity impersonation) that can potentially bypass existing IAM controls. Ensure that current identity verification tools can mitigate new threats from fake identities and fake data.
Differentiate between vendors by focusing on:
Features outside of the core identity verification process, such as modern data verification including verifiable credentials
Connectivity to data affirmation sources, such as identity graphs and government issuing authorities
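The verifiable-credentials idea mentioned above can be illustrated with a toy integrity check: a relying party confirms that a credential’s claims are unchanged since issuance. Real verifiable credentials (such as the W3C Verifiable Credentials data model) use public-key signatures from an issuer; the shared-secret HMAC, field names and key below are simplifying assumptions for illustration only:

```python
import hashlib
import hmac
import json

# Placeholder shared secret; real issuers sign with a private key instead.
ISSUER_KEY = b"issuer-shared-secret"

def sign_credential(claims: dict, key: bytes = ISSUER_KEY) -> dict:
    """Attach a proof binding the claims to the issuer's key."""
    payload = json.dumps(claims, sort_keys=True).encode()
    proof = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict, key: bytes = ISSUER_KEY) -> bool:
    """Recompute the proof and compare in constant time."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])

cred = sign_credential({"name": "A. Person", "verified": True})

# A tampered copy: the claims change but the proof does not.
tampered = {"claims": {**cred["claims"], "name": "B. Actor"},
            "proof": cred["proof"]}
```

The point of the sketch is the verification step: any change to the claims invalidates the proof, which is what makes credentials usable as a data affirmation source.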
Seek assurance from vendors on their defense against deepfake attacks. Request information on their current experience with the issue, their detection capabilities, and their investment in a forward-looking roadmap to stay abreast of the challenges that deepfakes pose. Be suspicious of any vendors that do not proactively provide you with this information.
Read the Planning Guide for Cybersecurity Architects to access action plans for disinformation security and other key technology trends impacting cybersecurity.
An example of a disinformation campaign is malicious actors attempting to manipulate public opinion at a critical moment through social media influence operations or fake news sites, such as a campaign designed to influence the outcome of an election.
Enterprises use emerging disinformation security technologies for content authenticity, narrative intelligence, identity assurance, fraud prevention, fact checking and brand reputation.
Account takeovers are a common type of disinformation campaign. The strategy involves bad actors impersonating trusted users. Though takeovers of business email accounts accounted for only 2.4% of internet crime cases in 2023, the practice resulted in $2.9 billion in net business losses.
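One widely used account-takeover signal behind figures like these is an "impossible travel" check, which flags a login from a location the user could not plausibly have reached since the previous login. A minimal sketch, assuming login events carry a timestamp and coordinates, with an arbitrary speed threshold:

```python
import math
from datetime import datetime

# Assumed threshold: roughly the cruising speed of a commercial flight.
MAX_SPEED_KMH = 900

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    r = 6371  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_impossible_travel(prev, curr):
    """Each event is (datetime, latitude, longitude); flag implausible hops."""
    hours = (curr[0] - prev[0]).total_seconds() / 3600
    if hours <= 0:
        return True  # simultaneous logins from two places are suspect
    distance = haversine_km(prev[1], prev[2], curr[1], curr[2])
    return distance / hours > MAX_SPEED_KMH

login_ny = (datetime(2024, 9, 25, 9, 0), 40.71, -74.01)    # New York
login_ldn = (datetime(2024, 9, 25, 10, 0), 51.51, -0.13)   # London, 1h later
```

In practice this heuristic is only one input to a risk engine alongside device fingerprints, behavioral signals and credential-stuffing indicators, but it illustrates why stolen-credential logins often stand out.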
Attend a Conference
Experience IT Security and Risk Management conferences
With exclusive insight from Gartner experts on the latest trends, sessions curated for your role and unmatched peer networking, Gartner conferences help you accelerate your priorities.
Gartner Identity & Access Management Summit
Grapevine, TX