The impact of AI on daily life is becoming increasingly significant.  Falsified, altered, and/or manipulated content is on the rise.  Do you think AI detection tools will become as common as antivirus software, with everyone having one installed on their device to protect against misinformation?

893 views · 1 Upvote · 7 Comments
CISO/CPO & Adjunct Law Professor in Finance (non-banking) · 4 days ago

Doubtful. I can see two issues as a start.
1. Scope. To create an effective standalone AI fake filter, the scope of inputs is an issue.
a. Individuals receive signals such as email, chat/instant messages, images, voice calls, video, and live meetings. At present, multimodal AI tools' capabilities are limited, so accessing every form of signal an individual receives would be challenging at best.
b. Requiring an AI fake filter to screen everything coming in, for example to determine whether a phone call is from a hacker or a bank, seems useful initially, but logistically the tool would need to act as a prefilter for every phone call, email, and chat message. Because current AI produces statistical likelihoods rather than certainties, the tool could be overwhelmed by scoring the probability that each of these signals is fake.
c. It is also possible that one element of a communication will be real while another is fake, for example a video where the words are dubbed in. Should the AI fake detector rule such a communication fake, real, or a third category, mixed validity? (A sketch of this scoring problem follows this comment.)
2. Privacy. To screen all incoming signals for AI falsehoods, the tool must have access to every communication or signal the person receives.
a. If the tool runs on local AI, there is the potential of compromising that person's computer and gaining access to signals that were originally protected or encrypted but are decrypted and unprotected at the point where the individual reads them.
b. If the tool runs remotely, there is less risk of compromise at the person's workstation, but all of their communications (sensitive, personal, confidential, and legally protected) would need to be decrypted and then read, or at least accessed, by the external AI tool. An external fake filter would be a high-value target for hackers.
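
For concreteness, here is a minimal sketch of the scoring problem in 1b and 1c, assuming invented per-modality detectors, channel names, and thresholds (an illustration, not any real product's logic):

```python
# Toy model of a multimodal fake filter. Detectors emit statistical
# likelihoods, not certainties, so the combined verdict needs a third
# outcome for communications that are only partly synthetic.
from dataclasses import dataclass

@dataclass
class ModalityScore:
    modality: str   # e.g. "video", "audio", "text"
    p_fake: float   # detector's estimated probability the signal is synthetic

def classify(scores: list[ModalityScore],
             fake_threshold: float = 0.8,
             real_threshold: float = 0.2) -> str:
    # Thresholds are illustrative; a real filter would calibrate them
    # per channel and would still face borderline cases.
    if all(s.p_fake >= fake_threshold for s in scores):
        return "fake"
    if all(s.p_fake <= real_threshold for s in scores):
        return "real"
    return "mixed validity"

# A dubbed video: authentic footage, synthetic voice track.
print(classify([ModalityScore("video", 0.05), ModalityScore("audio", 0.92)]))
# -> mixed validity
```

Even this toy version hints at the logistics issue in 1b: every call, email, and chat message would have to pass through such scoring before the recipient sees it.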

CISO in Manufacturing · 18 days ago

AI tech is already showing up in cybersecurity tools. Notably, the reverse is also true: AI is used to identify documents that are AI generated. For example, most universities in the US are implementing such tools (anti-plagiarism, anti-AI-generation) as the new school year starts... (a rough sketch of one common detection heuristic follows).
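
One widely discussed heuristic behind such detectors is perplexity: text a language model finds unusually predictable is flagged as likely machine-generated. In the sketch below, `token_logprobs` is a hypothetical stand-in for whatever model a real detector queries, and the threshold is invented:

```python
# Perplexity heuristic for AI-text detection (sketch only).
import math

def token_logprobs(text: str) -> list[float]:
    """Hypothetical: return per-token log-probabilities from some LM."""
    raise NotImplementedError("plug in a real language model here")

def perplexity(text: str) -> float:
    lps = token_logprobs(text)
    return math.exp(-sum(lps) / len(lps))

def looks_ai_generated(text: str, threshold: float = 20.0) -> bool:
    # Low perplexity = the model found the text very predictable,
    # which correlates (imperfectly) with machine generation.
    return perplexity(text) < threshold
```

Such scores are probabilistic and can misfire on edited or formulaic human writing, which is why universities typically treat them as advisory rather than conclusive.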

CISO | Legal & Regulatory APAC Lead in Media · 19 days ago

AI detection will evolve into a built-in capability, a seamless component of browsers, messaging platforms, social media, and operating systems. Users won't want another product to install or manage; they'll expect trust to be embedded by default.

no title · 5 days ago

Excellent point! However, I don't think all browsers, social networks, messaging platforms, and so on will want to adopt 'anti-deepfake' technologies. At the very least, some won't adopt them precisely as a strategic positioning move in opposition to those that do. If that turns out to be true, then we won't be able to trust any content and will need to adopt a personal tool… We'll see what happens, but thank you for the valuable contributions.

Director of Engineering Security at Okta in Software · 21 days ago

In my view, AI detection will become common, but not as a separate tool like antivirus. I believe it will be built directly into browsers, messaging apps, social platforms, and even operating systems as an extension to existing tools. I do not think people will want another product to manage; they will expect trust to be built in by default. I believe the future is not about installing another layer of protection, it is about making content authenticity checks an invisible part of the digital experience (a minimal sketch of such an embedded check follows the reply below).

no title · 5 days ago

Excellent point! However, I don't think all browsers, social networks, and so on will want to adopt 'anti-deepfake' technologies. At the very least, some won't adopt them precisely as a strategic positioning move in opposition to those that do. If that turns out to be true, then we won't be able to trust any content and will need to adopt a personal tool… We'll see what happens; thank you for the valuable contributions.
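
To make the idea concrete, the embedded check described in the comment above might reduce to verifying a provenance manifest before rendering. A minimal sketch, assuming a shared-secret HMAC; real standards such as C2PA Content Credentials use certificate chains instead, and every name here is invented:

```python
# Simplified "trust embedded by default": a client (browser, chat app)
# silently verifying a signed provenance manifest attached to content.
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # assumption: stand-in for a publisher's key

def sign_content(content: bytes) -> str:
    """What a capture device or editor would attach at creation time."""
    return hmac.new(SIGNING_KEY, hashlib.sha256(content).digest(),
                    hashlib.sha256).hexdigest()

def verify_content(content: bytes, manifest_sig: str) -> bool:
    """What the platform runs silently before rendering the content."""
    return hmac.compare_digest(sign_content(content), manifest_sig)

photo = b"...image bytes..."
sig = sign_content(photo)
assert verify_content(photo, sig)              # untouched content passes
assert not verify_content(photo + b"!", sig)   # any edit breaks the seal
```

The design point is that the user never runs this; the browser or platform does it invisibly and only surfaces a warning when the seal is broken.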

Director of Operations in Transportation · 25 days ago

I don’t think we’ll see an AI equivalent of AV, largely because the AI attack surface is more at the user level than the machine level.
I do think we'll see (and are already seeing) AI detection at the various layers where humans traditionally interact, such as email, or in the platforms and services they leverage. Additional training and awareness programs should also assist; a rough sketch of what an email-layer check might look like follows.
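
As an illustration of detection at the email layer, the hook below annotates rather than blocks, which also feeds the training-and-awareness angle; the scorer and threshold are invented for the sketch, not a real gateway API.

```python
# Mail-gateway hook (sketch): tag suspicious messages with an awareness
# banner instead of silently dropping them.
from email.message import EmailMessage

def deepfake_score(msg: EmailMessage) -> float:
    """Hypothetical detector: estimated probability the message is AI-crafted."""
    return 0.9  # stand-in score for the demo below

def annotate(msg: EmailMessage) -> EmailMessage:
    if deepfake_score(msg) >= 0.7:  # illustrative threshold
        msg.set_content(
            "[Caution: this message scored high on AI-generation indicators. "
            "Verify the sender through another channel.]\n\n"
            + msg.get_content()
        )
    return msg

mail = EmailMessage()
mail.set_content("Hi, please wire the invoice amount today.")
print(annotate(mail).get_content())
```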
