The impact of AI on daily life is becoming increasingly significant. Falsified, altered, and/or manipulated content is on the rise. Do you think AI detection tools will become as common as antivirus software, with everyone having one installed on their device to protect against misinformation?
AI tech is already showing up in cybersecurity tools. Notably, the opposite is also true: AI is being used to identify documents that are AI-generated. For example, many universities in the US are rolling out such tools (anti-plagiarism, anti-AI-generated content) as the new school year starts...
In my view, AI detection will become common, but not as a separate tool like antivirus. I believe it will be built directly into browsers, messaging apps, social platforms, and even operating systems as an extension of existing tools. I do not think people will want another product to manage; they will expect trust to be built in by default. I believe the future is not about installing another layer of protection; it is about making content authenticity checks an invisible part of the digital experience.
Excellent point! However, I don’t think all browsers, social networks, and so on will want to adopt 'anti-deepfake' technologies. At the very least, some won’t adopt them, precisely as a strategic positioning move in opposition to those that do. If that turns out to be true, then we won’t be able to trust any content and will need to adopt a personal tool… We’ll see what happens. Thank you for the valuable contributions.
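To make the "built in by default" idea above concrete, here is a minimal sketch of how a browser or messaging app might run an authenticity check invisibly at render time. Everything in it (the AuthenticityDetector type, label_media, the 0.5 threshold) is a hypothetical illustration, not any platform's real API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class TrustLabel:
    score: float   # 0.0 = almost certainly synthetic, 1.0 = likely authentic
    verdict: str   # "likely-authentic" or "likely-synthetic"

# A detector is just a function from raw media bytes to an authenticity score.
AuthenticityDetector = Callable[[bytes], float]

def label_media(media: bytes, detector: AuthenticityDetector,
                threshold: float = 0.5) -> TrustLabel:
    """Run the embedded detector and turn its score into a user-facing label."""
    score = detector(media)
    verdict = "likely-authentic" if score >= threshold else "likely-synthetic"
    return TrustLabel(score=score, verdict=verdict)

def render_with_trust_label(media: bytes, detector: AuthenticityDetector) -> None:
    """What a platform might do at render time: the check stays invisible
    unless the content looks synthetic."""
    label = label_media(media, detector)
    if label.verdict == "likely-synthetic":
        print(f"Warning: this content may be AI-generated "
              f"(authenticity score {label.score:.2f}).")
    # ...hand the media off to the normal rendering pipeline...

# Example with a stand-in detector that trusts almost nothing:
render_with_trust_label(b"<video bytes>", lambda media: 0.1)
```

The design point is that the user never installs or configures anything; the detector is just another pluggable component of the rendering pipeline.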
I don’t think we’ll see an AI equivalent of AV, largely because the AI attack surface is more at the user level than the machine level.
I do think we’ll see (and are already seeing) AI detection at the various layers where humans traditionally interact, such as email, and in the platforms and services they leverage. Additional training and awareness programs should help.
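As a rough illustration of detection at the email layer, here is a sketch of how an AI-likelihood score could slot into the same pipeline position spam filters already occupy. The ai_likelihood heuristic, the thresholds, and the routing names are assumptions made for the sketch, not any mail product's real logic.

```python
from email.message import EmailMessage

def ai_likelihood(text: str) -> float:
    """Stand-in for a real classifier that would return P(text is AI-generated).
    The heuristic below (low vocabulary variety = higher score) is purely for
    the sketch; do not use it as an actual detector."""
    words = text.split()
    if not words:
        return 0.0
    return 1.0 - len(set(words)) / len(words)

def route_inbound(msg: EmailMessage) -> str:
    """Decide where an inbound message goes, spam-filter style."""
    body = msg.get_body(preferencelist=("plain",))
    text = body.get_content() if body else ""
    p = ai_likelihood(text)
    if p > 0.9:
        return "quarantine"          # hold for review, like high-confidence spam
    if p > 0.6:
        msg["X-AI-Warning"] = f"Possible AI-generated content (p={p:.2f})"
        return "inbox-with-banner"   # deliver, but warn the user
    return "inbox"

# Example usage:
msg = EmailMessage()
msg.set_content("Dear valued customer, your account requires urgent verification.")
print(route_inbound(msg))   # -> "inbox" for this low-scoring text
```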
Doubtful. I can see two issues as a start.
1. Scope. Building a standalone AI fake filter that is actually effective runs into the problem of input scope.
a. Individuals receive signals such as email, chat/instant messages, images, voice calls, video, and live meetings. At present, multimodal AI tools’ capabilities are limited, so covering every form of signal an individual receives would be challenging at best.
b. Requiring an AI fake filter to screen everything coming in, for example to determine whether a phone call is from a hacker or from the bank, seems useful at first, but logistically the tool would need to act as a prefilter for every phone call, email, and chat message. Because current AI works on a statistical ranking scale, the tool could be overwhelmed simply evaluating the percentage likelihood that each item is fake (see the sketch after this list).
c. It is also possible that one element of a communication is real while another is fake, for example a genuine video with dubbed-in words. Should the AI fake detector rule that communication fake, real, or a third category, mixed validity?
2. Privacy. In order to screen all incoming signals for AI falsehoods, the tool must have access to every communication or signal the person receives.
a. If the tool runs on local AI, there is the potential of compromising that person’s computer and gaining access to signals that were originally protected or encrypted but are decrypted and unprotected at the point where the individual reads them.
b. If the tool runs remotely, there is less risk of compromise at the person’s workstation, but all of their communications, whether sensitive, personal, confidential, or legally protected, would need to be decrypted and then read, or at least accessed, by the external AI tool. An external fake filter would be a high-value target for hackers.
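As a rough sketch of the scoring problem in items 1b and 1c above, here is how a prefilter that scores each modality separately might have to collapse statistical likelihoods into a verdict, including the awkward "mixed validity" case. The thresholds and modality names are assumptions made for illustration, not values from any real product.

```python
FAKE_THRESHOLD = 0.8   # assumption: above this, treat a modality as likely fake
REAL_THRESHOLD = 0.2   # assumption: below this, treat a modality as likely real

def verdict(modality_scores: dict[str, float]) -> str:
    """modality_scores maps a modality name to P(fake), e.g. {'audio': 0.95}."""
    if not modality_scores:
        return "uncertain"
    fake = [m for m, p in modality_scores.items() if p >= FAKE_THRESHOLD]
    real = [m for m, p in modality_scores.items() if p <= REAL_THRESHOLD]
    if fake and real:
        return f"mixed validity (fake: {', '.join(fake)})"  # item 1c's third category
    if len(fake) == len(modality_scores):
        return "likely fake"
    if len(real) == len(modality_scores):
        return "likely real"
    return "uncertain"   # mid-range scores: the statistical ranking problem in 1b

# Example: a dubbed video -- authentic footage, synthetic voice track.
print(verdict({"video": 0.10, "audio": 0.95}))   # -> mixed validity (fake: audio)
```

Even in this toy version, most real inputs would land in the "uncertain" band, which is exactly the overload problem item 1b describes.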