The risks of deep fake tech and how to avoid falling for it
If you listen to Supasorn Suwajanakorn, you’ll quickly notice just how insightful he is. By studying the world’s history and presenting it in an interesting, interactive way, he hopes to help us forge a better future free of past mistakes.
He’s also the brains behind an intriguing piece of tech now publicly known as ‘deep fake’.
Supasorn set out to create realistic holograms of Holocaust survivors that, combined with artificial intelligence (AI), would give students of history an experience that mimics talking to an actual survivor.
He hoped this would preserve the authenticity of the Holocaust tragedy and convey the gravity of the survivors’ accounts while passing on critical lessons.
Towards this endeavour, Supasorn created a set of algorithms that can generate an animated three-dimensional face model of a person based on just a series of photos and videos.
Armed with a handful of photos, this meant that he could in theory make a video of anyone saying anything.
And with time, this scenario has played out. Because the tech is now widely available, viral videos using the likenesses of Facebook founder Mark Zuckerberg and US House Speaker Nancy Pelosi have already surfaced.
Potentially, a deep fake video could go as far as making a sitting president appear to declare war on another nation. On a more personal level, someone could fabricate a video call in which your face appears to break up with your significant other. Scary, right?
Luckily for the world, though, the potential risks of deep fakes have caught the attention of high-level security agencies and tech businesses across the globe.
Supasorn and his team also recognised the dark side of this technology and its consequences. For instance, if the public grows increasingly sceptical of video, footage will eventually stop being trusted as evidence.
Supasorn is currently working on a countermeasure in conjunction with the AI Foundation – a start-up founded in 2017 whose main aim is to build tools that protect against the risks of AI.
The project, Reality Defender, aims to help the ordinary person identify deep fake videos and avoid falling for AI manipulation.
In essence, Reality Defender is a web browser plug-in that scans all the images and videos that you come across online, and flags potentially fake or AI-generated content in real time as you browse.
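Reality Defender’s exact implementation isn’t public, so purely as an illustration of how such a plug-in could work, here is a minimal sketch of a browser-extension content script. The `checkMedia` scoring function is a hypothetical stand-in for a real deepfake detector; a real plug-in would call a trained model or a remote analysis service instead.

```javascript
// Illustrative sketch (not Reality Defender's actual code): a content
// script that scans media elements on a page and flags items that a
// hypothetical deepfake detector scores as suspicious.

// Stand-in detector: returns a fake-probability in [0, 1].
// A real plug-in would run a trained model or query a remote API here.
function checkMedia(src) {
  return src.includes("deepfake") ? 0.9 : 0.1; // placeholder heuristic
}

// Pure decision logic, kept separate so it is easy to test.
function flagLabel(score, threshold = 0.5) {
  return score >= threshold ? "possible deep fake" : "no issues detected";
}

// In a browser extension this would run on every page load,
// labelling each image and video the user comes across.
function scanPage(doc) {
  const results = [];
  for (const el of doc.querySelectorAll("img, video")) {
    const score = checkMedia(el.src || "");
    results.push({ src: el.src, label: flagLabel(score) });
  }
  return results;
}
```

Keeping the scoring and labelling logic separate from the page-scanning code is a common extension design choice: the detector can be swapped out or updated without touching the part that walks the page.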
A second way to spot fake videos is through a web browser plug-in created by a pair of UC Berkeley students, Ash Bhat and Rohan Phadte.
It’s called Surf Safe and is available for download on most major browser extension platforms.
The other countermeasure is information. By exposing as many people as possible to the reality of deep fakes, we can make people more critical about what they watch on screen so they can avoid dangerous manipulation.