Obama Spoof Video Demonstrates Power and Danger of Latest AI Tech

Recent developments in AI technology hold huge promise. However, in the immortal words of Spider-Man, ‘with great power comes great responsibility’. This week, a spoof ‘DeepFake’ video, produced by the online media outlet BuzzFeed, Monkeypaw Productions and the actor and director Jordan Peele using AI software freely available online, simultaneously demonstrated some of the amazing things AI can accomplish and the dangers inherent in the technology.

The video was produced to highlight how DeepFake AI videos could fuel the proliferation of ‘fake news’ used to manipulate the public and push agendas. It appears to show an address by former US President Barack Obama, in which he refers to his successor Donald Trump as a ‘total and complete dips**t’. In fact, as the clip itself goes on to reveal, AI software was used to merge genuine Obama footage almost undetectably with Peele impersonating his voice, blending Peele’s mouth movements and voice into the Obama video.

BuzzFeed reported that the video took 56 hours to make with the help of one of Monkeypaw Productions’ professional video editors. This shows that while producing a very convincing fake video with AI software takes some time and expertise, it is certainly achievable. Until now, the technology behind the Obama spoof has mainly been misused for the unethical but relatively contained practice of creating fake pornography featuring celebrities. The fear, however, is that it could be used nefariously in the kind of viral ‘fake news’ campaigns recently deployed to manipulate public sentiment.

Recent AI developments mean that in the near future it won’t take professional video-editing expertise to create highly realistic fake video montages of this kind. To combat the potential damage the technology could do in malicious hands, AI experts are now working on software to spot fakes. This would allow social media networks such as Facebook or YouTube to automate the process of detecting fake videos and taking them down.

In the meantime, BuzzFeed suggests several ways in which AI-powered fake footage can be distinguished from the real deal. If in doubt, check the source of a video and where it is published: fakes are unlikely to be produced or republished by reputable, established news and information outlets. Paying close attention to the subject’s mouth can also help distinguish a DeepFake from genuine footage. Advanced as the AI software is, it still doesn’t perfectly render the ‘teeth, tongue and mouth interior’, so visual anomalies can be spotted. Slowing down or freeze-framing a video can help with this.
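For readers who want to try the slow-down and freeze-frame check themselves, here is a minimal sketch using the free ffmpeg command-line tool (not mentioned in the article; the file names and the helper functions are placeholders for illustration). It builds commands that stretch a clip's playback speed and dump individual frames as still images so the mouth region can be inspected closely.

```python
# Hypothetical helpers that assemble ffmpeg commands for inspecting a clip.
# "clip.mp4", "slow.mp4" and the frame pattern are placeholder names.

def slowdown_cmd(src, dst, factor=4):
    # The setpts filter multiplies each frame's timestamp,
    # stretching playback time by `factor` (4x slower here).
    return ["ffmpeg", "-i", src, "-vf", f"setpts={factor}*PTS", dst]

def frames_cmd(src, pattern="frame_%04d.png", fps=5):
    # The fps filter samples a fixed number of frames per second
    # and writes each one out as a numbered still image.
    return ["ffmpeg", "-i", src, "-vf", f"fps={fps}", pattern]

print(slowdown_cmd("clip.mp4", "slow.mp4"))
print(frames_cmd("clip.mp4"))
```

The resulting commands can be run with `subprocess.run` or pasted into a terminal; stepping through the exported stills makes artefacts around the teeth and mouth interior much easier to spot than watching at full speed.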

