Last week, social media channels were filled with the aged faces of celebrities, friends and family members alike. FaceApp, which uses AI and facial recognition to create highly realistic facial transformations, was the most downloaded app in both major app stores.

Originally launched in 2017, the app surged last week as celebrities began taking the “FaceApp challenge” – creating a geriatric version of themselves – and millions of others followed.

But as quickly as the hype rose around FaceApp, panic began to set in. After users discovered a Russian developer was behind the app (with some dubious fine print in its terms of service), people immediately began speculating that the Russians were stealing our faces and pickpocketing the photos off our phones for some nefarious purpose.

Fortunately, it turned out those faces were actually sitting on servers inside Amazon data centers on U.S. soil (and being deleted after 48 hours), but the panic did raise a few worrisome questions in the public consciousness about facial recognition, AI and deepfakes.

These technologies have seen a surge of advancement in the last few years, and there is understandable concern that they could be used to create compromising photos and videos, or to forge authentications that bypass voice and facial recognition security protocols.

Deepfakes are, perhaps, the area of most concern. Deepfakes use deep learning AI to replace one person’s face or voice with another in videos. Last year, actor/director Jordan Peele demonstrated just how terrifyingly convincing deepfakes can be by face-swapping with former President Barack Obama in a video.

One common fear is that deepfakes could be used for public sabotage – a risk looming over both the upcoming 2020 presidential election and the private sector. Deepfake technology could be used to create fake “leaked” audio or video clips that could sink a stock, torpedo a product launch or incriminate CEOs or political candidates.

Public hoaxes are one thing, but the threats cut much deeper for businesses at the level of one-on-one communication, where deepfakes could power phishing scams and defeat security protocols.

Deepfake audio has already emerged as a growing threat to the private sector. According to cybersecurity firm Symantec, there have already been at least three incidents of deepfake audio attacks on private companies. In each case, scammers impersonated the company’s CEO using AI programs trained on hours of the executive’s speech – easily found in YouTube videos, earnings calls and more.

The scammers then used the AI-altered speech to steal millions of dollars from each company, deploying the CEO’s voice to request an urgent funds transfer from a senior financial officer.

Using a similar approach, cybercriminals could also use deepfake technology to gain access to sensitive business networks through convincing phishing scams. This could, in turn, result in the theft of intellectual property, data sabotage or other breaches of sensitive information.

With the rise of 5G technology, the problem will only get worse. While apps like FaceApp upload a photo to a server and lean on remote computing power to transform the image, the massive bandwidth of 5G will make that kind of cloud-scale processing feel as though it lives in the palm of your hand. As deepfake technology improves alongside it, the possible implications are downright terrifying.

DARPA, the Defense Department’s advanced research arm, is already developing forensics tools to detect fakes in the event of a national security threat, but these do little to combat intrusions in the private sector, where fooling a single gatekeeper could have dire consequences.

Companies are largely unprepared for these threats and will need to lean more heavily on security protocols like multi-factor authentication and encrypted communications to combat these increasingly likely imposter scams.
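For teams wondering what that looks like in practice, a second factor can be as simple as a time-based one-time password (TOTP) that a caller must supply before an unusual request is honored – a code a cloned voice alone cannot produce. Here is a minimal sketch in Python using the pyotp library; the approve_transfer helper and the workflow around it are illustrative assumptions, not a prescription:

```python
# Minimal sketch: gating a sensitive request behind a TOTP second factor.
# Assumes the pyotp library (pip install pyotp); approve_transfer is a
# hypothetical helper for illustration only.
import pyotp

# In practice, each authorized requester's secret is generated once,
# enrolled in their authenticator app and stored securely server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

def approve_transfer(request_code: str) -> bool:
    """Approve a funds-transfer request only if the one-time code is valid."""
    # valid_window=1 tolerates slight clock drift between devices.
    return totp.verify(request_code, valid_window=1)

# Usage: the requester reads the current code from their authenticator app.
print(approve_transfer(totp.now()))  # True for a genuine code
print(approve_transfer("000000"))    # False for a guessed or spoofed code
```

The point of a check like this is that it is out-of-band: even a flawless voice clone on a phone call fails unless the attacker also controls the victim’s enrolled device.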

For now, at least, the burden falls to the individual information gatekeepers to stay on high alert. Keep your IT departments abreast of any suspicious activity, and be sure to verify highly sensitive or unusual requests through multiple channels.

By the time you realize it’s a fake, it might already be too late.

What do you think about this new breed of cybersecurity threats? Have you taken measures to protect your company against threats like deepfakes? How should companies go about protecting themselves? We’d love to hear what you think. Sound off on social media now and join the conversation.