IRVINE, Calif. (PRWEB) September 29, 2020
Since businesses rely on technology for communication, deepfakes, or synthetic media that fabricates images and/or sound, pose a growing threat to their future strength, growth, security, and bottom line. That’s the belief and warning from Global IT Solutions Provider Technologent. “Businesses around the globe have already lost money, reputations, and hard-won brand strength due to deepfakes,” said Technologent Global Chief Information Security Officer Jon Mendoza. Leading studies rank deepfakes as the most worrisome criminal application of AI due to their potential for profit, ease of implementation, and the difficulty of stopping them.(1)
Deepfakes are most often videos or audio created with artificial intelligence (AI), deep learning (DL), and machine learning (ML) technology. While AI generally holds great promise, deepfakes are quickly evolving to threaten businesses in multiple ways, which requires a novel and agile approach to stop them. “That’s why companies must have an IT partner capable of implementing a plan that includes a mix of current and emerging technologies that can adapt to combat a growing deepfakes threat landscape,” continued Mendoza.
The Nature of Deepfakes
Deepfake technology manipulates images, videos, and voices of real people, altering the original so that the subject appears to be, do, or say something they never did. This is done by feeding an ML model thousands of images of a target to produce an algorithm that learns the details of a person’s face or voice.
The broader public has focused on the political ramifications of deepfakes, but their implications for business and across all aspects of public, private, and social life are vast. Facebook started the year by banning manipulated videos on its platform, but many say the move doesn’t go far enough.(2)
How Deepfakes Impact Businesses
Businesses are already seeing major fallout. In one real-world example, the CEO of a U.K.-based energy firm was fooled by a deepfake into thinking he was speaking with his boss, who asked him to send funds to a Hungarian supplier.(3) This is one of a growing number of examples showing how fraudsters can make a fortune off deepfakes, or even use them to destroy businesses and brands.
A major challenge for public and private sector entities is that deepfakes are relatively easy to make—and becoming easier.(4) Leading strategists have pointed out how deepfakes can have a major negative impact on both sectors, where video conferencing in the work-from-home era has acclimated people to lower-quality video.(5)
Business processes and communication are increasingly performed online, with employees communicating, collaborating, and exchanging information digitally—oftentimes not securely. As the COVID-19 pandemic has accelerated the work-from-home paradigm, deepfakes will certainly follow. Video and audio requests from known superiors and coworkers are a major vulnerability that can have far-reaching implications for both the business and the employee.
This problem goes beyond remote workers to in-person trade shows that COVID-19 has pushed into virtual formats. A single bad actor could create deepfake content that manipulates buyers, sellers, and product developers in ways that cripple businesses, brands, and even entire industries.
Deepfake Solutions
More and more, organizations and technology leaders are developing ways to track, analyze, and, most importantly, thwart deepfakes for businesses, governments, and the public. One such effort is the Content Authenticity Initiative (CAI), whose white paper aims to create industry-wide standards for verifying digital authenticity so deepfakes can be flagged before they are acted upon.(6) Other initiatives, such as the Deepfake Detection Challenge run in partnership with Microsoft and leading academics, hope to combat deepfakes through employee training.(7)
Deepfakes and other offshoots of AI will require businesses to create even more agile and holistic security and detection approaches to protect devices, apps, data, and cloud services.
According to Mendoza, that starts with a zero-trust approach that scrutinizes every access attempt by people to data, systems, and applications:
“Businesses will need a partner that can serve as a single point of contact for creating and executing a holistic, well-thought-out plan covering all endpoints and workforce education to combat deepfakes,” explained Mendoza. “It’s critical that these technology and training solutions, focused on proactive security, are implemented in a holistic fashion to protect businesses as deepfakes become more sophisticated and widespread.”
Technologent is a Global Provider of Edge-to-Edge™ Information Technology Solutions and Services for Fortune 1000 companies. They help companies outpace the new digital economy by creating IT environments that are fast, flexible, efficient, transparent, and secure. Without these characteristics, companies will miss the opportunity to optimally scale. Technologent mobilizes the power of technology to turn vision into reality, enabling a focus on driving innovation, increasing productivity and outperforming the market. Visit http://www.technologent.com
1. University College London, “‘Deepfakes’ ranked as most serious AI crime threat,” Science Daily, August 4, 2020, sciencedaily.com/releases/2020/08/200804085908.htm
2. Makena Kelly, “Facebook bans deepfake videos ahead of the 2020 election,” The Verge, January 7, 2020, theverge.com/2020/1/7/21054504/facebook-instagram-deepfake-ban-videos-nancy-pelosi-congress
3. Catherine Stupp, “Fraudsters Used AI to Mimic CEO’s Voice in Unusual Cybercrime Case,” Pro Cyber News, August 30, 2019, wsj.com/articles/fraudsters-use-ai-to-mimic-ceos-voice-in-unusual-cybercrime-case-11567157402
4. Patrick Tucker, “Deepfakes Are Getting Better, Easier to Make, and Cheaper,” Defense One, August 6, 2020, defenseone.com/technology/2020/08/deepfakes-are-getting-better-easier-make-and-cheaper/167536/
5. Patrick Tucker, “The Pentagon Is Using Zoom. Is It Safe?,” Defense One, April 6, 2020, defenseone.com/technology/2020/04/pentagon-using-zoom-it-safe/164402/
6. Leonard Rosenthal et al., “The Content Authenticity Initiative: Setting the Standard for Digital Content Attribution,” CAI, August 2020, documentcloud.adobe.com/link/track?uri=urn%3Aaaid%3Ascds%3AUS%3A2c6361d5-b8da-4aca-89bd-1ed66cd22d19
7. Ian Cruxton, “Phishing Today, Deepfakes Tomorrow: Training Employees to Spot This Emerging Threat,” Dark Reading, January 16, 2020, darkreading.com/risk/phishing-today-deepfakes-tomorrow-training-employees-to-spot-this-emerging-threat/a/d-id/1336778