
Limiting the Impact of Deepfake Technology

Malicious intent and financial gain are driving cybercriminals’ use of deepfake technology in their latest attacks. With these incidents on the rise, now is the time to review your organization’s procedures and cyber insurance policy.

July 2, 2024

Within its short lifespan, deepfake technology has quickly shifted from amusing to dangerous, and attacks span a range of motives. Some have been politically motivated, such as the fake presidential robocalls urging voters to sit out the New Hampshire primary. One of the most remarkable instances was financially motivated: a finance employee at a Hong Kong company was conned into paying out $25 million after attending a video conference call on which every one of his colleagues was deepfaked.

“As deepfake-related cyber incidents become more prolific, it is imperative that organizations know the limitations of their insurance policies and have set proper risk management controls to help prevent an attack,” said Jeremy Schumacher, Director of Cyber Underwriting at Safety National. “A limited number of cyber risk insurance carriers have released endorsements, clarifying affirmative coverage for losses stemming from a deepfake action.”

Here, we outline procedural and policy components that may help combat the use of deepfake technology against your organization.

Insurance Policy Language

Organizations should be hyper-vigilant about their policy limits and any language that may be affected by a deepfake incident. A few notable elements of a cyber insurance policy that may come into play in the event of a deepfake-related incident include:

  • Loss of funds – Most cyber policies will sublimit this coverage, with higher limits available via a standalone crime policy. Consider whether you have sufficient limits in place for such coverage.
  • Fraudulent instruction – Covers loss resulting from financial transfer instructions given by a person posing as a legitimate employee, vendor, or any person authorized to provide such instructions. The devil is in the details as to whether your policy would cover a deepfake incident. Be sure to engage a reputable insurance broker who has considered deepfake exposures with other clients in your industry.
  • Callback requirements / out-of-band authentication – To best thwart cybercriminals’ attempts at a fraudulent transfer, policies may require that changes to financial transaction details, such as accounts and routing information, be validated via two different pre-determined methods before enacting the requested change; a brief illustrative sketch of such a dual check follows this list.
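
As a concrete illustration of that out-of-band requirement, the sketch below assumes a hypothetical workflow in which two independent confirmations, a callback to a phone number already on file and a re-confirmation through a second pre-agreed channel, must both be recorded before a vendor’s banking details change. The names and structure are illustrative assumptions, not a prescribed implementation.

```python
"""Minimal sketch of a dual-channel check before a payment-detail change
is applied. All names and the workflow itself are illustrative."""
from dataclasses import dataclass


@dataclass
class ChangeRequest:
    vendor: str
    new_account: str
    new_routing: str
    # Results of the two pre-determined, independent verification steps,
    # recorded by the employees who performed them.
    callback_confirmed: bool = False        # callback to a number on file
    second_channel_confirmed: bool = False  # e.g., authenticated vendor portal


def apply_change(request: ChangeRequest, vendor_ledger: dict) -> bool:
    """Update banking details only if BOTH out-of-band checks passed."""
    if request.callback_confirmed and request.second_channel_confirmed:
        vendor_ledger[request.vendor] = (request.new_account, request.new_routing)
        return True
    # One confirmation is not enough; hold the change for manual review.
    return False


if __name__ == "__main__":
    ledger = {"Acme Supply": ("000111222", "021000021")}
    req = ChangeRequest("Acme Supply", "999888777", "026009593",
                        callback_confirmed=True, second_channel_confirmed=False)
    print(apply_change(req, ledger))  # False: only one channel confirmed
```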

Controls

Healthy cyber hygiene and controls can help protect an organization if it becomes the target of a deepfake-related cyber incident. A few best practices to consider implementing include:

  • Multi-factor authentication – Widespread use of MFA can add an extra layer of protection, especially for particularly sensitive data networks; a brief TOTP sketch follows this list.
  • Callback techniques – As previously noted, when change requests for financial transactions occur, companies should require a set of callback procedures to verify an individual’s identity.
  • User education – Take the time to train and engage your employees on spam, phishing, malware, ransomware, and social engineering so they understand how to identify a threat. Making your employees aware of security threats and how they might present themselves strengthens the most vulnerable components of your organization. Be sure to start with employees who handle funds transfers or access management, such as your accounting team and IT help desk, respectively.
  • PR response plan – Managing reputational harm can be difficult, but the right plan may keep damage to a minimum. A strong incident response plan should include at least annual tabletop exercises that test the crisis management, incident response, and business continuity playbooks your organization has in place.
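
For the multi-factor authentication control above, the snippet below is a minimal sketch of how a time-based one-time password (TOTP) second factor works, using the open-source pyotp library. The account name and issuer are placeholders, and a real deployment would integrate this with your existing identity provider rather than standalone code.

```python
"""Minimal sketch of a TOTP second factor using the pyotp library
(illustrative only; assumes `pip install pyotp`)."""
import pyotp

# Per-user secret generated at enrollment and stored server-side.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The enrollment URI is typically rendered as a QR code for an authenticator app.
print(totp.provisioning_uri(name="jdoe@example.com", issuer_name="ExampleCo"))

# At login, the six-digit code from the user's device is checked in addition
# to the password; valid_window=1 tolerates slight clock drift.
submitted_code = totp.now()  # stand-in for the code the user would type
print(totp.verify(submitted_code, valid_window=1))  # True
```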

Social Media and Websites

In working to support recruiting and showcase company culture, a social media team may not realize that it is oversharing information a cybercriminal could put to nefarious use. Employee photos and videos can be used to create deepfakes, so the team needs to be cognizant of what it shares on social media and the company website. Your organization’s online presence is likely necessary, but take the time to weigh the risks against the rewards. Additionally, the social media team should actively monitor comments and message boards on videos, posts, and blogs to limit reputational risk.