LastPass Labs

Attempted Audio Deepfake Call Targets LastPass Employee

Mike Kosak | April 10, 2024

For the last several years, the cyber threat intelligence community has been concerned about the growing proliferation of “deepfake” technology and its potential use by fraudsters against companies and individuals. Deepfakes use generative artificial intelligence to leverage existing audio and/or visual samples of a targeted individual to create a new and unique recording of that person saying or doing whatever the creator has programmed the deepfake tool to fabricate. Deepfakes are often associated with political misinformation and disinformation campaigns, but the combination of their increased quality and the increased availability of the technology used to create them (there are now numerous sites and apps openly available that allow just about anyone to easily create a deepfake) has long been a concern for the private sector as well. In fact, as early as 2019, a UK-based company reportedly fell victim to an audio deepfake in which an employee was convinced to transfer money to a fraudster who used voice-generating AI software to impersonate the company’s CEO. More recently, a finance worker at a Hong Kong-based multinational company was duped into transferring $25 million to fraudsters after they set up a video call in which every other participant, including someone impersonating the company’s Chief Financial Officer, was a video deepfake.

Screen capture displaying the attempted WhatsApp contact using deepfake audio as part of a CEO impersonation

While reports of these sorts of deepfake calls targeting private companies are luckily still rare, LastPass itself experienced a deepfake attempt earlier today that we are sharing with the larger community to help raise awareness that this tactic is spreading and that all companies should be on alert. In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp. Because the attempted communication was outside of normal business communication channels and displayed many of the hallmarks of a social engineering attempt (such as forced urgency), our employee rightly ignored the messages and reported the incident to our internal security team so that we could take steps to mitigate the threat and raise awareness of the tactic both internally and externally.

To be clear, there was no impact to our company. However, we did want to share this incident to raise awareness that deepfakes are no longer solely the purview of sophisticated nation-state threat actors and are increasingly being leveraged for executive impersonation fraud campaigns. The key lesson from this attempt is the importance of verifying potentially suspicious contacts from individuals claiming to be with your company through established and approved internal communication channels. In addition to this blog post, we are already working closely with our intelligence sharing partners and other cybersecurity companies to make them aware of this tactic and help organizations stay one step ahead of the fraudsters.