We need to prepare for deepfakes and spear phishing at scale

Published on Aug 20, 2024

“Hey, it’s me, your boss. I need you to buy a $50 Amazon gift card for a client.” Odds are, if you’re in the workforce today, you’ve received a text or email that sounds something like this. Phishing is absolutely rampant in today’s digital environment—an estimated 3.4 billion malicious emails are sent every day—and it’s only becoming more prevalent. Estimates put phishing attack growth rates at around 150% year over year!

Phishing attacks drive much of the business of cybercrime, and business is booming. Arctic Wolf estimates cybercriminals generate $1.5 trillion in annual revenue—more than all the FAANG companies combined. As a VC firm investing in cybersecurity, we’ve been spending our time understanding the trends in phishing attacks, specifically how the new wave of generative AI tools are empowering cybercriminals to send more tailored attacks, and how the next wave of cybersecurity solutions is going to stop them.

Spear phishing at scale: Around the corner?

First, we need to make a clear distinction between phishing and spear phishing. Cybercrime enterprises work like any other business, and they have limited resources. When deciding how to pursue a breach, they can take one of two paths:

  1. Go as wide as possible with a cheap product: Send millions of emails and texts with no personalization, and hope for someone to have their guard down.
  2. Go deep after just a few targets, but with a well-researched scam: Spend time to understand who might be responsible for the largest dollar amount, create believable synthetic digital personas, and use customized language in each outreach. This is “spear phishing.”

The two are quite different. On a per-email basis, spear phishing attacks are more than 500 times more effective at creating breaches, and over 65% of all cyberattacks involve a spear phishing campaign (per a 2019 Symantec report). But it takes a lot of time to research a target, and that research can't be automated. Or can it?

With LLMs revolutionizing every industry, it’s no surprise that cybercriminals would bring them to bear as well. With millions of account takeovers (ATOs) under their belts, attackers are sitting on a goldmine of data about potential victims. Media content, chat logs, social graphs, and anything else they’ve scraped can get tossed into an LLM to provide context on who the target is and how to best attack them. Then it’s as simple as setting up an email campaign and letting it rip.

And while incredibly scalable and believable email scams are certainly on the horizon, there’s another topic that’s on the mind of CISOs.

Deepfake audio and video: Vishing and more

Deepfakes are on the minds of many, especially with election season upon us. And while your average AI voice of Biden on a TikTok meme is clearly fake, the technology for incredibly convincing AI voice clones of everyday people has long since been in use, both for innocuous purposes and malicious ones. In working on this piece I played around with one voice cloning tool, Eleven Labs, and using just 90 seconds of source data, I made a bot of SignalFire’s CEO Chris Farmer that was quite convincing.

The advent of voice as a new medium for scammers further opens the floodgates for potential attacks, but I can imagine one upside: to the average person, the idea of being duped by a loved one's cloned voice is deeply unsettling, and that may jolt them awake to modern security risks. The average employee might not be shaken enough by an email phishing campaign to pay attention to security awareness training, but there's something viscerally terrifying about being scammed by a deepfake voice (or, eventually, deepfake video) that makes people sit up and listen.

There are already some startups focused on detecting voice phishing (“vishing”) in real time in the enterprise space, but as with all new cybersecurity threats, the speed advantage (and the laws of economics) favors the attackers. It wouldn’t surprise us to see an exponential increase in multimodal attacks, wherein the bad guys use both vishing and spear phishing emails to create incredibly robust attacks. Importantly, while the tools to target a small number of folks at the executive level are nothing new, AI is unleashing the ability for these complex, sophisticated attacks to be waged on individuals up and down the org chart.

So how can businesses best prepare to be swamped by these threats in the future?

The good guys strike back

Of course, there are the obvious defenses businesses have relied on for decades: spam filters, flagging malicious links, alerting users about unusual email addresses, and so forth. But a strong spear phisher will send from a whitelisted address, share only links served over valid HTTPS, and often fool email clients' heuristic flagging mechanisms. What's left between the attacker and the $100,000 or more they'll demand in ransom once they get access to your accounts and lock down your business? You.
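
To make the "unusual email address" heuristic concrete, here's a minimal sketch of a lookalike-domain check, one common trick spear phishers use is registering a domain one character off from a trusted one. The domains, threshold, and function names here are illustrative assumptions, not any vendor's actual implementation:

```python
# Minimal lookalike-domain check: flag senders whose domain is a near miss
# (small edit distance) of a domain we trust. All names/domains illustrative.

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
    # after the inner loop, curr holds distances for this row
        prev = curr
    return prev[-1]

def is_lookalike(sender: str, trusted_domains: list[str],
                 max_distance: int = 2) -> bool:
    """True if the sender's domain is suspiciously close to (but not exactly)
    a trusted domain, e.g. signalfire.com vs. signa1fire.com."""
    domain = sender.rsplit("@", 1)[-1].lower()
    return any(
        0 < edit_distance(domain, t) <= max_distance
        for t in trusted_domains
    )

trusted = ["signalfire.com"]
print(is_lookalike("chris@signa1fire.com", trusted))  # True  (l swapped for 1)
print(is_lookalike("chris@signalfire.com", trusted))  # False (exact match)
```

Real email security products layer checks like this with homoglyph tables, domain-age lookups, and authentication signals (SPF/DKIM/DMARC); edit distance alone is just the simplest building block.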

Humans are the most exploited vector of attack in a modern business’s cyber risk posture. But they are also the best line of defense we have. We need to re-engage humans in the process of cybersecurity. This is not exactly earth-shattering news, and the same answer echoes each time we come to this topic: “It’s easier said than done. My users just don’t care that much about cybersecurity.”

This is where innovation comes in, and where we at SignalFire are looking to back founders solving this problem. A few areas we're focused on:

  • Risk-adaptive security training: Not everyone needs to watch hours of training videos or be constantly reminded about security. Many people behave perfectly safely online, and we can stay out of their hair. But the 5% of employees who click on sketchy links, or who receive the most inbound messages from outside the organization, could use a few extra prods now and then. The trouble is that when CISOs roll out "one-size-fits-all" solutions, users get fatigued, stop paying attention, and the whole exercise becomes pointless. We need an identity-based security awareness solution that helps those who need it and leaves alone those who don't.
  • Unified human-centric communications security: Proofpoint and Mimecast have been around for more than 20 years. Most email filtering systems are even older. We need email security vendors built for the cloud, and built for an attacker universe that changes at the speed of AI. New players are making quite a bit of headway in this space, with some being particularly notable for building open communities of security professionals sharing new phishing threats as soon as they see them. But critically, workplace communication has long since evolved past simple email; it encompasses Teams, Slack, in some cases WhatsApp and SMS, and telephone calls. To get a true picture of risks to the enterprise through its workforce, we believe the email security systems of the past need to give way to unified communications security.
  • Greater identity controls and segmentation: We’ve written at length about “Defense in Depth,” but human-centric security has been slower to adopt this philosophy than, say, data security. The front line is awareness training, email security, vishing security, and the like. But at some point, no matter what we do, someone will slip up and click on a sketchy link. That’s where downstream controls come into play: improved access controls in the IAM space, and network segmentation that limits lateral movement and blast radius.
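
The risk-adaptive idea above can be sketched as a simple per-user triage: score observed behavior and enroll only the riskiest slice in extra training. The event names, weights, and threshold below are illustrative assumptions, not a product spec:

```python
# Toy risk-adaptive training triage. Weights and threshold are made up for
# illustration; a real system would calibrate them against incident data.

RISK_WEIGHTS = {
    "clicked_suspicious_link": 5,   # strong risk signal
    "external_inbound_message": 1,  # more exposure, mild signal
    "reported_phish": -2,           # good behavior lowers the score
}

def risk_score(events: dict[str, int]) -> int:
    """Weighted sum of a user's security-relevant event counts."""
    return sum(RISK_WEIGHTS.get(name, 0) * count
               for name, count in events.items())

def needs_training(events: dict[str, int], threshold: int = 10) -> bool:
    """Only users at or above the threshold get extra prods."""
    return risk_score(events) >= threshold

alice = {"external_inbound_message": 3, "reported_phish": 2}           # 3 - 4 = -1
bob   = {"clicked_suspicious_link": 2, "external_inbound_message": 4}  # 10 + 4 = 14

print(needs_training(alice))  # False: leave her alone
print(needs_training(bob))    # True: a few extra prods now and then
```

The point of the sketch is the shape of the system, not the numbers: targeting the risky tail instead of the whole org is what keeps training from fatiguing everyone else.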

We’ve spoken in the past about decentralized cybersecurity and why it matters: the delicate system of trust within an organization can come crashing down when any one user drops their guard to a phishing attack. We can do better at preparing our workforce for these threats, and at keeping them out of our inboxes.

Building a startup relevant to this theme? Disagree wildly with our takes? Reach out to me directly at t@signalfire.com and let’s chat.
