
Defend against phishing attacks with more than user training. Measure users’ suspicion levels along with cognitive and behavioral factors, then build a risk index and use the information to better protect those who are most vulnerable.

As Russian tanks creaked into Ukraine, CEOs and IT managers throughout the United States and much of the free world started sending out emails warning their employees about impending spear-phishing attacks.

It made sense: Spear-phishing was what Russians had used on Ukrainians many times over the past half-decade, such as when they shut down the country’s electrical grid on one of its coldest winter nights. It was also what the Russians had used against the Democratic National Committee and targets across the US.

On one hand, the email missives from CEOs were refreshing. People were taking the threat of phishing seriously, which wasn’t the case in 2014, when I started warning about its dangers on CNN.

On the other, it was sobering: there wasn’t much else organizations had figured out to do.

Sending messages to warn people was what AOL’s CEO resorted to back in 1997, when phishing first emerged and got its name. Budding hackers of the time were impersonating AOL administrators and fishing for subscribers’ personal information. That was a quarter-century ago, many lifetimes in Internet years.

In the interim, organizations have spent billions on security technologies and countless hours in security training. For context, a decade ago, Bank of America (BoA) was spending $400 million on cybersecurity. It now spends $1 billion per year on it. Yet thousands of its customer accounts in California were hacked last year.

And BoA isn’t alone. This year, Microsoft, Nvidia, Samsung, LG, and T-Mobile (which recently paid a $350 million settlement to customers over a 2021 breach) were hacked. All fell victim to spear-phishing attacks. There is no question that employees at these companies are experienced and well trained in detecting such attacks.


Flawed Approach

Clearly, something is fundamentally flawed in our approach: after all this, email-based compromises increased by 35% in 2021, costing American businesses more than $2.4 billion.

A big part of the problem is the current paradigm of user training. It primarily revolves around some form of cyber-safety instruction, usually following a mock phishing email test. The tests are sent periodically, and user failures are tracked — serving as an indicator of user vulnerability and forming the backbone of cyber-risk computations used by insurers and policymakers.
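To make concrete how little this paradigm actually measures, here is a minimal sketch in Python of what failure tracking typically reduces to; the user names and outcomes below are hypothetical, not drawn from any real test program:

```python
from collections import defaultdict

# Hypothetical log of mock phishing test outcomes: (user, clicked_the_lure)
test_results = [
    ("alice", False), ("alice", True), ("alice", False),
    ("bob", True), ("bob", True), ("bob", False),
]

# The prevailing approach: a user's "vulnerability" is simply their failure rate.
totals = defaultdict(int)
failures = defaultdict(int)
for user, clicked in test_results:
    totals[user] += 1
    failures[user] += int(clicked)

for user, n in totals.items():
    print(f"{user}: failed {failures[user]}/{n} tests ({failures[user] / n:.0%})")
```

A failure rate like this says nothing about why a user clicked, which is precisely the diagnosis that is missing.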

There is limited scientific support for this form of training. Most studies point to short-term value at best, with effects wearing off within hours, according to a 2013 study. This evidence has been ignored since the very inception of awareness as a solution.

There is another problem. Security awareness isn’t a solution; it’s a product, with an ecosystem of deep-pocketed vendors pushing it. Legislation and federal policy mandate it, some of it stemming from lobbying by training organizations, making it necessary for every organization to implement it and for every user to endure it.

Finally, there is no valid measurement of security awareness. Who needs it? What type? And how much is enough? There are no answers to these questions.

Instead, the focus is on whether users fail a phishing test, with no diagnosis of the why: the reason behind the failures. Because of this, phishing attacks continue, and organizations have no idea why. That is also why our best defense has been to send out email warnings to users.

Defend With Fundamentals

The only way to defend against phishing is to start at the fundamentals. Begin with the key question: What makes users vulnerable to phishing?

The science of security already provides the answers. It has identified specific mind-level or cognitive factors and behavioral habits that cause user vulnerability. Cognitive factors include cyber-risk beliefs — ideas we hold in our minds about online risk, such as how safe it might be to open a PDF document versus a Word document, or how a certain mobile OS might offer better protection for opening emails. Many such beliefs, some flawed and others accurate, govern how much mental attention we pay to details online.

Many of us also acquire media habits, from opening every incoming message to rituals such as checking emails and feeds the moment we awake. Some of these are conditioned by apps; others by organizational IT policy. They lead to mindless reactions to emails that increase phishing vulnerability.

There is another, largely ignored, factor: suspicion. It is that unease we feel when encountering something, that sense that something is off. Suspicion almost always leads to information seeking and, when we are armed with the right knowledge or experience, to deception detection and correction.

It did for the former head of the FBI. Robert Mueller, after entering his banking information in response to an email request, stopped before hitting Send. Something didn’t seem right. In the momentary return to reason caused by suspicion, he realized he was being phished, and he changed his banking passwords.

By measuring suspicion along with the cognitive and behavioral factors that lead to phishing vulnerability, organizations can diagnose what makes each user vulnerable. This information can be quantified and converted into a risk index that identifies those most at risk, the weakest links, so they can be better protected.
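As a minimal sketch of how such an index might be computed, suppose each factor has already been measured and normalized to a 0-to-1 score. The field names, weights, and data below are illustrative assumptions, not validated instruments:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    """Per-user measurements, each normalized to 0..1 (hypothetical scores)."""
    belief_accuracy: float  # how accurate the user's cyber-risk beliefs are
    habit_strength: float   # how ritualized / mindless their email habits are
    suspicion: float        # how readily they pause and question odd messages

def risk_index(u: UserProfile,
               w_beliefs: float = 0.4,
               w_habits: float = 0.35,
               w_suspicion: float = 0.25) -> float:
    """Weighted score in 0..1; higher means more vulnerable to phishing.
    Flawed beliefs and strong habits raise risk; suspicion lowers it.
    The weights are illustrative placeholders, not validated values."""
    return (w_beliefs * (1.0 - u.belief_accuracy)
            + w_habits * u.habit_strength
            + w_suspicion * (1.0 - u.suspicion))

# Rank users so defenders can prioritize protecting the weakest links.
users = {
    "alice": UserProfile(belief_accuracy=0.9, habit_strength=0.2, suspicion=0.8),
    "bob": UserProfile(belief_accuracy=0.3, habit_strength=0.9, suspicion=0.2),
}
for name in sorted(users, key=lambda n: risk_index(users[n]), reverse=True):
    print(f"{name}: risk index = {risk_index(users[name]):.2f}")
```

With scores like these, the users at the top of the ranking can be given extra protections, such as stricter mail filtering or targeted support, rather than another round of generic training.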

Doing this will help us defend users based on a diagnosis of what they need, rather than a training approach that’s being sold as a solution — a paradigm that we know doesn’t work.

After billions spent, our best approach remains sending out email warnings about incoming attacks. Surely, we can do better. By applying the science of security, we can. And we must — because spear-phishing presents a clear and present danger to the Internet.

*A version of this article appeared here: https://www.darkreading.com/vulnerabilities-threats/time-to-change-our-flawed-approach-to-security-awareness