February 2023

Build Security Around Users: A Human-First Approach to Cyber Resilience [Published in Dark Reading]


Security is more like a seat belt than a technical challenge. It’s time for developers to shift away from a product-first mentality and craft defenses that are built around user behaviors.

Technology designers begin by building a product and testing it on users. The product comes first; user input is used to confirm its viability and improve upon it. The approach makes sense. McDonald’s and Starbucks do the same. People can’t imagine new products, just as they can’t imagine new recipes, without experiencing them.

But the paradigm has also been extended to the design of security technologies, where we build programs for user protection and then ask users to apply them. And this doesn’t make sense.

Security isn’t a conceptual product that users must imagine. People already use email, browse the Web, use social media, and share files and images. Security is an improvement layered over things users already do when sending emails, browsing, and sharing online. It’s similar to asking people to wear a seat belt.

Time to Look at Security Differently

Our approach to security, though, is like teaching driver safety while ignoring how people drive. Doing this all but ensures that users either blindly adopt something, believing it’s better, or on the flip side, when forced, merely comply with it. Either way, the outcomes are suboptimal.

Take the case of VPN software. VPNs are heavily promoted to users as a must-have security and data-protection tool, but most deliver little to none of the protection they promise. They put users who believe in those protections at greater risk, because such users take more risks online. Also consider the security awareness training now mandated by many organizations. Those who find the training irrelevant to their specific use cases find workarounds, often creating unquantifiable security risks.

There’s a reason for all this. Most security processes are designed by engineers with a background in developing technology products. They approach security as a technical challenge. Users are just another input into the system, no different from software and hardware that can be programmed to perform predictable functions. The goal is to contain actions based on a predefined template of what inputs are suitable, so that the outcomes become predictable. None of this is premised on what the user needs; instead, it reflects a programming agenda set out in advance.

Examples of this can be found in the security functions programmed into much of today’s software. Take email apps, some of which allow users to check an incoming email’s source header, an important layer of information that can reveal a sender’s identity, while others don’t. Or take mobile browsers, where, again, some allow users to check the SSL certificate quality while others don’t, even though users have the same needs across browsers. It’s not like someone needs to verify SSL or the source header only when they’re on a specific app. What these differences reflect is each programming group’s distinct view of how their product should be used by the user — a product-first mentality.
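To make the two checks concrete, here is a minimal sketch using Python’s standard library. The raw message and the example.com hostname are hypothetical stand-ins; email apps and browsers would surface this same information in their interfaces.

```python
import socket
import ssl
from email import message_from_string

# A hypothetical raw message; note the claimed sender's look-alike domain.
raw_message = """\
Received: from mail.example.net (mail.example.net [203.0.113.7])
From: "IT Support" <support@examp1e.com>
Subject: Password reset required

Click the link below to keep your account active.
"""

msg = message_from_string(raw_message)

# The From header shows who the message claims to be from...
print("Claimed sender:", msg["From"])

# ...while the Received chain records which servers actually relayed it.
# A mismatch between the two is a classic phishing tell.
for hop in msg.get_all("Received", []):
    print("Relay hop:", hop)

# The certificate check some mobile browsers expose and others hide:
hostname = "example.com"
context = ssl.create_default_context()
with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Issued to:", dict(pair[0] for pair in cert["subject"]))
        print("Valid until:", cert["notAfter"])
```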

Users purchase, install, or comply with security requirements believing that the developers of different security technologies deliver what they promise — which is why some users are even more cavalier in their online actions while using such technologies.

Time for a User-First Security Approach

It’s imperative that we invert the security paradigm — put users first, and then build defense around them. This is not only because we must protect people but also because, by fostering a false sense of protection, we’re fomenting risk and making them more vulnerable. Organizations also need this to control costs. Even as the economies of the world have teetered from pandemics and wars, organizational security spending in the past decade has increased geometrically.

User-first security must begin with an understanding of how people use computing technology. We have to ask: What makes users vulnerable to hacking via email, messaging, social media, browsing, and file sharing?

We have to disentangle the basis for risk and locate its behavioral, cognitive, and technical roots. This is the information developers have long ignored as they built their security products, which is why even the most security-minded companies still get breached.

Pay Attention to Online Behavior

Many of these questions have already been answered. The science of security has explained what makes users vulnerable to social engineering. Because social engineering targets a variety of online actions, the knowledge can be applied to explain a wide swath of behaviors.

Among the factors identified are cyber-risk beliefs — the ideas users hold about the riskiness of their online actions — and cognitive processing strategies — how users process information, which dictates how much focused attention they pay to it when online. Another set of factors are media habits and rituals, shaped partly by the types of devices users own and partly by organizational norms. Together, beliefs, processing styles, and habits influence whether a piece of online communication — an email, message, webpage, or text — triggers suspicion.


Train, Measure, and Track User Suspicions

Suspicion is that unease when encountering something, the sense that something is off. It almost always leads to information seeking and, if a person is armed with the right knowledge or experience, to deception detection and correction. By measuring suspicion along with the cognitive and behavioral factors leading to phishing vulnerability, organizations can diagnose what made users vulnerable. This information can be quantified and converted into a risk index, which they can use to identify those most at risk — the weakest links — and protect them better.
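As an illustration, here is a minimal sketch of such a risk index in Python. The factor names, weights, and 0-to-1 scales are hypothetical; a real index would be calibrated against measured phishing outcomes.

```python
from dataclasses import dataclass


@dataclass
class UserProfile:
    suspicion: float         # 0 = never suspicious, 1 = highly suspicious
    risk_beliefs: float      # 0 = accurate beliefs, 1 = badly miscalibrated
    habitual_opening: float  # 0 = deliberate reader, 1 = opens everything reflexively


def risk_index(user: UserProfile) -> float:
    """Combine the factors into one score; low suspicion and strong habits raise risk."""
    # Hypothetical weights; a real model would be fit to observed data.
    return (
        0.4 * (1 - user.suspicion)
        + 0.3 * user.risk_beliefs
        + 0.3 * user.habitual_opening
    )


users = {
    "alice": UserProfile(suspicion=0.8, risk_beliefs=0.2, habitual_opening=0.3),
    "bob": UserProfile(suspicion=0.2, risk_beliefs=0.7, habitual_opening=0.9),
}

# Rank users so the weakest links surface first and can be protected better.
for name, profile in sorted(users.items(), key=lambda kv: risk_index(kv[1]), reverse=True):
    print(f"{name}: risk index {risk_index(profile):.2f}")
```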

By capturing these factors, we can track how users get co-opted through various attacks, understand why they get deceived, and develop solutions that mitigate the risk. We can craft solutions around the problem as experienced by end users. We can do away with security mandates and replace them with solutions that are relevant to users.

After billions spent putting security technology in front of users, we remain just as vulnerable to the cyberattacks that first emerged on the AOL network in the 1990s. It’s time we changed this — and built security around users.

A version of this article can be found here: https://www.darkreading.com/risk/build-security-around-users-a-human-first-approach-to-cyber-resilience

Time to Change Our Flawed Approach to Security Awareness [Published in Dark Reading]


Defend against phishing attacks with more than user training. Measure users’ suspicion levels along with cognitive and behavioral factors, then build a risk index and use the information to better protect those who are most vulnerable.

As Russian tanks creaked into Ukraine, CEOs and IT managers throughout the United States and much of the free world started sending out emails warning their employees about impending spear-phishing attacks.

It made sense: Spear-phishing was what the Russians had used on Ukrainians many times over the previous half-decade, such as when they shut down the country’s electrical grid on one of its coldest winter nights. It was also what the Russians had used against the Democratic National Committee and targets across the US.

On one hand, the email missives from CEOs were refreshing. People were taking the threat of phishing seriously, which wasn’t the case in 2014, when I started warning about its dangers on CNN.

On the other hand, it was sobering. There wasn’t much else organizations had figured out to do.

Sending messages to warn people was what AOL’s CEO resorted to back in 1997, when phishing first emerged and got its name. Budding hackers of the time were impersonating AOL administrators and fishing for subscribers’ personal information. That was almost three decades ago, many lifetimes in Internet years.

In the interim, organizations have spent billions on security technologies and countless hours in security training. For context, a decade ago, Bank of America (BoA) was spending $400 million on cybersecurity. It now spends $1 billion per year on it. Yet thousands of its customer accounts in California were hacked last year.

And BoA isn’t alone. This year, Microsoft, Nvidia, Samsung, LG, and T-Mobile — which recently paid a $350 million settlement to customers over a 2021 breach — were hacked. All fell victim to spear-phishing attacks. There is no question that the employees of these companies are experienced and well trained in detecting such attacks.


Flawed Approach

Clearly, something is fundamentally flawed in our approach when you consider that, after all this, email-based compromises increased by 35% in 2021 and cost American businesses over $2.4 billion.

A big part of the problem is the current paradigm of user training. It primarily revolves around some form of cyber-safety instruction, usually following a mock phishing email test. The tests are sent periodically, and user failures are tracked — serving as an indicator of user vulnerability and forming the backbone of cyber-risk computations used by insurers and policymakers.
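For contrast, here is a minimal sketch in Python of the metric this paradigm produces. The test results are hypothetical; the point is that a bare click rate records that a failure happened, not why.

```python
# Hypothetical mock-phishing results: True means the user clicked the lure.
phishing_test_results = {
    "alice": [False, False, True],
    "bob": [True, True, True],
}

for user, clicks in phishing_test_results.items():
    failure_rate = sum(clicks) / len(clicks)
    # This single number feeds vulnerability trackers and cyber-risk
    # computations, yet it cannot distinguish a miscalibrated belief
    # from a mindless habit: there is no diagnosis of the "why."
    print(f"{user}: failed {failure_rate:.0%} of mock phishing tests")
```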

There is limited scientific support for this form of training. Most evidence points to short-term value at best; a 2013 study found its effects wear off within hours. This has been ignored since the very inception of awareness training as a solution.

There is another problem. Security awareness isn’t a solution; it’s a product, backed by an ecosystem of deep-pocketed vendors pushing for it. There is legislation and federal policy mandating it, some stemming from lobbying by training organizations, making it necessary for every organization to implement it and for every user to endure it.

Finally, there is no valid measurement of security awareness. Who needs it? What type? And how much is enough? There are no answers to these questions.

Instead, the focus is on whether users fail a phishing test, without a diagnosis of the why — the reason behind the failures. Because of this, phishing attacks continue, and organizations have no idea why. So our best defense has been to send out email warnings to users.

Defend With Fundamentals

The only way to defend against phishing is to start at the fundamentals. Begin with the key question: What makes users vulnerable to phishing?

The science of security already provides the answers. It has identified specific mind-level or cognitive factors and behavioral habits that cause user vulnerability. Cognitive factors include cyber-risk beliefs — ideas we hold in our minds about online risk, such as how safe it might be to open a PDF document versus a Word document, or how a certain mobile OS might offer better protection for opening emails. Many such beliefs, some flawed and others accurate, govern how much mental attention we pay to details online.

Many of us also acquire media habits, from opening every incoming message to rituals such as checking emails and feeds the moment we awake. Some of these are conditioned by apps; others by organizational IT policy. They lead to mindless reactions to emails that increase phishing vulnerability.

There is another, largely ignored, factor: suspicion. It is that unease when encountering something, that sense that something is off. It almost always leads to information seeking and, if a person is armed with the right knowledge or experience, to deception detection and correction.

It did for the former head of the FBI. Robert Mueller, after entering his banking information in response to an email request, stopped before hitting Send. Something didn’t seem right. In the momentary return to reason caused by suspicion, he realized he was being phished and changed his banking passwords.

By measuring suspicion along with the cognitive and behavioral factors leading to phishing vulnerability, organizations can diagnose what makes users vulnerable. This information can be quantified and converted into a risk index, with which they can identify those most at risk, the weakest links, and protect them better.

Doing this will help us defend users based on a diagnosis of what they need, rather than a training approach that’s being sold as a solution — a paradigm that we know doesn’t work.

After billions spent, our best approach remains sending out email warnings about incoming attacks. Surely, we can do better. By applying the science of security, we can. And we must — because spear-phishing presents a clear and present danger to the Internet.

A version of this article appeared here: https://www.darkreading.com/vulnerabilities-threats/time-to-change-our-flawed-approach-to-security-awareness