Each October, organizations around the world dust off their cybersecurity awareness modules, launch another round of phishing tests, and reassure themselves that “training” is making people safer.
But a decade’s worth of research—including a recent Cybersecurity Dive feature by Eric Geller, where I was interviewed—shows something uncomfortable: most of these efforts don’t meaningfully change behavior.
My own peer-reviewed work has shown the same. In the SCAM model and in my book, The Weakest Link, I examine why most awareness programs fail to move the needle. They don’t address the human behavior driving the problem. They simply deliver information and hope for transformation.

Photo by Markus Winkler on Unsplash
The Evidence Is Mounting
Studies from UC San Diego Health, NIST, and others have consistently shown that conventional awareness training has little lasting effect. Employees who complete these modules perform only marginally better than those who don’t.
In many cases, engagement is so low that people simply close the training window seconds after opening it.
As click-rate metrics have come under scrutiny, vendors have pivoted to new ones—like “reporting rates,” which measure how often users flag phishing attempts. But this shift muddies the water. It moves the focus away from diagnosing why people fall for attacks and toward measuring whatever’s easiest to track.
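To see why swapping one metric for another changes little, here is a minimal sketch (hypothetical data and field names, for illustration only) of how click and reporting rates are typically computed from phishing-simulation results:

```python
# Minimal sketch: computing phishing-simulation metrics.
# The campaign data below is hypothetical and for illustration only.

def campaign_metrics(results):
    """results: list of dicts with boolean 'clicked' and 'reported' fields."""
    n = len(results)
    clicked = sum(r["clicked"] for r in results)
    reported = sum(r["reported"] for r in results)
    return {
        "click_rate": clicked / n,
        "reporting_rate": reported / n,
    }

# A simulated campaign of 10 employees: 3 clicked, 4 reported.
results = [{"clicked": i < 3, "reported": 3 <= i < 7} for i in range(10)]
print(campaign_metrics(results))  # {'click_rate': 0.3, 'reporting_rate': 0.4}
```

Both numbers are trivial to track, which is exactly the point: neither one tells you why anyone clicked, or what would change their behavior next time.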
In my book, I compared this approach to medicine:
“It’s like a doctor throwing a pill at a patient and saying, ‘Take this pill and you’ll get better,’ without ever diagnosing the problem.”
That’s the state of most corporate security awareness programs today. They treat the symptoms (the clicks, the metrics, the compliance reports) without understanding the underlying cause.
Why People Click Anyway
People don’t click because they’re uninformed. They click because they’re human.
They’re multitasking, overloaded, and moving fast. They rely on shortcuts and visual cues like logos, names, and timing — and on underlying cyber-risk beliefs, the assumptions that shape how safe they think they are online. All of these are exploited by phishing campaigns that mimic familiarity and trust.
The root causes are cognitive and behavioral, not educational. Yet organizations keep investing in more “awareness,” assuming information will lead to behavior change. It doesn’t.
Training Isn’t Enough
We need to stop treating training as a silver bullet and start designing systems that assume human fallibility.
As Bruce Schneier and I argued in Dark Reading, true resilience comes from redesigning systems, not blaming users.
That means:
- Reducing friction and cognitive load. Expecting employees to scan hundreds of emails a week for subtle deception is unrealistic.
- Embedding safety nets. Technical layers like anomaly detection, MFA, and spam filtering aren’t optional — they’re essential.
- Redefining resilience. Success isn’t about eliminating clicks; it’s about containing their impact.
Beyond Awareness—Toward Cyber Hygiene
This is where a new approach is emerging—one centered on cyber hygiene, not awareness.
Cyber hygiene focuses on habits, environment, and reinforcement: teaching users how to prepare, prevent, and protect themselves continuously, not just once a year.
As I often say:
“Good training isn’t about perfection. It’s about improvement—building systems and habits that make people stronger over time.”
The Real Opportunity

Photo by Glenn Carstens-Peters on Unsplash
The question isn’t whether awareness training should exist. It’s how we can make it meaningful.
That starts with treating security as a living practice—one that evolves with human behavior, not against it.
Eric Geller’s article captures this moment perfectly. We’ve hit the limits of awareness. The next frontier is resilience.
Read the full Cybersecurity Dive feature → Cybersecurity awareness training research has big flaws