Heads up, this piece takes aim at “security awareness dogma”. You have been duly warned.
It exposes why, despite good intentions and evidence to the contrary, most security leaders and professionals remain closed-minded about the reality of the human aspect of cybersecurity.
It challenges the assumptions that have held back the industry for decades, and advocates for a shift toward evidence-based, data-led Human Risk Management (HRM).
About the author
Oz Alashe MBE is the CEO and Founder at CybSafe, a behavioural science and data analytics company that builds cybersecurity software to better manage human risk.
A former UK Special Forces Lieutenant Colonel, Oz is focused on making society more secure by helping organisations address the human aspect of cybersecurity.
Oz has extensive experience in intelligence insight, complex human networks, and human cyber risk and resilience.
In the security awareness space, dogma and myths are everywhere. And they’re holding the security community back.
No one wants to admit it. Let alone challenge it.
No one wants to slay the sacred cow. But it’s an ineffective comfort blanket, and it’s not backed up by meaningful data.
But what exactly do I mean by dogma?
By definition, dogma is a set of principles and ideas presented as absolute truth.
These ideas often appear logical at first glance. But dig into them and you realise there’s little to no data to back them up. In most cases, frankly, it’s bollocks: logical-sounding isn’t the same as true.
Nevertheless, they continue to shape security programs and influence professionals who don’t stop to question them.
It’s time we did.
Take the phrase “humans are the weakest link”. Some of the people who use it are well-intentioned. In fact, some of them even use it to try to elevate the importance of people. They think they are helping and being enlightened.
They might phrase it in different ways, but each version boils down to the same claim: cybersecurity is not complete unless it accounts for how people behave.
Which, of course, is true.
The issue is this: framing people as “the problem” isn’t a helpful way to describe the situation. And it’s not accurate. It’s overly negative. It’s lazy. And it doesn’t really make sense.
Saying “humans are the weakest link” in cybersecurity is a bit like saying the weakest link in a sports team is all the players. Besides, is there any other “link” of relevance? A mistake doesn’t make you “weak” in this context.
The person is part of the system, just like the technology, the processes, the policies. That system has multiple moving parts that act in lots of different ways and are under different constraints and pressures.
Good news: Academia really is doing an incredible job of debunking the “weakest link” view. There’s an enlightening study on human factors by Shari Pfleeger, Angela Sasse and Adrian Furnham, which goes some way to addressing this. It’s well worth reading.
The technology we put in place for our people must be usable. We must do more to create conditions and environments in which we make it easy to be secure. We need to design systems, technology, and processes that make doing the right thing the easiest thing.
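To illustrate the “easiest path” principle, here’s a minimal sketch of secure-by-default design in Python. The settings object and its fields are hypothetical, invented for this example rather than drawn from any particular product:

```python
from dataclasses import dataclass

# Hypothetical illustration of "secure by default": the easiest path
# (accepting the defaults) is also the secure one. Weakening anything
# requires a deliberate, visible choice.
@dataclass(frozen=True)
class SessionSettings:
    mfa_required: bool = True        # secure unless explicitly relaxed
    session_timeout_mins: int = 15   # short-lived sessions by default
    verify_tls: bool = True          # never silently skip verification

# Doing the right thing takes zero effort...
default_session = SessionSettings()

# ...while opting out of security is explicit and auditable.
risky_session = SessionSettings(mfa_required=False)
```

When the defaults are the secure option, doing nothing is doing the right thing, and weakening security becomes a deliberate, visible act.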
And we need to engage our people. To help them understand how they can be secure without compromising their productivity. We need to stop overloading them and smashing through their “compliance budget” when we know there is only so much people can absorb. Awareness ‘training’ simply isn’t enough on its own.
My prediction is that we’ll start to see fewer people using the “weakest link” language as more of us start to genuinely understand human behaviours and the impact they have on security.
The sooner we do that, the better.
Many people who talk about people being the weakest link simply haven’t stopped to think. They mean well. But there are far better ways to acknowledge the importance of the user.
The bottom line: It’s high time we retired this tired trope.
This one is everywhere, and it comes in many forms. The gist: the more security training people receive, the more secure they become.
These claims are just nonsense. Let me explain.
There is no evidence that more training equals better security behaviours. In fact, there is evidence to suggest that we can train people too much and create an over-reliance on training.
Think about it: There are doctors who smoke. Knowledge in, behaviour change out is just not how behaviour change works.
So, this means that we need to stop and think: What do we actually want?
Do we want people who know and understand more?
Or… do we actually want behaviour change, and simply believe awareness is the route to it?
If the goal is behaviour change, we need to look beyond education. Because the truth is, you can change behaviour without increasing knowledge.
And when it comes to cybersecurity, behaviour change—not awareness—is what reduces risk.
Stop me if you’ve heard this one before: “the more engaged employees are with security, the better their security behaviours will be.”
Like the last set of myths, this is a pervasive one. There are countless cybersecurity vendors selling interactive “fun” training, competitions, and games based around security awareness.
It’s hardly surprising, therefore, that we have whole security teams who genuinely believe that if they could just increase “employee engagement in security”, they would see better security behaviours.
But again, just like the last example, there is no scientific evidence to back this up.
(Plus, WTF does “engagement” mean anyway? And how many professionals out there are actually attempting to measure it?)
So, we know that lots of people out there will try to tell you that security awareness is about more than “awareness”. They may throw around the word “culture”. They’ll probably conflate knowledge and behaviour change.
What’s happening here is an unhelpful blurring of lines: an action (raising awareness) is being confused with a desired outcome (behaviour change). It’s extra-strength wishful thinking.
Spend a few minutes reading about behavioural science and you won’t be able to unsee it: behaviour change requires so much more than awareness.
If we’re being honest with ourselves, for as long as we term it “awareness”, the human aspect of cybersecurity will forever be constrained to training, education, and communications.
This simply isn’t enough to influence human behaviour. Therefore, it’s not enough to impact cybersecurity risk.
Few phrases get thrown around as freely as “security culture”.
But most people who use the term aren’t clear on its true meaning. Ask a bunch of professionals what it means and you’ll get a different answer from each of them.
In other words, the phrase means all things to all people. And people love to talk about it, regardless of whether their security culture is making much of a difference to actual human behaviour.
What is security culture? Well, it’s the shared values, attitudes, perceptions and beliefs that shape what happens (from a security perspective) in an organisation or group.
When we fail to define it, we end up with another vague, feel-good term that won’t translate into real risk reduction.
When it comes to the human aspect of cybersecurity, most organisations’ desired security outcomes rely on better security behaviours, as well as values, attitudes, and beliefs (culture) that support these outcomes.
Good communication, messaging, training, and education can’t deliver these things on their own. In fact, they often waste resources, take up time, and even get in the way.
But this type of security awareness dogma often blinds organisations to more effective alternatives: behavioural nudges, smarter system and process design, and targeted, data-driven interventions.
These are all techniques and tools that can reduce human cyber risk and impact the human aspect of cybersecurity WITHOUT an over-reliance on training, education, and communications. And they are often more scientific, measurable, and data-driven.
Bottom line: There is far more to managing and reducing human cyber risk than training, educating, and raising awareness!
Many organisations mistakenly believe that Security Awareness metrics—like training completion rates, engagement statistics, and simulated phishing click/report rates—directly correlate to behaviour change.
The truth? These are just surface-level indicators that fail to capture real security behaviours.
These are tick-box metrics. They tell us whether someone has clicked something. They don’t tell us whether human risk, in all its breadth, has actually been reduced.
Aren’t phishing simulations at least a measure of real behaviour? No. They simply measure one or two simulated behaviours: people clicking, and people reporting.
There is so much more to security behaviour than clicking on (or reporting) simulated phishing emails. Yet most security professionals focus only on phishing, and genuinely believe that if they are addressing phishing, they are doing enough to influence behaviour. They are wrong.
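To make that concrete, here’s a minimal sketch in Python. The behaviours, weights, and numbers are entirely hypothetical (illustrative only, not CybSafe’s scoring model), but they show how a phishing-only metric and a broader behavioural view can tell completely different stories:

```python
# Hypothetical illustration: a phishing-only metric vs a broader
# multi-behaviour indicator. Field names, weights, and data are
# invented for this sketch; they are not a real scoring model.

# Observed behaviours for one employee, normalised to 0.0-1.0,
# where 1.0 means "always does the secure thing".
behaviours = {
    "avoids_phishing_clicks": 0.9,   # rarely clicks simulated phish
    "reports_suspected_phish": 0.8,
    "uses_mfa": 0.2,                 # MFA mostly not enabled
    "updates_devices_promptly": 0.3,
    "uses_password_manager": 0.1,
}

# Illustrative weights for how much each behaviour matters to risk.
weights = {
    "avoids_phishing_clicks": 0.2,
    "reports_suspected_phish": 0.1,
    "uses_mfa": 0.3,
    "updates_devices_promptly": 0.2,
    "uses_password_manager": 0.2,
}

# A phishing-only view looks reassuring...
phishing_only = behaviours["avoids_phishing_clicks"]

# ...while a weighted view across behaviours tells a different story.
overall = sum(behaviours[b] * weights[b] for b in behaviours)

print(f"Phishing-only score: {phishing_only:.2f}")   # 0.90: looks great
print(f"Multi-behaviour score: {overall:.2f}")       # 0.40: clearly not
```

Same person, wildly different picture. That gap is exactly what click-rate dashboards hide.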
This is why SebDB was created.
The Security Behaviour Database (or SebDB) is the world’s most comprehensive database of security behaviours. It’s an ever-growing digital compendium of the security behaviours known to reduce human cyber risk, and an open-source research initiative driven and managed by CybSafe.
It’s freely accessible. It’s open to everyone.
(I know—it’s almost like everyone at CybSafe is unashamedly obsessed with the scientific evidence base for everything in this space, right? 😏)
Welcome to the bonus round, because I couldn’t end this without tackling a few classic, persistent myths not yet covered.
“Phishing simulation results tell you everything you need to know about human risk.” Think about it: how is this possible when phishing is just one aspect of security, not a holistic measure of human risk? Real risk management needs a mix of behavioural interventions and system design.
“If people know better, they’ll do better.” Knowledge doesn’t necessarily translate into action. Remember those smoking doctors?
“Any pop-up reminder is a nudge.” True nudges are scientifically designed to influence behaviour, not just pop up with a reminder to “USE STRONG PASSWORDS!!!”
These myths reinforce outdated ideas and beliefs.
These myths keep organisations stuck in ineffective Security Awareness programs.
These myths have as much relevance today as floppy disks and fax machines.
This dogma and these myths have a grip on the minds of many security leaders and professionals. They influence where leaders put their time, money, and attention.
Yes, some of us are shaking ourselves free of the dogma. But many still don’t realise (or don’t want to admit) that there is very little evidence or data to support any of the statements in the examples above.
Here’s the kicker: Almost everyone reading this post will have believed one or more of the statements at some point. Maybe you still do. That’s okay. Security Awareness dogma is strong. Its grip is tight.
But it’s 2025.
Some of the things security professionals consider hard facts are utter nonsense. There simply isn’t any evidence for them, no matter how positive or logical they sound.
And the truth is, when most of us genuinely stop to think, deep down we know we’re buying into the dogma.
But we shouldn’t.
What’s more, if we don’t stop, we will keep doing the same old things over and over again.
And, as the old line so often attributed to Einstein goes, the definition of insanity is doing the same thing over and over again and expecting different results.
The human aspect of cybersecurity demands a lot more substance, evidence and data.
Data, science, and evidence should always trump dogma. Certainty is the enemy of progress. The first step? Open your mind and question what you think you know.
Put simply, you’re probably stuck in the grip of security awareness dogma if you still call people the weakest link, treat awareness training as a proxy for behaviour change, or measure success by completion rates and simulated phishing clicks.
I advocate for a whole new approach, based on the effective application of behavioural science, data analytics, and automation.
This combination enables teams to monitor and minimise the likelihood and impact of user-related cybersecurity incidents.
Some people refer to this as Human Risk Management (HRM).
HRM is about measuring security behaviours, understanding the human risks behind them, and intervening to reduce the likelihood and impact of user-related security incidents.
Interest piqued? Read more about HRM here.
We’re witnessing advancements at an unprecedented pace.
Thanks to leaps forward in behavioural science, automation, and data analytics (driven by research from companies like CybSafe), there is a whole new world of possibility when it comes to the effective management of human risk.
It’s never been easier to drop the dogma and move toward Human Risk Management.
The question is: Are you ready?
We’re obsessed with human behaviour. Measuring it. Understanding it. Influencing it.
We help people to make better choices.
Our software influences human behaviour to outsmart cyberthreats. We do this through science-backed interventions.
We believe in people. We believe in a safer digital future for all.