
The dogma of security awareness: Exposing cybersecurity’s biggest blind spot


4 March 2025

Dogma #1: “Humans are the weakest link.”

Dogma #2: “Security Awareness training = better behaviour”

Dogma #3: “If we can nail engagement, we’ll nail risk reduction.”

Dogma #4: “Security Awareness is *actually* about so much more than awareness.”

Dogma #5: “Security culture is the golden ticket to risk reduction.”

Dogma #6: “Good communication, messaging, training, and education = good security outcomes.”

Dogma #7: “Tracking metrics means lower risks.”

Dogma #8: “Phishing simulations are a great way to measure human security risk.”

Bonus round: Even more dogma examples

Heads up, this piece takes aim at “security awareness dogma”. You have been duly warned.

It exposes why, despite good intentions and evidence to the contrary, most security leaders and security professionals remain closed-minded to the reality of the human aspect of cybersecurity.

It challenges the assumptions that have held back the industry for decades, and advocates for a shift toward evidence-based, data-led Human Risk Management (HRM).

About the author

Oz Alashe MBE is the CEO and Founder at CybSafe, a behavioural science and data analytics company that builds cybersecurity software to better manage human risk.

A former UK Special Forces Lieutenant Colonel, Oz is focused on making society more secure by helping organisations address the human aspect of cybersecurity.

Oz has extensive experience and understanding in the areas of intelligence insight, complex human networks, and human cyber risk and resilience.

What seems to be the problem here?

In the security awareness space, dogma and myths are everywhere. And they’re holding the security community back.

No one wants to admit it. Let alone challenge it.

No one wants to slay the sacred cow. But it’s an ineffective comfort blanket, and it’s not backed up by meaningful data.

But what exactly do I mean by dogma?

By definition, dogma is a set of principles and ideas presented as absolute truth.

They often appear logical at first glance. But in most cases it’s bollocks: there’s little-to-no data to back these beliefs up. And when you dig into them, you realise that while they sound logical, they’re not necessarily true.

Nevertheless, they continue to shape security programs and influence professionals who don’t stop to question them.

It’s time we did.

“Humans are the weakest link.”

Some of the people who use this phrase are well-intentioned. In fact, some even use it to try to elevate the importance of people. They think they’re helping and being enlightened.

They might say:

  • “People are the weakest link”, or
  • “Users are the single biggest problem in cyber security”, or
  • “The largest problem sits between the chair and the computer.”

On the face of it, these are all ways of saying that cybersecurity is not complete unless it accounts for how people behave.

Which, of course, is true.

The issue is this: People being “the problem” isn’t a helpful way to express this situation. And it’s not accurate. It’s overly negative. It’s lazy. And it doesn’t really make sense.

Saying that “humans are the weakest link” in cybersecurity is a bit like saying the weakest link in a sports team is all the players. Besides, is there any other “link” of relevance? A mistake doesn’t make you “weak” in this context.

The person is part of the system, just like the technology, the processes, the policies. That system has multiple moving parts that act in lots of different ways and are under different constraints and pressures.

Good news: Academia really is doing an incredible job of debunking the “weakest link” view. There’s an enlightening study on human factors by Shari Pfleeger, Angela Sasse and Adrian Furnham, which goes some way to addressing this. It’s well worth reading.

The technology we put in place for our people must be usable. We must do more to create conditions and environments in which we make it easy to be secure. We need to design systems, technology, and processes that make doing the right thing the easiest thing.

And we need to engage our people. To help them understand how they can be secure without compromising their productivity. We need to stop overloading them and smashing their “compliance budget” when we know there is only so much people can absorb. Awareness ‘training’ simply isn’t enough on its own.

My prediction is that we’ll start to see fewer people using the “weakest link” language as more of us genuinely come to understand human behaviours and the impact they have on security.

The sooner we do that, the better.

Many people who talk about people being the weakest link simply haven’t stopped to think. They mean well. But there are far better ways to acknowledge the importance of the user.

The bottom line: It’s high time we retired this tired trope.

“Security Awareness training = better behaviour”

This one is everywhere, and it comes in many forms, like:

  • “The way to positively influence the human aspect of cybersecurity and user security behaviours is through training and education.”
  • “Great Security Awareness education and training is generally a good use of workforce time.”
  • “Education and engagement = better user behaviour (behaviour change) and reduced cyber risk.”
  • “If we deliver great/better Security Awareness content and training then people will engage, learn and apply their knowledge, which will result in improved security behaviour.”
  • “Education and engagement automatically lead to reduced cyber risk. They are a good thing irrespective of whether they actually change the security behaviours that matter.”

These statements are just nonsense. Let me explain.

There is no evidence that more training equals better security behaviours. In fact, there is evidence to suggest that we can train people too much and create an over-reliance on training.

Think about it: there are doctors who smoke. “Knowledge in, behaviour change out” is just not how behaviour change works.

So, this means that we need to stop and think: What do we actually want?

Do we want people who know and understand more? 

Or… do we actually want behaviour change, and simply believe awareness is the route there?

If the goal is behaviour change, we need to look beyond education. Because the truth is, you can change behaviour without increasing knowledge.

And when it comes to cybersecurity, behaviour change—not awareness—is what reduces risk.

“If we can nail engagement, we’ll nail risk reduction.”

Stop me if you’ve heard this one before:

  • “Education and engagement = better user behaviour (behaviour change) and reduced cyber risk.”

Like the last set of myths, this is a pervasive one. There are countless cybersecurity vendors selling interactive “fun” training, competitions, and games based around security awareness.

It’s hardly surprising, therefore, that we have whole security teams who genuinely believe that if they could just increase “employee engagement in security” they would see better security behaviours being exhibited.

But again, just like the last example, there is no scientific evidence to back this up.

(Plus, WTF does “engagement” mean anyway? And how many professionals out there are actually attempting to measure it?)

“Security Awareness is *actually* about so much more than awareness.”

  • “Security Awareness means so much more than just awareness. Effective Security Awareness improves more than awareness. It improves the whole culture.”

So, we know that lots of people out there will try to tell you that security awareness is about more than “awareness”. They may throw around the word “culture”. They’ll probably conflate knowledge and behaviour change.

What’s happening here is an unhelpful blurring of lines, and mixing an action with a desired outcome. It’s extra-strength wishful thinking.

Spend a few minutes reading about behavioural science and you won’t be able to unsee it: behaviour change requires so much more than awareness.

If we’re being honest with ourselves, for as long as we term it “awareness”, the human aspect of cybersecurity will forever be constrained to training, education, and communications.

This simply isn’t enough to influence human behaviour. Therefore, it’s not enough to impact cybersecurity risk.

“Security culture is the golden ticket to risk reduction.”

Few phrases get thrown around as freely as “security culture”.

But most people who use the term aren’t clear on its true meaning. Ask a bunch of professionals and you’ll get a range of answers like:

• “It’s everything to do with the human aspect of cybersecurity.”
• “It’s just behaviour.”
• “A good security culture is when there’s high engagement and knowledge.”

In other words, the phrase means all things to all people. And people love to talk about it, regardless of whether their security culture is making much of a difference to actual human behaviour.

What is security culture? Well, it’s the shared values, attitudes, perceptions and beliefs that shape what happens (from a security perspective) in an organisation or group.

When we fail to define it, we end up with another vague, feel-good term that won’t translate into real risk reduction.

“Good communication, messaging, training, and education = good security outcomes.”

When it comes to the human aspect of cybersecurity, most organisations’ desired security outcomes rely on better security behaviours, as well as values, attitudes, and beliefs (culture) that support these outcomes.

Good communication, messaging, training, and education can’t deliver these things on their own. In fact, they often waste resources, take up time, and even get in the way.

But this type of security awareness dogma often blinds organisations to more effective alternatives, like:

• Persuasion: Attempts to change attitudes, behaviours, or both (without using coercion or deception)
• Behaviour nudges: Subtle attempts to influence behaviours and choices without restricting options or changing economic incentives
• Reminders: Direct prompts or alerts designed to jog memory
• Goal setting: Planned steps to reach a target
• Incentivization: Creating an expectation of reward
• Coercion: Creating an expectation of punishment or cost
• Restriction: Using rules or controls to reduce the opportunity to engage in the target behaviour
• Environmental restructuring: Changing the physical or social context
• Modeling: Providing an example for people to aspire to or imitate
• Enablement: Increasing means / reducing barriers to increase capability or opportunity

These are all techniques and tools that can be used to reduce human cyber risk and impact the human aspect of cybersecurity WITHOUT an over-reliance on training, education, and communications. And they are often more scientific, measurable and data-driven.

Bottom line: There is far more to managing and reducing human cyber risk than training, educating, and raising awareness!
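To make one of these alternatives concrete, here is a minimal sketch (in Python) of how a data-triggered reminder might work: a short, targeted prompt fires only when a risky behaviour is actually observed, with a cool-off period so people aren’t nagged. The event structure, behaviour labels and function names are hypothetical, for illustration only; they aren’t taken from any particular product.

```python
# Hypothetical sketch of a data-triggered "reminder" intervention.
# Names and behaviour labels are illustrative, not a real product API.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BehaviourEvent:
    user: str
    behaviour: str          # e.g. "mfa_disabled", "reused_password"
    observed_at: datetime

RISKY_BEHAVIOURS = {"mfa_disabled", "reused_password", "unapproved_file_share"}
COOL_OFF = timedelta(days=7)        # at most one prompt per person per week

last_reminded: dict[str, datetime] = {}

def maybe_send_reminder(event: BehaviourEvent) -> bool:
    """Send a short, targeted prompt only when a risky behaviour is observed
    and the person hasn't been prompted recently."""
    if event.behaviour not in RISKY_BEHAVIOURS:
        return False
    last = last_reminded.get(event.user)
    if last and event.observed_at - last < COOL_OFF:
        return False                # respect the cool-off: don't nag
    # In a real system this would be an in-app prompt, chat message or email.
    print(f"Reminder for {event.user}: quick nudge about '{event.behaviour}'.")
    last_reminded[event.user] = event.observed_at
    return True

# Example: fires once, then stays quiet for a week for the same person
maybe_send_reminder(BehaviourEvent("alice", "mfa_disabled", datetime.now()))
maybe_send_reminder(BehaviourEvent("alice", "reused_password", datetime.now()))
```

Because the trigger is an observed behaviour, and the outcome (did the behaviour recur?) can be measured, an intervention like this is far easier to evaluate than an annual training module.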

“Tracking metrics means lower risks.”

• “Security Awareness education, training and simulated phishing adequately address security behaviours and mitigate risk.”
• “Training completion, engagement and simulated click/report rates are an appropriate way to measure effectiveness or impact on human risk.”

Many organisations mistakenly believe that Security Awareness metrics (like training completion rates, engagement statistics, and simulated phishing click/report rates) directly correlate to behaviour change.

The truth? These are just surface-level indicators that fail to capture real security behaviours.

These are tick-box metrics. They tell us whether someone has clicked something. They don’t really tell us whether the full gamut of human risk has improved.

“Phishing simulations are a great way to measure human security risk.”

No, they are not. They simply measure one or two simulated behaviours: people clicking, and people reporting.

There is so much more to security behaviour than clicking on and reporting simulated phishing emails. Yet most security professionals focus only on phishing, and genuinely believe that if they are addressing phishing, they are doing enough to influence behaviour. This is wrong.

This is why SebDB was created.

The Security Behaviour Database (SebDB) is the world’s most comprehensive database of security behaviours. It’s an ever-growing digital compendium of the security behaviours known to reduce human cyber risk. It’s also an open-source research initiative driven and managed by CybSafe.

It’s freely accessible. It’s open to everyone.

(I know, it’s almost like everyone at CybSafe is unashamedly obsessed with the scientific evidence base for everything in this space, right? 😏)

Even more dogma examples

Welcome to the bonus round, because I couldn’t end this without including some classic, persistent myths not yet covered, like:

• “By improving simulated phishing click and report rates, we have had an adequate impact on the security behaviours that matter most to our organisation’s risk.”

Think about it: how is this possible when phishing is just one aspect of security, not a holistic measure of human risk?

• “Phishing simulations, Security Awareness training and corporate security messaging are all that needs to be done to adequately address the human aspect of cyber risk.”

Real risk management needs a mix of behavioural interventions and system design.

• “Measuring ‘skills and knowledge’ is the same as measuring ‘behaviour’.”

Knowledge doesn’t necessarily translate into action. Remember those smoking doctors?

• “Any message, prompt or notification is a ‘nudge’.”

True nudges are scientifically designed to influence behaviour, not just pop up with a reminder to “USE STRONG PASSWORDS!!!”

These myths reinforce outdated ideas and beliefs.

These myths keep organisations stuck in ineffective Security Awareness programs.

These myths have as much relevance today as floppy disks and fax machines.

Break free from security awareness dogma, starting right now

This dogma and these myths have a grip on the minds of many security leaders and professionals. They influence where leaders put their time, money, and attention.

Yes, some of us are shaking ourselves free of the dogma. But many still don’t realise (or don’t want to admit) that there is very little evidence or data to support any of the statements in the examples above.

Here’s the kicker: Almost everyone reading this post will have believed one or more of these statements at some point. Maybe you still do. That’s okay. Security Awareness dogma is strong. Its grip is tight.

But it’s 2025.

Some of the things that security professionals consider hard facts are utter nonsense. There simply isn’t any evidence for them, no matter how positive or logical they sound.

And the truth is, when most of us genuinely stop to think, deep down we know we’re buying into the dogma.

But we shouldn’t.

What’s more, if we don’t stop, we will keep doing the same old things over and over again.

And, as the saying often attributed to Einstein goes, the definition of insanity is doing the same thing over and over again and expecting different results.

The human aspect of cybersecurity demands a lot more substance, evidence and data.

Data, science, and evidence should always trump dogma. The first step? Question what you think you know. Certainty is the enemy of progress. It starts with opening your mind.

How can you tell if you’re stuck in security awareness dogma?

Put simply, you’re probably stuck in the grip of security awareness dogma if you:

1. Aren’t measuring security behaviours beyond simulated phishing clicks and reports.
2. Spend most of your time designing, deploying, and managing Security Awareness training and simulated phishing campaigns.
3. Don’t know whether you are influencing the specific security behaviours associated with the risk outcomes most relevant to your organisation.
4. Haven’t considered or evaluated the plethora of human risk interventions beyond education, training and raising awareness.
5. Haven’t automated real-time HRM interventions or made use of the available integrations within your existing tech stack.
6. Have no way of reporting on workforce-related cyber risk beyond SAT and phishing compliance metrics.
7. Aren’t measuring the effectiveness (or ineffectiveness) of your interventions, activities or initiatives.

So, what to do instead of buying into security awareness dogma?

I advocate for a whole new approach, based on the effective application of:

1. Scientific, evidence-based behavioural influence practices
2. Automated, real-time interventions (helping people at the right time, rather than just giving them more training to do!)
3. Adaptive technology controls (e.g. adjusting a user’s access privileges based on a continuously calculated risk behaviour score, or adjusting system or data access for specific users based on other pre-determined HRM criteria)
4. Data & analytics

This combination enables teams to monitor and minimize the likelihood and impact of user-related cybersecurity incidents.

Some people refer to this as Human Risk Management (HRM).

HRM is about:

• Detecting and measuring human security behaviours, and quantifying human risk.
• Identifying and influencing risky behaviours, and instilling a positive security culture.
• Moving beyond content and experiential learning.
• Measuring human risk factors through third-party data integrations.
• Delivering targeted behavioural, policy-based, and contextual interventions.
• Demonstrating impact on cyber risk reduction.

Interest piqued? Read more about HRM here.

A more progressive approach for today and the future

We’re witnessing advancements at an unprecedented pace.

Thanks to leaps forward in behavioural science, automation, and data analytics (driven by research from companies like CybSafe), there is a whole new world of possibility when it comes to the effective management of human risk.

Examples include:

• Adjusting a user’s access privileges based on a continuously calculated risk behaviour score
• Adjusting system or data access for specific users, based on other pre-determined HRM criteria
• Delivering security training and education (including simulated phishing) to specific users or groups
• Delivering targeted security communications to specific users or groups
• Delivering other targeted behaviour interventions to specific users or groups
• Notifying security professionals or teams about HRM events or incidents
• Triggering ‘consequence management’ procedures for specific users based on security behaviour
• Informing line managers or leaders about important HRM issues based on user activity or behaviour
It’s never been easier to drop the dogma and move toward Human Risk Management.

The question is: Are you ready?

We are CybSafe

We’re obsessed with human behaviour. Measuring it. Understanding it. Influencing it.

We help people to make better choices.

Our software influences human behaviour to outsmart cyberthreats. We do this through science-backed interventions.

We believe in people. We believe in a safer digital future for all.


Do one more thing right today. Subscribe to the Behave newsletter.
