30 March 2026
11 things RSA 2026 told us about the human side of cybersecurity
RSA 2026 made one thing clear: the human risk problem is accelerating faster than our ability to patch it. Here are 11 takeaways from the floor.

I've just stepped off the floor at RSA 2026, and I want to share what I heard. Because, beyond the neon booth displays and keynote soundbites, the real story wasn't about the tech — it was about the breaking point of the human element.

Now, don't get me wrong. I know the human element of cybersecurity isn't a new topic. It's been talked about for years. But this year felt a little different. The conversations were more urgent and, in some cases, more honest than I've heard before. They shifted from theoretical risk to a sharp realization: the old playbook is dead.

Here's what stood out.

1. AI-powered attacks on people aren't coming. They've already moved in.

The most cited stat at RSA wasn't about infrastructure vulnerabilities or zero-days. It was about us: 83% of phishing attacks and 40% of business email compromise (BEC) attempts now involve AI-generated content (Kaseya, 2026).

That's not a warning about the future. It's a description of the present.

What's actually changed isn't just that attacks are more convincing. It's that they're adaptive. We've moved from campaigns to conversations. The old model was a criminal crafting a lure and firing it at scale. The new model is a continuous, interactive engagement across chat, voice, and social channels that adjusts in real time to how a person responds. While we're still defending against static emails, criminals are running interactive social engineering.

The WEF's Global Cybersecurity Outlook 2026 found that 87% of leaders now say AI-related vulnerabilities are the fastest-growing risk they face (WEF, 2026). That framing landed differently at RSA this year. It wasn't hypothetical concern. It was recognition.

2. "Trust the channel" is dead

Deepfakes moved from novelty to operational risk this year, and RSA reflected that shift. Voice and video impersonation are now standard tools for financial fraud and for bypassing access controls, and synthetic identities have gone mainstream.

The more interesting conversation, though, wasn't about the technology itself. It was about what it means for how we verify trust.

We've spent years training people to "check the sender" and "look for the padlock". Those traditional trust signals are now compromised. The question being asked at RSA — without a clean answer yet — is what replaces them.

The emerging answer is behavioral and workflow-based verification. Not "does this look like Dave?" but "does this behave like Dave, in a context where Dave would plausibly be acting this way?"

That's a fundamentally different model of trust. And most organizations haven't started building it.
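To make that concrete, here's a minimal sketch of what workflow-based verification could look like. Everything in it is hypothetical (the signals, the weights, the threshold); real inputs would come from IAM, device posture, calendars, and approved business processes.

```python
# Minimal sketch of behavioral / workflow-based verification.
# All signals and weights here are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class RequestContext:
    requester: str
    channel: str            # e.g. "email", "teams", "voice"
    action: str             # e.g. "wire_transfer", "share_file"
    hour_of_day: int        # 0-23, in the requester's local time
    device_known: bool      # request came from an enrolled device
    matches_workflow: bool  # action fits an approved business process

def verification_score(ctx: RequestContext) -> float:
    """Score how plausibly this request reflects the real person acting
    in a context where they'd plausibly be acting this way."""
    score = 0.0
    if ctx.device_known:
        score += 0.4
    if ctx.matches_workflow:
        score += 0.4
    if 8 <= ctx.hour_of_day <= 18:  # within typical working hours
        score += 0.2
    return score

def requires_step_up(ctx: RequestContext, threshold: float = 0.7) -> bool:
    """Below the threshold, force out-of-band verification rather than
    trusting the channel the request arrived on."""
    return verification_score(ctx) < threshold

# A flawless deepfake voice call still fails if the context doesn't fit:
call = RequestContext("dave", "voice", "wire_transfer",
                      hour_of_day=23, device_known=False,
                      matches_workflow=False)
assert requires_step_up(call)  # escalate, no matter how real it sounds
```

The point isn't the scoring logic, which is deliberately naive here. It's that the decision keys off behavior and context, not off how convincing the message, voice, or face happens to be.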

3. Agentic AI is inheriting our bad habits

The buzzword of RSA 2026 was "agentic". It was in every keynote, every booth, and every hallway conversation. But amid the excitement, one implication for human risk kept surfacing that hasn't received nearly enough attention.

The workforce is becoming hybrid. Humans and AI agents are working alongside each other, with agents increasingly taking on tasks, making decisions, and acting autonomously within enterprise environments. What's developing slowly, or not at all, is our understanding of the relationship between human behavior and agentic behavior.

Here's the problem: Agents are downstream of human behavior. They're built on models trained on human-generated data, designed by humans with particular assumptions, and deployed into workflows shaped by human habits. The biases, blind spots, and security-relevant behaviors we haven't yet addressed in our people are being quietly encoded into the systems we're now trusting to act on our behalf.

If an agent automates a procurement workflow, it often inherits the risk tolerance of its designer. If it handles comms, it often inherits the social engineering vulnerabilities of the humans it was modeled on.

If human behavior is the unsolved problem in cybersecurity, and agentic behavior is downstream of human behavior, then we haven't contained the problem. We've scaled it.
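To make the scaling argument concrete, here's a toy illustration. The numbers and the threshold are invented, but the shape of the problem isn't: a habit that's a contained risk in one person becomes systemic once an agent replays it at machine speed.

```python
# Toy illustration of an agent inheriting its designer's risk tolerance.
# The threshold is hypothetical; it stands in for any habitual shortcut
# a human approver might take.

HUMAN_AUTO_APPROVE_LIMIT = 5_000  # the designer's informal habit, in dollars

def agent_should_auto_approve(invoice_amount: float) -> bool:
    """The agent doesn't reason about risk; it replays the habit it was
    configured with, on every invoice it ever sees."""
    return invoice_amount < HUMAN_AUTO_APPROVE_LIMIT

# One person making this call a few times a day is a contained risk.
# An agent making it thousands of times a day is the same flaw, scaled:
invoices = [4_999.99] * 10_000  # each just under the inherited threshold
auto_approved = sum(agent_should_auto_approve(v) for v in invoices)
print(f"{auto_approved} invoices auto-approved with zero human review")
```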

4. Insider risk is real. And it just got a lot more crowded.

Insider risk programs are real, funded, and necessary. Nothing at RSA suggested otherwise.

But the conversations I heard made clear that the boundaries of what counts as an 'insider' are expanding faster than most programs can adapt. The line between internal and external has dissolved, and insider risk programs are struggling to keep pace with synthetic and non-malicious insiders.

The patterns showing up now go well beyond the malicious employee or the careless contractor. People are acting on AI-generated instructions that appear to come from trusted sources. Compromised accounts are behaving in ways that look indistinguishable from deliberate insider action. AI tools with delegated permissions are taking privileged actions with no human in the loop at all.

When an employee acts on AI-generated instructions that look like they came from the CEO, is that an insider threat or an external attack? When an autonomous agent with delegated permissions leaks data, who is responsible?

Malicious vs negligent. Internal vs external. Human actor vs automated system. These distinctions are blurring — and the risk sits in the gaps between them.

That's not an argument for dismantling insider risk programs. It is an argument for expanding what they cover. The organizations whose insider risk function only looks inward, at known employees with known intent, are going to miss a growing share of the threats that look exactly like insider incidents, but don't originate the way they expect.

5. Shadow AI is a human risk problem, not just an IT one

Unsanctioned AI tool use came up in almost every conversation about insider risk at RSA. Employees are bypassing sanctioned tools for faster, unapproved AI workarounds. This is rarely a middle finger to IT; it's a response to friction. The approved alternatives are often slower or less capable, so data is being fed into LLMs without governance, and sensitive information is leaking through tools that aren't on anyone's asset register.

This is routinely framed as an IT or procurement failure. It isn't. It's a behavioral one.

People make decisions about AI tools the same way they make decisions about every other security-relevant behavior — based on friction, incentives, norms, and the gap between what they're told and what they see modeled around them.

Locking down tool access addresses the symptom (temporarily). To address the cause, you have to understand the behavioral incentives that drive people to unsanctioned tools in the first place.

Until we treat Shadow AI as a human behavior problem, we're just playing whack-a-mole.

One emerging sub-theme worth watching: AI agents are increasingly being granted permissions and acting autonomously within enterprise environments. They're behaving like identities. Which means they need to be governed like identities — and that governance problem is as much behavioral as it is technical.
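As a sketch of what "govern it like an identity" could mean in practice, consider something like the following. The names, scopes, and structure are all hypothetical; a real implementation would sit on top of your existing IAM and secrets infrastructure, not an in-memory object.

```python
# Illustrative sketch: treating an AI agent as a first-class identity
# with an accountable human owner, least-privilege scopes, and an audit
# trail. All identifiers and scopes here are hypothetical.
from datetime import datetime, timezone

class AgentIdentity:
    def __init__(self, agent_id: str, owner: str, scopes: set[str]):
        self.agent_id = agent_id
        self.owner = owner    # the human accountable for the agent's actions
        self.scopes = scopes  # least-privilege grants, e.g. {"crm:read"}
        self.audit_log: list[tuple[str, str, bool]] = []

    def authorize(self, action: str) -> bool:
        """Check the action against granted scopes, and record every
        decision so the audit trail stays interpretable."""
        allowed = action in self.scopes
        self.audit_log.append(
            (datetime.now(timezone.utc).isoformat(), action, allowed)
        )
        return allowed

# An agent with read-only CRM access shouldn't be able to export data:
agent = AgentIdentity("quote-bot-01", owner="dave@example.com",
                      scopes={"crm:read"})
assert agent.authorize("crm:read")
assert not agent.authorize("crm:export")  # denied, and logged either way
```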

6. Collaboration tools are the new email

71% of security leaders now expect a material impact on their business from attacks via Slack, Teams, and Zoom (Mimecast, State of Human Risk 2026). RSA made it clear that this isn't hypothetical.

The reason these channels are valuable to attackers is the same reason they're valuable to organizations: high trust, low scrutiny, and real-time interaction. People question an unexpected email from a supplier. They don't question a Teams message from a colleague.

We've spent 20 years hardening email, but our collaboration channels are wide open. Attackers now pivot seamlessly: a phishing email, then a Teams conversation, then a voice call, then a file share, each step reinforcing the last, each in a different channel with different controls.

Why does this work? Because control maturity in these environments is years behind email. Phishing simulations, content filtering, DLP: all relatively well understood in email. But when it comes to collaboration tools, most organizations are starting from scratch.

7. The human attack surface is expanding faster than the tools to map it

Several sessions at RSA touched on this, though not always using this framing. The human attack surface — i.e. everything that can be used to influence or manipulate a person — has grown substantially. More identities, more channels, more data exposure, more AI-mediated interaction.

But the visibility into that surface hasn't kept pace. 96% of security leaders say they have incomplete protection against human risk (Mimecast, State of Human Risk 2026). Most of them know why: the signals exist (in email systems, IAM, endpoints, communication tools), but they're fragmented, and there's almost no mapping between those signals and actual human behavior or decision-making.
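To make the mapping problem concrete, here's a toy sketch of the very first step: normalizing signals from different silos onto a shared identity key, so behavior can be observed across systems rather than per tool. The sources and fields are invented; real signals live in email gateways, IAM, EDR, and collaboration platforms.

```python
# Toy sketch of signal fusion across silos. Every event source keys its
# data differently; the join onto one identity is where most programs
# stall. All fields below are hypothetical.
from collections import defaultdict

email_events = [{"user": "dave@example.com", "reported_phish": True}]
iam_events = [{"principal": "dave@example.com", "mfa_fatigue_prompts": 4}]
edr_events = [{"host_owner": "dave@example.com", "unsanctioned_tools": 2}]

def build_profiles() -> dict:
    """Normalize each source onto a shared identity key so one person's
    behavior is visible as a whole, not as three unrelated fragments."""
    profiles: dict = defaultdict(dict)
    for e in email_events:
        profiles[e["user"]]["reported_phish"] = e["reported_phish"]
    for e in iam_events:
        profiles[e["principal"]]["mfa_fatigue_prompts"] = e["mfa_fatigue_prompts"]
    for e in edr_events:
        profiles[e["host_owner"]]["unsanctioned_tools"] = e["unsanctioned_tools"]
    return profiles

print(build_profiles()["dave@example.com"])
# {'reported_phish': True, 'mfa_fatigue_prompts': 4, 'unsanctioned_tools': 2}
```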

We're still relying on proxy metrics: phishing click rates, training completion, self-reported data. These tell you almost nothing about how the people in your organization will actually perform under pressure.

The gap between data and true behavioral insight is the industry's biggest blind spot, and the most commercially significant problem in human risk right now. RSA made clear that the industry knows it — but hasn't solved it.

8. From annual training to real-time intervention

The language around traditional awareness training at RSA 2026 was noticeably different. It was more than skeptical. It was dismissive. Annual (even quarterly) training as the primary mechanism for changing human behavior barely got a serious defense.

What's replacing it is less settled, but the direction is consistent: real-time nudges, contextual interventions, and continuous monitoring that allows for targeted, timely responses rather than scheduled, generic ones.

This isn't about making training more engaging. It's a different model of how you change behavior at scale — grounded in what behavioral science has known for decades.

The future belongs to real-time intervention. People don't change because of a slide deck or a funny video; they change because the right prompt reaches them at the right moment, in the right context, with the right social reinforcement around it.
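As a rough sketch of what that could look like in code (the event types, messages, and back-off rule are all hypothetical):

```python
# Illustrative sketch of a real-time, contextual nudge. A real system
# would hook into email and collaboration telemetry; these event types
# and messages are invented for illustration.

RISKY_EVENT_NUDGES = {
    "external_file_share":
        "This file is going outside the org. Double-check the recipient?",
    "unverified_payment_request":
        "New payment details? Verify out-of-band before acting.",
}

def maybe_nudge(event_type: str, recent_nudges: int) -> str | None:
    """Deliver a targeted prompt at the moment of risk, and back off if
    the person has been nudged recently, to avoid alert fatigue."""
    if recent_nudges >= 3:
        return None  # suppressed: over-prompting teaches people to ignore prompts
    return RISKY_EVENT_NUDGES.get(event_type)

print(maybe_nudge("external_file_share", recent_nudges=0))
```

The back-off logic matters as much as the message: a nudge that fires constantly stops being a nudge and becomes noise.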

The organizations winning this battle are building those capabilities now and ditching annual compliance programs in favor of continuous, contextual support.

9. Human and machine identity are converging

This one was more speculative at RSA — fewer definitive claims, more questions. But it's worth paying attention to.

Machine identities now significantly outnumber human users in most enterprise environments, often by an order of magnitude. Identity is no longer primarily human. The boundaries between what a person does and what a system does on their behalf are blurring. AI agents act with delegated permissions. Automated workflows trigger on human signals. And the audit trail gets harder to interpret as a result.

The implication, which a handful of sessions started to surface, is that you can no longer model risk as 'user behavior' vs 'system behavior'. And you can no longer manage user behavior in a vacuum. We need frameworks that manage interactions across identity types — and frankly, those risk management frameworks don't really exist yet.

10. Security is failing people, not the other way around

There was a thread running through RSA that doesn't always make the official narrative, but I heard it consistently: people are overwhelmed. Not careless. Not apathetic. Overwhelmed.

More tools, more channels, more information, more change — all accelerating. Asking a cognitively overloaded employee to be your last line of defense isn't a security strategy; it's a design failure dressed up as a training problem.

A lot of what gets labeled human error is actually system error. When the insecure path is easier than the secure one, that's not a people problem. The organizations doing this well aren't putting more on their people. They're designing systems where security is built in — so people don't have to compensate for environments that weren't designed with them in mind.

We need to design for humans, not expect humans to be robots.

11. The uncomfortable truth the industry keeps circling

Across all of it, there was a recurring theme that nobody said cleanly but everyone seemed to feel: the human risk problem is accelerating faster than our ability to patch it.

AI is raising the capability of attackers against people. It's expanding the channels through which people can be manipulated. It's blurring the categories we've used to organize our defenses. And the dominant response — more training, more phishing simulations, more policy — isn't scaled to meet it. In fact, it's failing.

That's the shift RSA 2026 kept pointing toward: a move from security awareness to behavioral intelligence.

The organizations that will navigate this well won't be the ones adding more awareness videos, posters, or escape rooms. They'll be the ones who invest in understanding, measuring, modeling, and influencing human behavior, based on hard evidence, not assumption.

At CybSafe, that's the problem we've built our platform around — using behavioral science and real measurement to give security teams the visibility and evidence they need to actually move the needle on human risk. If what you've read here resonates, ping us — we'd be glad to show you how we approach it.