Artificial intelligence. It’s everyone’s favorite dinner party topic, sure. But what’s actually going on out there? Who’s actually using it? What are they doing with it? What do they think about it—and about the companies that design it? Ultimately, what does all of this mean for security risk?
We had more questions than a toddler in a toy store. So, as part of the Oh, Behave! – The Annual Cybersecurity Attitudes and Behaviors Report 2024 – 2025, we went deep on the fascinating relationship between people and GenAI for the first time.
But…let’s zoom out for a moment: Oh, Behave! offers a global snapshot of cybersecurity attitudes and behaviors. It’s a joint effort between CybSafe and the NCA, and it’s now in its fourth year (and going from strength to strength, if we do say so ourselves).
But anyway, back to GenAI. The results are both illuminating and perplexing. There’s no time to waste—let’s get right to it.
People are scared of GenAI…AND people use it all. the. time.
We can’t ignore this paradox at the heart of the GenAI story: widespread fear coexists with keen adoption. While 56% told us they don’t use any GenAI tools, those who said they do are frequent users.
What’s behind these fears? For one thing, people told us they are concerned about job displacement, privacy erosion, and the potential for malicious use.
GenAI tool usage is highest amongst the younger generations. 72% of Gen Z participants use it at work, home, or both, followed by 62% of Millennials, 38% of Gen X, 15% of Baby Boomers, and just 7% of the Silent Generation.
There’s a concerning GenAI security shortfall
So, what’s occurring at the intersection of GenAI and security risk? Naturally, this is something we wanted to explore in depth. Given its potential as a tool for both cyber attackers and defenders, it’s no surprise GenAI sits firmly on the cybersecurity map.
Brace yourself for this one: A staggering 38% of employed participants admitted to sharing sensitive work information with GenAI without their employer’s knowledge.
And yet, 65% of participants expressed concern about GenAI-related cybercrime. Given how GenAI can be used to create highly targeted and personalized attacks, this fear isn’t unfounded.
But the research makes it clear that protective measures are failing to keep up. For instance, over half of employed participants have yet to receive any training on how to use GenAI safely. The figures left us in no doubt: there is an urgent need for improved AI security strategies.
There’s a giant GenAI trust deficit
Most people agree trust is the cornerstone of any relationship—and that includes the one between people and GenAI. Our research reveals a profound trust deficit in GenAI, and it’s particularly pronounced among the younger generations (yes, the very same people more likely to use GenAI in the first place).
What’s behind this lack of trust? The data show it’s rooted in concerns about data privacy, algorithmic bias, and the perceived lack of transparency in GenAI development. The picture shifts when it comes to trust in GenAI companies: the older you are, the less likely you are to trust them to do the right thing.
People are worried about GenAI stealing their jobs, too. 41% of employed participants are concerned about AI affecting their employment status, while 45% are worried about changes to the nature of their work.
And it’s true: AI-driven automation and augmentation are reshaping job roles and industries. While there’s apprehension about job displacement, new opportunities will also emerge—but only if we equip the workforce with the skills needed to thrive in a GenAI-driven economy.
Ultimately, GenAI is a human story
What was once a concept confined to quaint 1950s science fiction is now a key part of our day-to-day lives. The awe over GenAI is understandable—but GenAI didn’t fall from the sky like an alien seed pod. It’s a human-made tool shaped and influenced by our values, decisions, and behaviors.
This is a pivotal moment. To ensure GenAI is a force for good, we must collaborate to establish ethical guardrails, improve digital literacy, and build deeper trust.
It starts with facing the facts. Oh, Behave! provides unmissable insights into people’s cybersecurity behaviors and attitudes, along with data-driven recommendations for businesses, policymakers, and individuals looking to manage human risk and strengthen their cybersecurity.
Download the full Oh, Behave! report to delve deeper into these findings and discover actionable insights for your organization.