If the words “cyber security” instantly trigger memories of hour-long awareness eLearning videos and annual multiple-choice quizzes, you’re not alone. Until recently, this was seen as an acceptable offering for a company’s cyber security education, despite numerous studies suggesting these methods are no longer (and perhaps never were) effective at achieving engagement and information retention.
However, the landscape of the workplace and the people in these environments is changing rapidly. As more sectors embrace digital transformation, they become prime targets for cyber threats. From ransomware attacks crippling power grids to phishing schemes compromising sensitive infrastructure, the stakes couldn't be higher. Yet only 5% of businesses employ a cyber security expert, meaning the responsibility often falls to staff members from other business areas.
So how do we equip working professionals with the knowledge to defend against cyber-attacks without disrupting their daily tasks?
To make cybersecurity education truly impactful, we need to embrace behavioural science and explore how people really learn and behave, especially when it comes to decision-making under pressure. Traditional training often relies too much on checklists, missing opportunities for real engagement. Tools such as eLearning videos are suitable when people "choose" to learn. But, like Health and Safety, security professionals are in a situation where the audience they need to engage with the most are often the most disinterested in learning.
By incorporating behavioural insights, we can design training that resonates with daily experiences and fosters lasting behaviour change. This approach not only makes training relevant but also equips individuals to manage cyber risks effectively, promoting a culture of security awareness and responsibility.
But before we get into the solution, let’s unpack the challenges of the modern-day workplace.
Remote working options have existed for years but have mostly been reserved for specific industries or smaller companies trying to reduce their overheads (e.g., office space). However, the outbreak of the COVID-19 pandemic in 2020 led to widespread adoption of remote and hybrid working models across various industries. An Office for National Statistics (ONS) report found that the proportion of people in the UK working remotely had surged from 5.7% in 2019 to over 37% at the height of the pandemic in 2020 – opening the doors to cyber threats previously unseen in office environments.
For instance, the comfort of working from home can lead to complacency. While individuals typically secure their devices and store them in an office, they may feel comfortable leaving their screen unlocked on their desks at home, increasing the potential for data leaks when they subsequently return to the office or work from a cafe.
Artificial Intelligence is increasingly becoming a source of concern for those with computer-based jobs, particularly as it is being leveraged in sophisticated social engineering techniques like phishing. This is reflected in the growing trend of cybercriminals using AI to craft convincing emails or even deepfake videos that imitate colleagues, making it difficult for individuals to differentiate between legitimate communications and malicious attempts to extract sensitive information. Advancements in AI, such as Large Language Models (LLMs), will undoubtedly make phishing emails more human-like and harder to detect. Ultimately, however, they draw on the same principle: manipulating people into bypassing their normal decision-making (e.g., clicking external links, handing over personal information, changing bank details without checking). This places the onus on recipients to recognise when their emotional and cognitive biases are being exploited, and to interrogate those situations.
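The manipulation cues described here can be made concrete with a minimal sketch. The keyword lists and scoring below are illustrative assumptions, not a production phishing detector; the point is that many social engineering attempts lean on a small set of urgency and pressure phrases that can be surfaced to the reader.

```python
# Illustrative sketch: count common urgency/pressure cues that phishing
# emails exploit. The cue lists are hypothetical examples chosen for
# illustration, not an exhaustive or validated set.

URGENCY_CUES = ["act now", "immediately", "within 24 hours", "your account will be"]
PRESSURE_CUES = ["click here", "verify your details", "update your bank details"]

def phishing_cue_score(body: str) -> int:
    """Return how many known pressure/urgency cues appear in an email body."""
    text = body.lower()
    return sum(cue in text for cue in URGENCY_CUES + PRESSURE_CUES)

email = "Your computer is at risk! Click here to install the latest security update."
print(phishing_cue_score(email))  # prints 1 ("click here" matches)
```

A real tool would weigh many more signals, but even this toy heuristic shows how bias-based cues can be flagged to a recipient at the moment of decision.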
In today’s fast-paced work environment, balancing productivity and security has become a significant challenge for employees. Many view security measures as obstacles, prompting them to bypass protocols for the sake of speed. This tension between high productivity and robust security is particularly pronounced in remote work settings, where quick task completion often takes precedence over following security guidelines, leading to increased vulnerabilities. A study by SME insurer Superscript found that 40% of employees believe upholding cybersecurity best practices is not their responsibility. Moreover, over half (53%) of UK employees feel overworked—citing factors like maximum capacity, being spread too thin, or anxiety over additional tasks, according to Censuswide data. Therefore, it should come as little surprise that many are reluctant to take on extra cybersecurity responsibilities.
Nudge theory came to prominence with the publication of “Nudge” by Richard H. Thaler and Cass R. Sunstein in 2008, which popularised ideas from behavioural economics. The authors advocated “soft, paternalistic nudges,” which help people make decisions that are in their best interests without limiting their choices.
Traditional approaches to changing behaviour have focused on ‘forcing people’. This can be quite direct and might require a determined effort on the part of the target to change their actions. We see this a lot in the world of cyber security, often using fear as a motivating factor. If done badly, it can result in resistance or people simply giving up.
The alternative tactic using nudges takes a gentler approach, where individuals more naturally go with the flow and make “the right” decisions independently.
Examples include making the secure option the default, or surfacing a timely prompt at the exact moment a risky action is being taken.
The context in which choices are made greatly influences decisions. Choice Architecture is the idea of configuring that context, “architecting it” to influence choices in the right way. It’s important to remember that this architecture exists and will influence decisions whether it is “architected” or not! Additionally, the cognitive biases, shortcuts, and heuristics that shape our decision-making are influenced by our environment. By understanding this interplay, we can better guide individuals toward making more informed choices in real-time.
Conversely, cybercriminals tend to exploit cognitive biases such as fear, urgency, and loss aversion to steer people toward decisions that will compromise their security. By shortening the decision-making window, cybercriminals can construct scenarios where individuals feel less empowered to interrogate the legitimacy of malicious actors.
Here is an example of a scenario where a frictionless, real-time nudge can counteract the cognitive biases being leveraged by cybercriminals:
Threat: An email supposedly from the company’s IT technician instructs all recipients to download a seemingly harmless software update, saying, "Your computer is at risk! Click here to install the latest security update." This turns out to be malware that originated from an actor outside of the company.
Nudge: A pop-up appears on the screen as the user’s cursor hovers above the installation link to alert them that the email has come from outside the company. This prompts them to take a moment to check the sender’s address and notice that the email address does not match the company convention, resulting in them reporting the email to their line manager.
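The nudge logic in this scenario can be sketched in a few lines. The company domain and message fields below are hypothetical examples; a real implementation would hook into the mail client, but the decision itself is simple: warn when a message containing a link comes from outside the organisation.

```python
# Minimal sketch of the real-time nudge described above: display a caution
# when an email with a clickable link originates outside the company domain.
# COMPANY_DOMAIN is an assumed convention for illustration.
from typing import Optional

COMPANY_DOMAIN = "example.com"

def nudge_message(sender: str, has_link: bool) -> Optional[str]:
    """Return a warning to show on hover, or None if no nudge is needed."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if has_link and domain != COMPANY_DOMAIN:
        return f"Caution: this email came from outside the company ({domain})."
    return None

# A look-alike domain ("examp1e.com") triggers the nudge:
print(nudge_message("it-support@examp1e.com", has_link=True))
```

The key design choice mirrors the article: the check is frictionless (nothing is blocked) and arrives at the moment of risk, prompting the user to slow down and inspect the sender rather than forbidding the action outright.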
Cyber security companies like ThinkCyber Security have deployed the nudge approach effectively. Their Redflags® solution uses clear on-screen messages to alert users at the moment of risk, highlighting opportunities to make safer decisions and counteracting the cognitive biases being exploited.
There’s a reason those annual cybersecurity training sessions often don’t resonate. It’s not just because you didn’t have your coffee or get a full night’s sleep. A study by tech giant IBM found that nearly 30% of skills are lost annually because they aren’t performed regularly or reinforced through training.
As it happens, there’s truth in the old adage that doing a little a lot of the time is better than doing a lot a little of the time. Drip-feeding cybersecurity content is more effective than annual training for fostering long-term behaviour change in the workplace. This method allows employees to gradually absorb and practise security skills, which enhances retention. Instead of overwhelming staff with a yearly flood of phishing tactics, regular, bite-sized lessons help them learn to identify and respond to current threats.
This approach gives concepts time to settle and skills a chance to be applied before introducing new topics, thus preventing cognitive overload. Additionally, intervals between training sessions enable employees to implement what they’ve learned, such as using password managers or creating stronger passwords, reinforcing their understanding through real-world applications. Regular updates also ensure that employees engage with the latest cybersecurity information and practices, which is vital in the fast-changing digital landscape.
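One simple way to operationalise the drip-feed idea is a schedule of expanding review intervals, in the spirit of spaced repetition. The starting gap and doubling factor below are assumptions chosen for illustration, not prescribed values from any particular training programme.

```python
# Illustrative drip-feed schedule: review sessions with gaps that expand
# over time (3, 6, 12, 24 days, ...) instead of one annual session.
# first_gap and factor are assumed parameters, not recommended values.
from datetime import date, timedelta

def review_dates(start: date, sessions: int, first_gap: int = 3, factor: int = 2):
    """Return a list of review dates with gaps doubling after each session."""
    dates = []
    gap = first_gap
    current = start
    for _ in range(sessions):
        current = current + timedelta(days=gap)
        dates.append(current)
        gap *= factor
    return dates

schedule = review_dates(date(2024, 11, 1), sessions=4)
# Sessions fall 3, 6, 12 and 24 days apart, giving each topic time to settle.
```

Spacing the sessions out like this bakes the article’s point into the schedule itself: each interval leaves room to apply the previous lesson before a new one arrives.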
Furthermore, insights from models like BJ Fogg's Behaviour Model or the EAST (Easy, Attractive, Social and Timely) framework illustrate how nudges can be used to make secure choices more intuitive. Just as cybercriminals exploit cognitive biases in their social engineering and phishing attacks—leveraging biases like social proof, reciprocity, and authority—organisations can use these same biases positively. By embedding these principles in training, employees are nudged towards security behaviours that feel natural and compelling, increasing the likelihood they’ll adopt and maintain safe practices.
Embracing a people-centric approach that understands how individuals learn is essential in today’s workplaces, especially as the compliance-based tick-box approach to cybersecurity training becomes increasingly outdated and inadequate. Instead, the priority should be to align education with the natural ways individuals process information and make decisions, through the adoption of behavioural science concepts like Nudge Theory and Choice Architecture. By fostering a more engaging and intuitive learning environment, organisations can not only enhance their cybersecurity posture but also empower employees to navigate the complexities of the digital age with confidence.
Tim Ward is CEO and Co-Founder of Think Cyber Security Ltd, an organisation applying behavioural science theory and evidence to driving secure behaviours with their Redflags® platform. He studied for an MBA with the Open University, holds a BSc in Computer Science with AI from University of Leeds, and a Post Graduate Diploma in Entrepreneurship from Cambridge University.
Tim has worked in IT for over 25 years with organisations including Logica, PA Consulting, Sepura and was previously Global Head of IT for the cyber division of BAE Systems (Detica).
Tim posts regularly on security awareness and behaviour change topics on LinkedIn and is proud to have helped ThinkCyber gain significant recognition for their Redflags® product: Teiss Awards for “Best Cyber Security Training & Awareness Product or Service” in 2024, NCSC for Startups Programme, Accenture Fintech Innovation Lab 2022; TechUK Cyber Innovator of the Year 2021; SC Award Europe (Best Professional Training Programme); and more!
November 2024