The human is not the weakest link – the human is the solution!


Security at the intersection of physical and digital worlds

By Jelle Niemantsverdriet | 8 April 2019

How we are currently dealing with the human element in cyber security

Cyber security guru Bruce Schneier once stated: “The user's going to pick dancing pigs over security every time”. And we have heard similar expressions about how people deal with security: PEBCAK (Problem Exists Between Chair and Keyboard), “people are the weakest link”, or the notion that we “need a patch for human stupidity”.

I think this is a terrible element of our industry: a condescending attitude that suggests we as security professionals have taken care of everything, and that it’s only because of these ‘idiots’ (our colleagues, our customers – the people who pay our salaries) that our organisations fall victim to cybercriminals.

And you see this attitude reflected in how we deal with these people: we seem to think that the way to raise awareness is to tell people, then tell them again, tell them even louder and tell them in brighter colours and larger fonts.

But the underlying problem is that we don’t seem to understand how people think and how work works. Real work is actually performed using many shortcuts and deviations from the original design – something that we as security professionals can seemingly only respond to by throwing more advice and procedures into the mix.

As an example: “be cautious when you unexpectedly receive e-mail from people outside of your organisation, do not click on any links in those messages and especially do not open any attachments.” That’s more or less the content of every phishing awareness training under the sun – and it’s also the exact job description of your recruitment team. We don’t seem to understand (or don’t want to understand) how normal work works, and instead we offload quite a few of our problems onto the people in our organisations.

How to improve our approach towards people

So what should we do? Should we focus more on improving knowledge about cyber security? I don’t think people need more security awareness – it’s the other way around: security needs more people awareness. And if humans are the weakest link, it’s the humans who work in security. We need to be smarter in how we incorporate the human element in everything we do.

Let me again challenge this ‘weakest link’ notion by stating that it’s the same behaviour that leads to both failure and success. In other words, most of the behaviour we deem problematic in hindsight is also the behaviour that makes our organisations perform effectively in the first place. There is no conscious decision moment along the lines of: “OK, today is a good day to get hacked, let’s deviate from the rules and do something stupid”.

Research in a healthcare organisation – where 1 in every 13 interactions resulted in an incident of some form – concluded that issues like miscommunication, dose miscalculations, errors in operating a piece of technology, or workarounds were present whenever there was such an incident. It was very tempting to point to this evidence of non-compliance as the underlying cause; however, further research into the 12 other interactions revealed that those exact same deviations occurred when the outcome was more positive.

In other words: whenever there is an incident, it’s too easy to retrace all the steps until you find a person pushing a button and then conclude that you have found the culprit. You have only found a symptom, not the underlying problem – and worse, you are blaming the victim.
What makes systems perform well is not the absence of such ‘violations’, but the presence of resilience in dealing with unexpected situations.

Understand the complex systems we work in to design more effective approaches

The environments we work in are prime examples of so-called ‘complex systems’, which means they show emergent, non-linear behaviour. In other words, a small change can have a large effect, or a large change can be dampened – and even the same change, repeated twice, could lead to completely different outcomes. This means you cannot determine and predict the workings of the system by analysing the individual components: the behaviour of the system as a whole emerges from the interactions between those components.

Yet this is what we do all the time – we analyse the individual components (penetration testing anyone…?), put them in line and then expect that everything will be great. The way in which we manage teams and systems is still very much based on the first management insights from the industrial age, where the aim was to try and control the individual components and people via instructions, training and processes.

So how do we incorporate this insight, if we cannot add more signs and instructions? Should we look for ways to turn down the control and really trust the people ‘on the sharp end’ to make the right judgment? Should we build in subtle nudges that convince people to behave more responsibly? Can we think of ways to make parts of security even a bit funny and entertaining? The only way to find the answers to these questions is by starting to experiment, while in the meantime broadening and deepening our own insights – for example by incorporating lessons from marketing, psychology and behavioural economics.

Let’s look at other disciplines to change our view of incidents

If we look at other disciplines, we should also consider the safety field, particularly in areas like aviation and healthcare. Traditionally this field has been dominated by ‘aiming for zero incidents’ campaigns, largely built upon Herbert William Heinrich’s 1931 research, which states that 88 percent of accidents are caused by ‘unsafe acts of persons’. He furthermore created what became known as Heinrich’s pyramid, which graphically describes that in a group of 330 accidents, 300 will lead to no injuries, 29 will result in a minor injury and 1 will result in a major injury. This research has been used to frantically go after the minor incidents, based on the notion that if you reduce the size of the base of the pyramid (the no-injury and minor-injury incidents) you will automatically reduce the number of major incidents.
Follow-up research, however, contested this long-standing ‘truth’. Somewhat counter-intuitively, researchers in Finland found that targeting zero incidents actually increases serious injuries.

This can be explained if you accept the notion that safety is not an outcome – it is a capacity: a capacity to take on high-risk work in a sensible way. Aiming for zero incidents reduces operational knowledge and gives rise to competing incentives (should you report that sprained ankle and sacrifice six months of ‘incident-free operations’?).

So in our work, too, we should embrace incidents and learn from them – either by using great publicly available resources like Verizon’s Data Breach Investigations Report, or even by creating them ourselves using concepts such as Netflix’ Chaos Engineering. Gaining more insight into what happens when minor things go wrong will ultimately reduce the risk of more serious incidents.

Tying it all together – how to make this work in practice

Let’s first of all see if there is some evidence for another approach. An Australian supermarket chain, realising its safety performance had reached a plateau, ran an experiment in which it divided a number of stores into three groups. One group did not change anything related to safety and compliance. The second group removed all safety rules except for the ones mandated by law, and the third group did the same and also received additional training on the underlying concept of the ‘New View’ in safety.
The results: the second and third groups far outperformed the group that did not change – not only in safety numbers, but also in terms of local ownership, financial performance and employee satisfaction.

I think there is a great opportunity here – but we first need to change. We have to shift our paradigm: people – our users – are not the enemy. We have to start thinking about how they do their work.

This requires a different mindset in our teams: far more focused on people, but also aiming for a ‘done is better than perfect’ ambition, where we create small-scale experiments instead of aiming for a technically perfect but unusable solution. Again: this is a change – all of a sudden we cannot focus on just what interests us, but on what really affects people at scale.

In doing this, let’s not try to solve problems with more compliance and rules – instead, trust your employees to do what’s right (and empower them to do so).

So remember who we are doing this for. We are not in the business of protecting systems or networks – we are ultimately in the business of people protecting other people. This means we do not need to prevent every small incident, but instead we need to create organisations that can fail but fail gracefully and recover quickly.

I think the most important element here is to just start trying – we cannot afford to wait, especially with the merging of physical and digital worlds that we see happening all around us. In this landscape of merging worlds, let’s make sure security is right there at the intersection – ready to truly understand and serve people.

This article was written for the sixth European Cyber Security Perspectives report, a KPN initiative. The European Cyber Security Perspectives is a partnership of the Dutch National Police, the Dutch National Cyber Security Centre and a number of companies including Deloitte. Read the full report.


i. The original remark is by Felten/McGraw; Schneier popularised the adaptation.
ii. The Safety Anarchist, Sidney Dekker (2017), Chapter 6.
iii. See the Cynefin framework, which describes the differences between the various types of systems.
iv. Industrial Accident Prevention, H.W. Heinrich et al. (1980).
v. Saloniemi, A. and Oksanen, H. (1998). Accidents and Fatal Accidents – Some Paradoxes. Safety Science, 29, 59–66.
