
Humans, the weakest link in cyber security


...apart from all the problematic tech, of course.

Please note, I originally wrote this article for @cyberscanner.

Combining my experiences studying psychology and sociology with all the talks and events I have attended, here are my thoughts about cyber security and its unfortunate habit of blaming users.

The blame culture

Psychology 101 tells us it’s much easier to blame someone else than it is to own up to our mistakes. But what happens when a whole industry engages in the same blame rhetoric?

There are many security nuances that seem obvious and simple to the trained eye. But to those unfamiliar with the product or service or, god forbid, technology, it certainly isn’t always obvious or simple, especially when help isn’t readily available.

And this blame culture the industry so often perpetuates does nothing but infuriate the user, reduce engagement with good security ‘housekeeping’ and decrease user trust in products and services.

Ever been called stupid and thought, “you know what, you are right! If only I had followed your instructions more meticulously I wouldn’t be in this situation”?

No, didn’t think so.

Personally, if I know I’m being made fun of for not being able to do something properly and I don’t feel a product or service is tailored to my needs, I’m not only going to withdraw but I’m going to assume you’re ignorant.

Instead of blaming users, as an industry, we should be testing our products and services with the most diverse sample population possible, empowering our users with easy-to-access support, and most importantly educating the wider population, especially as technology permeates our lives more and more.

A reluctance to update

I don’t know about you, but the number of my friends and family who refuse to update their phone or computer is astonishing, especially considering many of them are entirely unaware that their reluctance to update leaves them at risk of a security breach. I mean, I only became aware of this fact very recently.

I think this reluctance can broadly be broken down into two main reasons:

1. “Stop changing the things I like”

How many times have you updated your phone only to find your little micro-universe had entirely changed? What’s this ugly new interface? Where’s the mail app gone? What the hell is “friends”?

I’m here telling everybody I know they must update when prompted, but at the same time I totally understand why they don’t. It’s truly baffling when companies seek feedback from their users and totally ignore it.

Why do so many companies change everything we users love about their products and force us to ‘accept’ all that we complain about?

To me, this comes down to organisational reluctance to test services and products with a diverse sample in their (users’) real-world environments. The confirmation bias in many of the industry’s leading companies is real.

And by confirmation bias, I mean when companies seek out, or only pay attention to, information that aligns with their beliefs or confirms their assumptions.

It’s so important to pay attention to findings that disprove your hypothesis; they could prove much more useful than those that confirm what you already know.

2. “I had no idea updates had anything to do with security!?”

Simple.

Why is the most important information always in fine print?

As I’ve previously mentioned, the average Joe has no idea that updates are crucial to avoiding security breaches. Why is this the case? Lemme tell you one thing: it certainly isn’t the user’s fault.

This information should be presented clearly, in accessible language, and it should be the first thing the user reads when the lil update notification pops up.

Still having trouble conveying this information to your users? HINT: test with a more diverse population sample.

In light of the recent Cambridge Analytica scandal, this becomes even more apparent. We know that “no systems were infiltrated and no passwords or sensitive pieces of information were stolen or hacked”.

Technically, the information collected wasn’t ever private; users just didn’t know, or they were not too bothered.

So, we are faced with a dual dilemma here: how do we get users to care more about their data? And how do we hold companies like Facebook accountable for ensuring their data policies are transparent?

Why aren’t we testing with real users? Where’s the user research?

So, yes, as a user researcher I may be slightly biased here, but when a product or service keeps failing, it’s usually because said product or service isn’t actually that usable.

Security housekeeping is becoming more and more difficult to maintain. It’s certainly not just OAPs who have trouble remembering all those passwords, let alone mastering the art of two-factor authentication (2FA).

Biases aside, it’s not only imperative to test and research with users who fall victim to the most common causes of security breaches, it’s also important to test and research with so-called tech experts.

It’s always surprising to me that humans make the most errors when they feel safe and in control, and cyber security is no exception; the know-it-alls of cyber security could arguably be at the most risk.

With the evolving nature of the industry and with cyber criminals getting better and better at their job, the need to test more with a diverse sample population is ever apparent.

The impossible feat of minimising human error

Reducing human error may seem impossible and there will always be some degree of human error or robot error, depending on how you view the future. However, minimising human error is certainly possible. Take the following example...

You’re the CEO of a super cool start-up and one of your valued employees makes a mistake that causes a security breach of some degree. Do you:

A) Fire said employee; the security protocols are stated on the company intranet. Their fault for not complying, their mistake, their loss.

B) Keep said employee and use the security breach as an example of what can happen when good practice is not followed. Not only that, but if your employees are making mistakes, perhaps they need more training?

While I don’t think anyone would blame you for selecting option A, I would argue that this option actually leaves you worse off.

You learn nothing, your organisation learns nothing, your employees fear for their jobs more, stress increases, people feel overwhelmed, human error increases, more mistakes are made and hey presto, you’re back at square one and you still haven’t learned anything.

With option B, however, you’re not only giving your employee a second chance (I’m not saying you forgive every mistake here), but you also give your company a chance to learn from this mistake. You could even offer employees training, because we now know blaming the user is no use. To me, this approach fosters a more supportive working environment where people feel they are allowed room to grow.

I would argue that working in such an environment is much more conducive to feeling empowered and in control, rather than overwhelmed. It seems obvious to me that this is the environment where human error is least likely to occur.

What are your thoughts?

