Cyber Security: On the Risk-Blindness of Users
December 1, 2014 by Bill Rosenthal

The prescription drug commercials on television always cite the product's possible side effects in chilling detail. The narrator might even intone, "serious problems, even death, have been reported." And yet, how many of us actually take note of those warnings?

Amid the beautifully staged images of couples having fun together, people walking dogs, and people playing with grandchildren, it’s almost impossible to focus on a medicine’s drawbacks. This is especially true when there is a parallel message that a product can restore your health – let you walk your dog, play with your grandchildren, or go on dates when you thought you were too sick to do those things. The advertisers may be required to tell us about the half dozen ways in which their drug could kill us, but they count on our propensity to ignore those warnings. Otherwise, they wouldn’t sell nearly as much product.

A research report published a couple of weeks ago in the Journal of the Association of Information Systems found a similar effect when it comes to using computers. (The link is to an abstract; the article is behind a paywall.)

There has been considerable research into user perceptions of security risk, but almost all of it relies on user self-reporting. Researchers at Brigham Young University went a step further and hooked their experimental subjects up to an EEG machine. As you might expect, they found the EEG readings were a better predictor of user behavior than what users said about their own perceptions of risk. In fact, the article shows there's no correlation between what people say they believe about risk and how they act when they are engaged in computing tasks. This changes, however, if they fall victim to a security threat. Then they become more careful.

Here’s how they studied it. After the researchers tested the users’ risk perceptions, the users worked at a particular website, viewing images of Batman and noting whether they thought the images were animated or photographic. As the users worked, they would receive web browser security warnings, such as one patterned on Google Chrome’s: “This is probably not the site you are looking for!” followed by a brief explanation and two buttons: “No, don’t proceed” and “Yes, proceed anyway.” The researchers measured how often users clicked each button and compared the actions to their risk perceptions. Then, halfway through the task, the system interrupted the users without warning to display an authentic-looking message that their laptop had been hacked and was about to be destroyed. Then the researchers measured the users’ risk perceptions again.

The researchers were looking at something specific: how users’ perception of risk affected their behavior and whether that perception was influenced by being victimized. But what got my attention was their discovery that security warnings are ineffective for an appreciable number of users. Even if the users who disregarded the warnings were a small minority, it takes only one lapse in user judgment to compromise an entire system.

I am speculating, but I think this may be less a problem of risk perception than one of focus. Most people, given a task, want to complete it. When you interrupt their work with a security warning but give them the option to disregard it, at least some of them will dismiss the warning so they can get back to work.

The BYU researchers also discovered the value of training to correct this problem. The “you’ve been hacked” message they presented to the users was, in effect, an excellent training simulation, and it changed user perceptions dramatically.

Whether or not you decide to use such a simulation in training your users, security training and certification are vital to the viability of your business. You probably didn’t need a research report to tell you that.