In late 2014, in the aftermath of the Sony Pictures Entertainment breach, I advocated the development of a cyber breach reporting portal where individuals could report suspected cyber incidents. Such a system, I argued, would work as an early warning system, alerting IT to an attack before it became widespread; it would also serve as a centralized system for remediation, where affected victims could seek help.

Since then, many organizations around the world have developed such portals for their employees to report suspected breaches. These range from web reporting forms and email inboxes to 24-hour help desks where employees can find remedial support.

While there is little direct research on how well these portals work, the available reports point to rather low utilization rates. For instance, Verizon's 2018 Data Breach Investigations Report (DBIR) found that among 10,000 employees across different organizations who were subjected to various test phishing campaigns, fewer than 17% reported the phishing attempts to IT. My own experience advising firms on user vulnerability reduction initiatives has revealed similarly low reporting rates.

To counter this, many CSOs have resorted to incentives and punishments to improve employee reporting of suspect emails and cyber activities. But the question, one I am often asked when advising organizations on IT security, is: which of these actually works?

First, let's begin with punishments. We know from a century of research on human motivation that punishments tend to be salient but not necessarily effective at motivating people in the right way. In other words, people remember threats, but the threats do little to improve performance, especially when the task at hand requires mental effort.

For instance, when Admiral Rogers, the former head of the NSA, famously remarked that individuals who fall for a phishing test should be court-martialed, the comment certainly got noticed and was widely reported. But such threats lead to fear, anxiety, and worry, not more thoughtful action. This is precisely why phishing emails themselves contain warnings and threats: when people focus on the threats, they end up ignoring the rest of the information in the email that could reveal the deception.

In surveys I have conducted in organizations that use punishments to foster reporting, the vast majority of users reported changing how they use email: many avoided opening email at certain times of the day, waited for people to resend their original requests, or, in some cases, forwarded work emails to mobile devices and email accounts not authorized by IT.

These may be effective ways of avoiding getting caught by a phishing test, but they are hardly good for organizational productivity or cybersecurity.

On the flip side are rewards for reporting phishing emails. Some organizations have used monetary compensation, others have experimented with token rewards, and still others with simple recognition of the employee who reported. Which of these works best? The surprising answer: recognition.

The reasons are as follows. First, monetary compensation puts a dollar amount on cybercrime reporting, a value that is difficult to determine. Do we estimate the value of a report based on how quickly the employee sent it in (immediately after the attack versus much later), the accuracy of the report, or the size of the breach it prevented? Each estimation approach has its own pitfalls, and all of them focus on the report rather than on the employee doing the reporting or what it means for them to perform that function.

Monetary incentives have another problem: they turn reporting into a game. This changes employees' motivation; rather than becoming more careful about scrutinizing incoming emails, which is the indirect purpose of such reporting, they learn that more reporting increases their chances of winning a prize.

Consequently, many employees report anything they find troubling, sometimes even emails they know are simply spam. On the one hand, this significantly increases the load on IT help desks and decreases their chances of catching a real phishing email. On the other hand, too many unnecessary reports lower the odds of winning a reward, which over time erodes employees' motivation to report at all.

Compared to this, social rewards work better than all other approaches: public praise, recognition, and appreciation through announcements acknowledging users who have reported suspicious emails, along with communication that makes clear the value of their reporting.

This is because monetary incentives appeal to employees' base needs, which are already met through their jobs, while social recognition appeals to higher-order needs, what the motivational psychologist Abraham Maslow termed "esteem needs": the human need for achievement, respect, prestige, and a sense of accomplishment.

Being publicly recognized for reporting suspect emails makes employees feel valued for their effort, which on the face of it is an act of altruism with little direct relationship to their workflow or productivity. Effectively communicating the value of their reporting thus focuses attention on the employees doing the reporting.

This has enduring effects: it influences the employee being feted while also motivating others to follow their lead, which together fosters a culture of cyber safety within the organization.

As email-based attacks targeting organizations become more sophisticated, employees are the first, and at times the only, line of defense against them. Effectively harnessing the power of employees through appropriate strategies for incentivizing reporting is the difference between organizations that merely react to cyber-attacks and those that proactively stop them.


* A version of this post appeared in InfoSecurity Magazine