May 2016

Cybersecurity’s weakest link: humans [Published in The Conversation]

There is a common thread that connects the hack into the sluice-gate controllers of the Bowman Avenue dam in Rye, New York; the breach that compromised 20 million federal employee records at the Office of Personnel Management; and the recent spate of “ransomware” attacks that in three months this year have already cost us over US$200 million: they were all due to successful “spearphishing” attacks.

Generic – or what is now considered “old school” – phishing attacks typically took the form of the infamous “Nigerian prince” type emails, trying to trick recipients into responding with some personal financial information. “Spearphishing” attacks are similar but far more vicious. They seek to persuade victims to click on a hyperlink or an attachment that usually deploys software (called “malware”) allowing attackers access to the user’s computer or even to an entire corporate network. Sometimes attacks like this also come through text messages, social media messages or infected thumb drives.

The sobering reality is there isn’t much we can do to stop these types of attacks. This is partly because spearphishing involves a practice called social engineering, in which attacks are highly personalized, making it particularly hard for victims to detect the deception. Existing technical defenses, like antivirus software and network security monitoring, are designed to protect against attacks from outside the computer or network. Once attackers gain entry through spearphishing, they assume the role of trusted insiders, legitimate users against whom protective software is useless.

This makes all of us Internet users the sole guardians of our computers and organizational networks – and the weakest links in cyberspace security.

The real target is humans

Stopping spearphishing requires us to build better defenses around people. This, in turn, requires an understanding of why people fall victim to these sorts of attacks. My team’s recent research into the psychology of people who use computers developed a way to understand exactly how spearphishing attacks take advantage of the weaknesses in people’s online behaviors. It’s called the Suspicion, Cognition, Automaticity Model (SCAM).

We built SCAM using simulated spearphishing attacks – conducted after securing approval from the university review boards that oversee research on human subjects – on people who volunteered to participate in our tests.

We found two primary reasons people are victimized. One factor appears to be that people naturally seek what is called “cognitive efficiency” – maximal information for minimal brain effort. As a result, they take mental shortcuts that are triggered by logos, brand names or even simple phrases such as “Sent from my iPhone” that phishers often include in their messages. People see those triggers – such as their bank’s logo – and assume a message is more likely to be legitimate. Consequently, they don’t properly scrutinize the elements of the phisher’s request that could help reveal the deception, such as typos in the message, its actual intent, or its header information.

Compounding this problem are people’s beliefs that online actions are inherently safe. Sensing (wrongly) that they are at low risk causes them to put relatively little effort into closely reviewing the message in the first place.

Our research shows that because news coverage has mostly focused on malware attacks on computers, many people mistakenly believe that mobile operating systems are somehow more secure. Many others wrongly believe that Adobe’s PDF format is safer than a Microsoft Word document, assuming that because they cannot edit a PDF, it cannot be infected with malware. Still others erroneously think Google’s free Wi-Fi, available in some popular coffee shops, is inherently more secure than other free Wi-Fi services. Those kinds of misunderstandings make users more cavalier about opening certain file formats and more careless while using certain devices or networks – all of which significantly increases their risk of infection.

Habits weaken security

Another often-ignored factor involves the habitual ways people use technology. Many individuals use email, social media and texting so often that they eventually do so largely without thinking. Ask people who drive the same route each day how many stoplights they saw or stopped at along the way, and they often cannot recall. Likewise, when media use becomes routine, people grow less and less conscious of which emails they opened and which links or attachments they clicked on. It can happen to anyone, even the director of the FBI.

When technology use becomes a habit rather than a conscious act, people are more likely to check and even respond to messages while walking, talking or, worse yet, driving. Just as this lack of mindfulness leads to accidents, it also leads to people opening phishing emails and clicking on malicious hyperlinks and attachments without thinking.

Currently, the only real way to prevent spearphishing is to train users, typically by simulating phishing attacks and going over the results afterward, highlighting attack elements a user missed. Some organizations punish employees who repeatedly fail these tests. This method, though, is akin to sending bad drivers out into a hazard-filled roadway, demanding they avoid every obstacle and ticketing them when they don’t. It is much better to actually figure out where their skills are lacking and teach them how to drive properly.

Identifying the problems

That is where our model comes in. It provides a framework for pinpointing why individuals fall victim to different types of cyberattacks. At its most basic level, the model lets companies measure each employee’s susceptibility to spearphishing attacks and identify individuals and workgroups who are most at risk.

When used in conjunction with simulated phishing attack tests, our model lets organizations identify how an employee is likely to fall prey to a cyberattack and determine how to reduce that person’s specific risks. For example, if an individual doesn’t focus on email and checks it while doing other things, he could be taught to change that habit and pay closer attention. If another person wrongly believed she was safe online, she could be taught otherwise. If other people were taking mental shortcuts triggered by logos, the company could help them work to change that behavior.

Finally, our method can help companies pinpoint the “super detectors” – people who consistently detect the deception in simulated attacks. We can identify the specific aspects of their thinking or behaviors that aid them in their detection and urge others to adopt those approaches. For instance, perhaps good detectors examine email messages’ header information, which can reveal the sender’s actual identity. Others earmark certain times of their day to respond to important emails, giving them more time to examine emails in detail. Identifying those and other security-enhancing habits can help develop best-practice guidelines for other employees.
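
To make that header-checking habit concrete, here is a minimal sketch in Python of the kind of comparison a careful reader makes mentally: checking whether the visible “From” address matches the Reply-To and Return-Path headers, which often betray a spoofed sender. The sample message and the heuristics are illustrative assumptions of mine, not the checks used in our study.

```python
# Illustrative sketch only: surface header clues that the visible "From"
# address may not match where a message actually came from. The heuristics
# and the sample message below are hypothetical examples.
from email import message_from_string
from email.utils import parseaddr

def header_warnings(raw_message: str) -> list[str]:
    msg = message_from_string(raw_message)
    warnings = []

    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    _, return_path = parseaddr(msg.get("Return-Path", ""))

    from_domain = from_addr.rsplit("@", 1)[-1].lower()

    # A Reply-To or Return-Path on a different domain is a common red flag.
    for label, addr in (("Reply-To", reply_to), ("Return-Path", return_path)):
        if addr and addr.rsplit("@", 1)[-1].lower() != from_domain:
            warnings.append(f"{label} ({addr}) does not match From domain ({from_domain})")

    return warnings

sample = (
    "From: Your Bank <support@example-bank.com>\n"
    "Reply-To: helpdesk@mail-verify.example\n"
    "Subject: Urgent: verify your account\n"
    "\n"
    "Click here to confirm your details.\n"
)

for warning in header_warnings(sample):
    print("Warning:", warning)
```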

Yes, people are the weakest links in cybersecurity. But they don’t have to be. With smarter, individualized training, we could convert many of these weak links into strong detectors – and in doing so, significantly strengthen cybersecurity.

A version of this post appeared here and on other leading media: https://theconversation.com/cybersecuritys-weakest-link-humans-57455

Why we need a cyber wall [Published in CNN]

Donald Trump had the audience at his rally in California on Thursday chanting “build that wall,” a reference to his pledge to build one along America’s southern border. But while this pledge might not be in the country’s best interests, there is actually somewhere that we really could use a wall: cyberspace.

After all, this is where most of us spend much of our time these days. And it’s also where all manner of criminals — from “hacktivists” to state-sponsored espionage units — lurk. Cyber attacks have already breached many major corporations, infrastructure facilities and military installations. And by now, every one of us has probably been targeted in some way, some of us repeatedly.
All this is costing governments and individuals enormous amounts every year. One study estimated the cost to the global economy from cybercrime at more than $400 billion each year, a figure that is only likely to rise as more and more transactions are conducted online. But there is a way to stop many of these attacks, one that requires shoring up a fundamental weakness of the Internet that hackers exploit: the mechanism used by computer systems for authenticating users.
In the real world, authenticating someone is easily done by checking something the person has — a credit card, a driver’s license, a passport — that can serve as irrefutable proof of their identity.
Online transactions, however, rely on a system of credentialing, usually someone entering a login and password combination that only they are supposed to know. There is nothing the user can show that serves as definitive proof of identity, meaning that if anyone else uses those credentials, there is virtually no way of distinguishing them from the legitimate person.
As a result, the vast majority of cyber attacks are attempts to steal credentials, either directly from people or indirectly from the servers of organizations storing this information. What we need instead is an online mechanism for authenticating users founded on real-world identifiers, one that would essentially create a virtual wall against hackers. This is precisely what Estonia, today one of the most technologically progressive nations in the world, successfully did.
When Estonia gained independence from the Soviet Union, many of its citizens didn’t even have a phone line, let alone a mobile phone. However, the newly formed government leapfrogged the usual development steps through a series of technologically progressive initiatives that brought its entire business, communication and governance systems online.

To prevent stolen credentials from undermining these, the government implemented a “Public Key Infrastructure” (PKI), basically a nationwide electronic ID card with an encrypted key that securely identifies users to servers online. Swiping the card in addition to entering login credentials works like a real-world authentication system, where individuals present their credentials along with something only they can possess.
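
The mechanics behind that card are easier to see in miniature. Here is a minimal sketch in Python, using the third-party cryptography package and an Ed25519 key standing in for the card’s credential (both simplifying assumptions of mine, not Estonia’s actual protocol): the server issues a random challenge, the card signs it with a private key that never leaves the chip, and the server verifies the signature against the registered public key, so a stolen password alone gets an attacker nothing.

```python
# Minimal sketch of the idea behind card-based PKI login (not Estonia's actual
# protocol): the server issues a random challenge, the cardholder's private key
# signs it, and the server checks the signature against the registered public key.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the "card" holds the private key; the server stores only the public key.
card_private_key = Ed25519PrivateKey.generate()
server_registered_public_key = card_private_key.public_key()

# Login attempt: the server sends a fresh random challenge...
challenge = os.urandom(32)

# ...and the card signs it without ever revealing the private key.
signature = card_private_key.sign(challenge)

# The server accepts the login only if the signature checks out.
try:
    server_registered_public_key.verify(signature, challenge)
    print("Authenticated: signature matches the registered ID-card key.")
except InvalidSignature:
    print("Rejected: challenge was not signed by the registered key.")
```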

Thanks to this, Estonia’s 1.3 million citizens can do everything from filing taxes to filling their prescriptions, signing contracts, and even voting online — confident that no one is impersonating them. This has led to significant savings, such as tax returns being processed in less than two days, and has also spurred tremendous innovation, with companies such as Skype and TransferWise among the numerous tech start-ups founded there each year.
While other European nations have followed Estonia’s lead, attempts in the United States, some dating back to the mid-1990s, remain stymied by our nation’s size and a pervasive distrust of government-led centralization. But there might be a solution, one that utilizes a unique identifier but does not involve the government: our cellphones.
Virtually every one of us has a mobile phone, and not only are our phone numbers tied to our credit history — and by extension, our identity — but many mobile services also support SIM cards that can store encrypted data. Furthermore, many of today’s handsets require biometrics like fingerprints for access, making unauthorized use far more difficult.
Thus a PKI could be linked to a specific cellphone number we choose, much as the popular app WhatsApp does. This system could be developed by mobile service providers like Verizon or AT&T, which cover most of the nation’s users, or by handset makers like Apple and Samsung, whose mobile payment solutions could further benefit from such authentication.
Of course, although the development of a PKI would create a significant hurdle for hackers, it still won’t protect users who are careless with their devices. Nor can it protect users who click on malware-laden spearphishing emails that open back doors into computers, circumventing the hacker’s need for user credentials entirely. The reality is that all Internet users — the weakest links in cybersecurity — will need to lay the final brick in the virtual wall.
How?
For a start, by learning how to spot and report suspicious phishing emails. Whenever possible, we should also enable security protections such as two-factor authentication — an analogue to PKI, where users are sent a PIN to a phone or device they choose, to be entered during login. And more generally, we can develop better “cyber hygiene.” This means adopting cyber-safe behaviors such as using online password vaults to create and store complex passwords, using separate email accounts for important logins, and using a secure browser rather than an email client to log into these accounts.
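To illustrate that second factor, here is a minimal, hypothetical sketch in Python of the PIN step: the service generates a short-lived random code, keeps only a hash of it, and compares whatever the user types in at login. Delivery of the code to the phone is omitted, and the function names are my own invention.

```python
# Minimal sketch of the one-time PIN step in two-factor authentication:
# generate a short-lived random code, keep only a hash of it server-side,
# and compare whatever the user submits. Delivery to the phone (SMS, app
# notification, etc.) is out of scope here; the function names are hypothetical.
import hashlib
import hmac
import secrets
import time

PIN_LIFETIME_SECONDS = 300  # codes expire after five minutes

def issue_pin() -> tuple[str, dict]:
    """Create a 6-digit PIN; return it (to be sent to the phone) plus a server-side record."""
    pin = f"{secrets.randbelow(1_000_000):06d}"
    record = {
        "pin_hash": hashlib.sha256(pin.encode()).hexdigest(),
        "expires_at": time.time() + PIN_LIFETIME_SECONDS,
    }
    return pin, record

def verify_pin(submitted: str, record: dict) -> bool:
    """Accept the login only if the PIN matches and has not expired."""
    if time.time() > record["expires_at"]:
        return False
    submitted_hash = hashlib.sha256(submitted.encode()).hexdigest()
    return hmac.compare_digest(submitted_hash, record["pin_hash"])

# Example: the PIN would be texted to the user's phone, then entered at login.
pin, record = issue_pin()
print("PIN sent to phone:", pin)          # in practice, never printed or logged
print("Correct code accepted:", verify_pin(pin, record))
print("Wrong code rejected:  ", verify_pin("000000", record))
```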
Regardless of who ultimately wins the presidency, protecting cyberspace must be a priority. And it will require a wall not of bricks and barbed wire, but a virtual one that we all help build, using our ingenuity, leveraging technology, and developing better habits in cyberspace.

A version of this post appeared on CNN: https://www.cnn.com/2016/05/02/opinions/build-cyber-wall-vishwanath/