- The failures that led to the Colonial Pipeline ransomware attack May 18, 2021
- The Colonial Pipeline Hack Was Avoidable May 12, 2021
- Mobile telephony is dying [Published in Ipswitch] October 1, 2020
- Data Security In The Cloud: Part 2 [Published in Ipswitch] September 15, 2020
- Data Security in the Cloud: Part 1 [Published in Ipswitch] September 15, 2020
- Why do we still teach our children ABC? [Published in Medium] September 15, 2020
- COVID-19’s Lessons About Social Engineering [Published in Dark Reading] June 14, 2020
- Improving Everyone’s Ability to Work from Home After the Pandemic [Published in Ipswitch] May 12, 2020
- Stopping the Dark Triad from impacting our response to COVID-19 [Published in Ipswitch] May 12, 2020
- Stop saying “Cyber Hygiene is like personal hygiene” [Published in Medium] February 15, 2020
- It’s 2020: Do we need more cyber hygiene? [Published in InfoSecurity Magazine] January 23, 2020
- How much cyber hygiene do you need? [Published in Medium] October 1, 2019
- The troubling implications of weaponizing the Internet [Published in Washington Post] July 13, 2019
- Why smartphones are more susceptible to social attacks [Published in 2019 Verizon DBIR] May 10, 2019
- Why do so many people fall for fake profiles online? [Published in The Conversation] September 21, 2018
- Spearphishing has become even more dangerous [Published in CNN] September 14, 2018
- To reward, or not to reward [Published in InfoSecurity Magazine] August 2, 2018
- The impact AI will have on democracy [Published in CNN] March 6, 2018
- AI will replace trucker, retail workers, journalists–and you and I [Published in CNN] February 8, 2018
- It’s not just fake news, Facebook, or Twitter! It’s the Internet’s Dark Triad we should be worried about. [Published in CSO Online] November 30, 2017
By now, we have all heard about last week’s Colonial Pipeline ransomware attack, which shut down the 5,500-mile pipeline responsible for carrying fuel from refineries along the Gulf Coast to New Jersey. The disruption stranded gasoline supplies across half the East Coast, raised gas prices at the pump and prompted some states to preemptively declare a state of emergency.
After six days, the company announced that the pipeline restarted operations Wednesday evening and that it will take several days for service to return to normal. But Colonial’s information technology (IT) department — and the cybersecurity community as a whole — could have ensured this never happened.
The attack was stoppable because ransomware isn’t new. By 2015, ransomware was already leaving a trail of corrupted data from victims everywhere. The infamous Sony Pictures hack in late 2014 involved it, and there had already been attacks on a string of hospitals and law firms. In 2016, I wondered if that would be the year of online extortion.
I was wrong because it wasn’t just 2016 — it’s been every year since.
In 2020, nearly 2,400 local governments, health care facilities and schools were victims of ransomware. The average downtime because of it was 21 days, with an average payment of $312,493 — a 171% increase over 2019, according to an analysis by the Institute for Security and Technology.
We cannot afford this. Neither at the gas pump nor as a nation where most are already economically strained.
I also offered a series of suggestions: fixing the technical problems (by better securing networks and computing systems), improving national and international law enforcement efforts (by centralizing breach reporting, coordinating remediation and strengthening legislation) and fixing the user problem (by applying social science to educate users and improve their cyberhygiene). My hope was to get policymakers and the cybersecurity community to focus on these issues, because doing so would have stopped this attack from ever happening.
Sadly, the cybersecurity community focused on what they like to focus on — technology.
Like the parable of the man searching for his keys under the streetlight rather than near his car where he’d lost them, the security community’s efforts focused on the hacker’s technical sophistication, the complexity of their malware and the byzantine lines of code they had to rewrite. Their solutions were commensurately complex: more complex encryption algorithms, more granular network monitoring and more layers of software.
On the policy front, late last month, a Ransomware Task Force made up of representatives of technology firms submitted an 81-page report to President Joe Biden. Priority recommendations included the need for aggressive enforcement, establishing cyberresponse and recovery funds and regulating cryptocurrency. But other than creating a national awareness campaign and providing more security awareness training in organizations, the report called for little that would proactively protect the primary point of ingress: users.
All of these, be they the technical fixes or the policy recommendations, are pertinent and necessary to adopt. But they merely stop hackers after they are in the network or prosecute them after the fact.
Ransomware attacks occur because of how easy it is for attackers to enter a computing network. They do so using spear phishing emails that deceive users into clicking on a malicious hyperlink or attachment. It’s how almost 50% of all ransomware gets a foothold into networks, according to Verizon’s 2020 Data Breach Investigations Report.
And according to the FBI’s Internet Crime Complaint Center (IC3), the number of phishing attacks doubled in 2020 as more of us worked from home, away from organizational IT protections. Hackers stole people’s identities, corrupted data and extorted money — with estimated losses of $4.2 billion.
All this while we tried to fight technology fires after they had raged, or strike back with even more technology.
The only way to stop spear phishing, and with it ransomware, is to deal with what we have ignored, or merely paid lip service to: the user. We need more than just media awareness campaigns, because by now every user is aware of phishing. Besides, much of our present training teaches users about attacks that have already occurred, not the attacks that are yet to come, because no one, not even people in IT, knows what those will be.
We need to invert the cybersecurity paradigm. Our policies cannot work from the technology organizations downward, with standards created by a software manufacturer, a security company or a federal agency. IT security is not just a technological problem that can be gunned down with bigger technological bullets. It’s a user problem, one that can only be resolved by understanding users: who is at risk, why they are at risk and how to help them reduce it.
This requires us to put users first and work upward toward solutions. We need to apply the social science of users, much of which already exists, to the problem. We already know the triggers in emails and messages that deceive users. We know how users’ thinking, their cyberrisk beliefs and their technology habits influence spear phishing detection. And we also know how to measure and assess their levels of cyberhygiene.
But what we haven’t done is apply this knowledge toward protecting users. We can do so by using it to build a user risk scoring system. This can work like a financial credit score, only for cyberrisk.
Such scores would quantify risk and help users understand their level of vulnerability. They would also help organizations understand what users lack so they can be better protected. For instance, if someone lacks awareness or knowledge in an area, it can be provided. If someone suffers from poor email-use habits, this can be addressed by changing their work patterns and improving their email hygiene.
In this way, policies, protections, even data access can be premised on user risk scores. And because these scores are based on the users’ mental and behavioral patterns, the scores are naturally impervious to changes in technology, making them future-proofed.
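To make the idea concrete, here is a minimal sketch of what such a scoring system could look like. The factor names, weights and score range below are my own illustrative assumptions, not a published model; the 300–850 range simply mirrors the familiar credit-score scale.

```python
# Hypothetical user cyberrisk score: combines survey-derived measures
# of a user's phishing awareness, risk beliefs and email habits into
# a single number on a credit-score-like 300-850 scale.
# All factor names and weights are illustrative assumptions.

WEIGHTS = {
    "phishing_awareness": 0.4,   # can the user spot deception cues?
    "risk_beliefs": 0.35,        # are their cyberrisk beliefs accurate?
    "email_habits": 0.25,        # do they click and reply reflexively?
}

def cyberrisk_score(factors: dict) -> int:
    """Map factor values in [0.0, 1.0] to a 300-850 score (higher = safer)."""
    weighted = sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)
    return round(300 + weighted * 550)

user = {"phishing_awareness": 0.9, "risk_beliefs": 0.7, "email_habits": 0.6}
print(cyberrisk_score(user))  # prints 715
```

An organization could then premise network privileges or data access on score thresholds, exactly as the paragraph above suggests, and retrain or restrict the lowest-scoring users first.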
While the approach for doing this has been documented, it hasn’t been widely implemented. The reason is that the security community, made up mostly of engineers, doesn’t focus on users. To the engineer’s hammer, everything is a technical nail. Spear phishing is considered a user problem, an external factor to the security model. And we have suffered the ramifications of this. It is why the Sony Pictures hack happened in 2014. It is why the Colonial Pipeline hack occurred. And it is why such attacks will continue, until we change the security paradigm.
One of the many lessons of the pandemic is that simple solutions based on sound science work. Even as scientists applied cutting-edge pharmaceutical science to develop vaccines, simple social-behavioral solutions — wearing masks, washing hands, maintaining safe social distances — have been key to stopping the spread of Covid-19.
If we are lucky, we might just pay a small price at the gas pump because of the Colonial Pipeline ransomware attack. But there’s surely more coming. The social science fix for it already exists. The cybersecurity community must implement it.
Of course, while this attack is new, such attacks aren’t. The Sony Pictures hack was also ransomware. And in 2016, many such attacks were occurring. In response, I wrote a piece on CNN asking whether 2016 would be the year of online extortion. This was after ransomware attacks on hospitals in California and Kentucky.
I had provided pointed solutions and called for a focus on users rather than solely on technology. After all, users are the ingress points for ransomware, which almost always comes in via spear phishing.
Unfortunately, every year since 2016 has brought bigger and more successful ransomware heists. The 2020 Verizon DBIR shows exactly how these attacks come in: through spear phishing.
And all along, we have ignored, and continue to ignore, user weaknesses, focusing instead on technical issues, almost always after a crippling breach.
This time, we are all paying a direct price at the gas pumps. Who knows what’s coming next?
The solutions from then are just as pertinent today. Here’s my article in CNN from 2016. [Original can be found on the CNN website]
“This week, a hospital in western Kentucky was the latest organization to fall victim to a “ransomware” attack – a class of malware that encrypts all the files on a computer, only releasing them when a ransom is paid to the hacker holding the encryption key.
In this case, the hospital did not pay up. However, other hospitals, law firms, small businesses and everyday citizens have already paid anywhere from $200 to $10,000 in ransoms. Indeed, based on complaints received between April 2014 and June 2015, the FBI estimated that losses for victims from just one of these malware strains were close to $18 million.
Sadly, this year could well be worse.
Ransomware has existed for some time, the earliest dating back to the late 1980s. Back then, most was developed by enthusiasts – individuals testing out their skills. In contrast, today’s ransomware is often developed by global software teams that constantly update their code to evade anti-virus software and sell it as an off-the-shelf product.
Already, newer strains appear capable of infecting mobile devices, of encrypting files stored on cloud servers through mapped, virtual drives on computers, and of transitioning to the “Internet of Things” – infecting gadgets like watches and smart TVs that are going online. In the near future, an attack that locks us out of our car, or worse yet in it while we drive, demanding an immediate ransom, is increasingly possible.
Thanks to the Internet, this malware-for-hire is available to virtually anyone, anywhere with criminal intent. Making things easier for hackers is the availability of Bitcoin, the online currency that makes monetary transactions difficult to trace. And making things even easier for them is our inability to stop spear phishing – those innocuous-looking emails whose attachments and hyperlinks conceal the malware.
All this makes anyone with minimal programming skills and a free email account capable of inflicting significant damage, and with everyone from presidents to pensioners using emails today, the virtual pool of potential victims is limitless. No surprise then that cybersecurity experts believe that 2016 could well be the “Year of Online Extortion.”
But we can stop these insidious attacks, if everyone – individuals, organizations and policy makers – works towards a solution.
First, everyone must be taught to spot, sequester, and deal with spear phishing emails. This requires cybersecurity education that is free and widely available, which is presently not the case. While different training programs exist, most cater to large organizations, and are outside the reach of households, senior citizens and small businesses, who remain vulnerable.
What we also need is training that helps people develop better “cyber hygiene.” This includes teaching people to frequently update anti-virus software, appropriately program firewalls, and routinely back up their computers on discs that are then disconnected from the network. In addition, people should be taught how to deal with a ransomware attack and stop its spread by quickly removing connected drives and disconnecting from the Internet.
Second, organizations must do more to protect computer networks and employees. Many organizations continue to run legacy software, often on unsupported operating systems that are less secure and far easier for hackers to infiltrate. Nowhere is this problem more pressing than in small businesses, health care facilities, and state and federal government institutions, which is why they are the sought-after targets of ransomware.
Besides updating systems, organizations need to overhaul the system of awarding network privileges to employees. The present system is mostly binary, giving access to employees based on their function or status in the organization. Instead, what we need is a dynamic network-access system that takes into account the employees’ cyberrisk behaviors, meaning only employees who demonstrate good cyber hygiene are rewarded with access to various servers, networks, and programs through their devices.
Finally, policy makers must work to create a cyber crime reporting and remediation system. Most local law enforcement today is ill-equipped to handle ransomware requests, and harried victims usually have limited time to comply with a hacker’s demand. Many, therefore, turn to their family and friends, who themselves have limited expertise. Worse yet, some have no choice but to turn to the hacker, who in many cases provides a chat window to guide the victim through the “remediation” process.
What we urgently need is a reporting portal that is locally available and staffed by cybersecurity professionals, so people can quickly report a breach and get immediate support. A model for such a system already exists: the 311 system for reporting nonemergency municipal service issues. It has been adopted by many cities in the nation and allows reporting via email, telephone, and smartphone apps. Strengthening this system by providing it the resources to hire and train cybersecurity professionals could go a long way toward stopping ransomware attacks that are now making their way past Main Street to everyone’s homes.
Perhaps the best way to look at the problem is this: How safe would we feel in a city where people are routinely being held hostage? Well, cyberspace is our space. And we have to make it safe.”
Verizon, AT&T, T-Mobile: I hope you are reading this. Mobile telephony, your primary business model of enabling phone calls and text messaging, is dying.
Your internal data likely says otherwise. Growth appears to be everywhere: 5G’s enhanced mobile broadband speeds are coming alive, more people are subscribing with more gadgets, and some 60% of Americans are in mobile-only households, phenomena that were inconceivable two decades ago. Not to mention the surge in network use due to the pandemic.
With this kind of growth, why would I say mobile telephony is dying? There are a few good reasons.
Text Messaging and Messaging Apps Reign Supreme
For one, people have stopped calling each other on their phones and are instead messaging. Note that I said messaging, which uses the Internet, not texting, which needs your network.
Messaging is increasingly popular, even preferred. You can be just as professional on it as you can be informal, and express your personality more richly, using emoticons, emojis, memojis, tapbacks, and more. And unlike phone calls, you don’t need to ask about the weather; nor do you need salutations, signatures, or formal valedictions, as we do with email.
Messaging can be short, unintrusive, and direct. So, it works just as well for messaging colleagues down the hallway, family members in the other room, and friends in faraway places. For the security-minded, leading services are end-to-end encrypted, something that neither traditional texting nor its newer RCS incarnation in Google Messages supports.
Because of this, mobile messaging has been growing exponentially and, after email, accounts for roughly half of all mobile Internet usage. More importantly, some 80 percent of millennials – the generation that came of age with social media and iDevices – use messaging apps like Facebook Messenger on mobile devices.
People Prefer Video Calls Instead of Phone Calls
Secondly, when people do call, they increasingly use video rather than voice, especially for group calls. Video calling showed an estimated 175 percent increase in usage over the last three years, with one in four millennials using it daily. And this was before the pandemic made it a necessity, ushered in newer, arguably easier-to-use apps such as Zoom, and made group calls for work, school, even television interviews, mainstream. While group video calls can be made on mobile devices, they are better on larger-screened laptops and tablets, which is bad news if you are a cellular provider, because none of these, again, require your service.
Not so long ago, cellular providers dictated what people could use on their networks. Today, the power has shifted to the gadget makers, who provide the cameras, the noise-canceling earphones and the ability to seamlessly switch between devices when making video calls, and who shape the experience. Because of this, the mobile phone number is becoming less important, while the device, and how well it syncs with the user’s other devices, is central to the quality of experience.
Making things worse is that cellular networks have not been able to stem abuse on their networks. In 2020 alone, 58 billion robocalls were made to American residents’ mobile phones, an average of 80.6 calls per person and a 22 percent increase over the previous year. Many are phishing calls and texts that appear to come from local area codes and attempt to deceive users into paying fraudulent IRS dues, threaten various dire legal actions, or lure users into opening malicious hyperlinks in text messages.
Phishing And Robocalls Erode Trust In Cellular-Based Calling
Phishing is enabled by Internet-based telephony, which lets attacks be launched from anywhere in the world while the attackers avoid prosecution. Also enabling them is our caller-ID system, which was originally developed for the home phone network, when there were few providers and all could be trusted. Caller ID thus assumed all callers were honest and displayed whatever number they programmed in. Today, this makes it possible for anyone using computerized phone-dialers to obfuscate the true source of phone calls and fake the numbers that show up on our caller IDs.
The phone carriers, however, don’t treat these calls as the nuisance they are. Even though they have developed apps to block such calls, they charge an additional fee for them. But consumers, long sold on different cellular networks’ delivery quality with “Can you hear me now?” promises, are unwilling to pay for something they believe the carriers should deal with. So rather than pay for the app, users keep their mobile devices on silent mode, ignoring incoming calls and texts. For many millennials, this likely furthers their shift to messaging and video calling.
The risk of being silenced, especially by this important consumer psychographic, could have a profound impact on the future of the cellular network. In the past, consumers in similar age cohorts have shown themselves to be relatively quick to abandon services that put the organization’s bottom line ahead of their interests.
Much like cellular networks today, the home phone networks reigned supreme back in the 1990s. Their primary business was long distance, for which they charged exorbitantly. In 1997, long-distance rates stood at 12–25 cents per minute, up 25 percent since 1992. The future looked so bright that the former head of AT&T’s long-distance business, Joseph P. Nacchio, remarked: “Long distance is still the most profitable business in America, next to importing illegal cocaine.”
At the time, there were just 50 million mobile subscribers, all of whom also had a home phone. Within a few years, that generation of 22-to-40-year-olds quickly adopted Internet and mobile telephony, which all but killed the traditional phone business.
Today’s millennials are not only in the same age cohort but they are also now the majority of American residents. They have already dropped their home phones for mobile, and their cable subscriptions for streaming video. Their cellular phone plans may well be next.
*A version of this post appeared here: https://blog.ipswitch.com/mobile-telephony-is-dying-heres-why
Vulnerabilities in cloud-sharing services stem from the use of multiple cloud services, which forces users to keep adapting and adjusting their expectations.
In part 1, I discussed some major vulnerabilities caused by using cloud-sharing services. These included routine cloud usage leading users to open emails from unknown addresses; to comply with form emails that have no personalized messages or subject lines; to click on unverifiable hyperlinks in emails; and to violate cyber safety training that cautions against all of the aforementioned actions. The root of these security flaws lies not in the user but in the developmental focus of the various cloud services.
Different Authentication Considerations for Cloud Services
Some cloud services like Google Drive are focused on integrating their cloud offerings with their alphabet-soup of services. Others, such as Dropbox, are focused on creating a stand-alone cross-platform portal. Still others, such as Apple, are focused on increasing subscription revenues from their device user population.
In consequence, not only does each provider prioritize a different aspect of the sharing process, but this larger goal also comes before the user, who becomes little more than a potential revenue target.
Changing this means more than implementing more robust authentication protocols and encryption standards. These are necessary, but they do little to reduce vulnerabilities that are rooted in the user adaptation process. If anything, they force users to adapt to even more varying implementations. Improving resilience in cloud platforms cannot be done piecemeal; it requires a unified effort by cloud service providers.
Integrating Different Cloud Services
Here, industry groups such as the Cloud Security Alliance can help by bringing together the various cloud providers, taking a holistic look at how users adapt to different cloud-use environments and estimating the risks this creates.
A big part of this endeavor will involve getting to the root of users’ Cyber Risk Beliefs (CRB): their mental assumptions about the risks of their online actions. We know from my research that many of these risk beliefs are inaccurate. For instance, many users mistakenly construe the HTTPS symbol to mean a website is authentic, or believe a PDF document is more secure than a Word document because they cannot edit it.
We need to understand how CRB manifest themselves in cloud environments. This involves answering questions such as whether users believe certain cloud services are more secure than others, and whether they think such services make the sharing of documents, or of certain types of documents, safer.
Answers to such questions will reveal how users mentally orient to different cloud services, what they store on them, and how they react to files shared through them. For instance, if users believe a specific portal makes documents safe, they might be more willing to open files that purportedly come from such portals in a spear-phishing email. Beliefs such as these might also influence how users enable various apps on cloud portals, what they store online, and how careful they are about their stored data. Because CRB influence the adaptive use of different cloud services, understanding them can help us design a safer cloud user experience.
Improving user security on cloud platforms also requires the development of novel technical constraints. Since many social engineering attacks conceal malware in hyperlinks, cloud portals need to collaborate and develop a virtualized space in which all shared links are generated and deployed. This way, spoofing of hyperlinks or leading users to watering hole sites is far more difficult because the domains from which the links are generated would be more uniform and recognizable to users.
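One simple building block of such a scheme would be checking that a received link actually resolves to one of the shared, well-known link-generation domains before a user ever deploys it. The sketch below illustrates the idea; the domain list is a hypothetical allowlist for illustration, not the definitive set any provider uses.

```python
# Sketch: verify that a cloud-share hyperlink's host belongs to a
# known set of link-generation domains before presenting it as safe.
# The allowlist below is a hypothetical example, not a real registry.
from urllib.parse import urlparse

KNOWN_SHARE_DOMAINS = {"drive.google.com", "www.dropbox.com", "1drv.ms"}

def is_recognized_share_link(url: str) -> bool:
    """True only for HTTPS links whose host is a known share domain."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in KNOWN_SHARE_DOMAINS

print(is_recognized_share_link("https://www.dropbox.com/s/abc123/report.pdf"))  # True
print(is_recognized_share_link("http://dropbox.example.net/s/abc123"))          # False
```

If all providers generated links from a uniform, recognizable set of domains, a check like this could run in the mail client itself and flag look-alike links before the user clicks.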
User Interfaces of Cloud Services
Yet another focus needs to be on improving user interface (UI) design. For now, the UI of file-sharing programs prioritizes convenience over safety. This is a bias that permeates the technology design community, and its most marked manifestation is in mobile apps, where it is hard for users to assess the veracity of cloud-sharing emails and embedded hyperlinks.
To change this, UI should foster more personalization of the shared files. Users shouldn’t be permitted to share links without messages or subject lines, and they should be prompted to include a signature in the message. The design must also deemphasize the actions users have to take and emphasize review, especially on mobile devices. This can be achieved by highlighting the user’s personalized message, by displaying the complete URL rather than shortening it, and by requiring passwords to open shared documents.
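Enforcing those requirements could be as simple as validating a share request before any link is generated. The sketch below shows one way a portal might do that; the field names and the 20-character threshold are my own illustrative assumptions, not any provider’s actual API.

```python
# Sketch: refuse to generate a sharing link until the request carries
# a subject line, a personalized message and a document password.
# Field names and thresholds are illustrative assumptions only.

REQUIRED_FIELDS = ("subject", "message", "password")

def validate_share_request(request: dict) -> list[str]:
    """Return a list of problems; an empty list means OK to share."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not request.get(field, "").strip():
            problems.append(f"missing {field}")
    if len(request.get("message", "")) < 20:
        problems.append("message too short to be personal")
    return problems

req = {
    "subject": "Q3 budget",
    "message": "Hi Pat, here is the Q3 budget we discussed.",
    "password": "s3cret",
}
print(validate_share_request(req))  # prints [] (OK to share)
```

A gate like this nudges senders toward the personalized, reviewable messages that make spoofed share notices easier for recipients to spot.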
UI design could also focus on integrating file-sharing portals with email services, so that links aren’t being generated from the portal directly, but are created from within email accounts that people are familiar with. This way, emails aren’t being sent from unknown virtual in-boxes, and personalization becomes easier.
The Cloud Is A Victim Of Its Own Success
Finally, our extant user training on email security is at odds with end-user cloud sharing behavior. Using the cloud today entails violating training-based knowledge, which, over time, changes user perceptions of the validity of that training. We must update training to emphasize safety in the sharing and receiving of cloud files. This means fostering newer norms and best practices, such as using passwords and personalized messages while sharing. It also means teaching users how to gauge whether a shared hyperlink is a spoof, and how to deploy such links in virtualized environments to contain any potential damage.
The cloud is becoming a victim of its own success. With many more players entering the market, the user experience is getting more fragmented, and vulnerabilities are growing because each provider implements its platform differently. Today, there are hundreds of providers offering different cloud services, with many more coming online.
The industry is slated to grow even more, because we have barely tapped the overall potential market, with anywhere from 30 to 90 percent of all organizations in the US, Europe, and Asia yet to adopt it. Thus, user issues are only likely to increase as more providers and users enter the space.
Correcting this is now more important than ever, because a single major breach can erode user trust in the entire cloud experience, forever changing the cloud usage landscape.
*A version of this post appeared here: https://blog.ipswitch.com/data-security-in-the-cloud-part-2
The adoption of public cloud computing makes user data less secure. And it’s not for the reasons most in IT realize.
In the first part of this series, I explain why; solutions follow in part 2.
Most users experience the cloud as online software and operating environments (e.g., Google’s App Engine, Chrome OS, Documents); and as online backup, storage, and file sharing systems (e.g., Dropbox, iCloud).
Adopting such services makes sense. Cloud providers have deeper resources, better technical talent, and more capability to predict and react to adverse events. This lowers the probability of data loss and outages, whether from accidental or malicious causes. Using cloud-based services also reduces in-house processing power requirements and meets the varying data-access needs users have today, reducing the costs of maintaining hardware, software, and support staff.
Most Companies Have Adopted the Public Cloud
Recognizing these advantages, some 91 percent of organizations worldwide have already adopted public cloud computing solutions and around 80 percent of enterprise workloads are expected to shift to cloud platforms by year’s end.
But cloud computing solutions also bring new technical challenges that can expose the enterprise to cyber-attacks. Many of these are well known in cyber security circles and have proven fixes. This includes mechanisms for auditing security vulnerabilities both at the provider end and on client machines, for assuring the availability and integrity of hosted services through encryption, and for granting and revoking access.
Outside of these, however, several vulnerabilities arise from using cloud services. These are user- and usage-driven issues that most in IT ignore, preferring to write them off with the “people will always be a problem” adage rather than tackle them. In consequence, these threats are seldom researched, yet they make data hosted on the cloud even more susceptible to being hacked.
For one, using cloud-based file sharing routinizes the receipt of hyperlinks in emails. Keep in mind, hundreds of providers make up this market space. Most organizations use at least five different cloud services, and most users subscribe to an ecosystem of their own liking. This translates to numerous cloud-service-generated hyperlinks that users frequently send and receive via emails and apps on different devices.
But once users get accustomed to complying with such emails, it routinizes opening hyperlinks, making them much more likely to click on malicious hyperlinks in spear phishing that mimic them.
Convenience Is Not Always Secure
Making things worse is the design of cloud-sharing services. In their bid to make sharing convenient, services such as Google Drive, Google Photos, and Dropbox send out pre-crafted email notices of shared files.
The email notice usually contains only a few pieces of variable information: the name of the sender, the hyperlink, and some information about the file being shared through the link. The rest of the space is occupied by branding information (such as the name of the cloud provider and their logo). Thus, users have just a few pieces of information for judging the authenticity of what’s being shared.
But in many cloud services, while the email appears to come from the sender and carries their name, it doesn’t come from their inbox. Instead, it comes from a different inbox, one that changes with the provider. For instance, Google Drive notifications come from an “email@example.com” inbox, Dropbox notifications from a “firstname.lastname@example.org” inbox, and Google Photos notifications from an “email@example.com” inbox, where the alpha-numeric characters (randomly chosen for this example) change each time. No user can remember these inboxes, so there is no way for users to know whether these emails are indeed authentic. Furthermore, cyber security awareness training cautions users about opening emails from strange and unknown inboxes. Thus, every time users open a cloud-shared hyperlink, they have violated safety principles they were taught, which erodes their belief in the validity of the other aspects of their security training, opening them up to even more online attacks.
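One partial mitigation, at the mail-client or gateway level, is to check whether a sharing notification's sending domain belongs to a known provider. A minimal sketch in Python (the allowlisted domains and helper names are illustrative assumptions, not any provider's documented notification addresses):

```python
from email.utils import parseaddr

# Hypothetical allowlist for illustration only: real notification domains
# vary by provider and change over time, so any such list needs maintenance.
KNOWN_NOTIFICATION_DOMAINS = {"google.com", "dropbox.com", "dropboxmail.com"}

def sender_domain(from_header: str) -> str:
    """Extract the domain portion of an email's From header."""
    _, address = parseaddr(from_header)
    return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

def looks_like_known_provider(from_header: str) -> bool:
    """True if the sending domain is (a subdomain of) an allowlisted provider."""
    domain = sender_domain(from_header)
    return any(domain == d or domain.endswith("." + d)
               for d in KNOWN_NOTIFICATION_DOMAINS)
```

A check like this catches look-alike domains (say, a notification from “g00gle-share.example”), though it cannot catch a spoofed display name on an otherwise legitimate domain, which is why sender verification alone is not sufficient.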
Hyperlinks Shared Through Cloud Services
A similar issue plagues the hyperlinks shared through cloud services. Most contain special symbols and characters, and there is no simple way for users to assess their veracity. Given how these links are generated and shared, users cannot plug them into a search engine or a browser without activating them. Nor can users forward privately shared links to a sandboxed device or to another person with expertise. All users can do is rely on the information in the email; learning anything more requires opening the hyperlink itself.
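Short of opening the link, about the only safe signal available is the link's hostname, which mail apps could surface before the tap. A small sketch of such a preview, assuming hypothetical helper names and an illustrative sharing URL:

```python
from urllib.parse import urlparse

def link_hostname(url: str) -> str:
    """Return just the hostname of a sharing link, without visiting it."""
    return (urlparse(url).hostname or "").lower()

def describe_link(url: str) -> str:
    """A human-readable preview a mail client could show on hover or long-press."""
    host = link_hostname(url)
    return f"This link points to: {host}" if host else "This link has no hostname."
```

Showing the hostname does not prove a link is safe, but it at least gives users one inspectable fact before they commit to opening it.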
Outside of the sending inbox and the hyperlink, the only other varying indicator in a cloud-sharing email is the extension of the shared document (such as whether it is a .DOCX or a .MOV file), which is usually accompanied by an icon showing the type of file attached (e.g., a PDF icon). Neither was ever designed to serve as a yardstick for gauging the veracity of shared files.
As my research on user cognition shows, people form several false assumptions about online risk. For instance, many people believe that PDF documents are secure because they cannot edit them, which, of course, has nothing to do with the security of the file type. These mistaken assumptions, what I call Cyber Risk Beliefs, are not only triggered by icons and file extensions, but they also dictate how users react to them. So, seeing a PDF extension or icon, which can easily be spoofed, and believing it is secure further increases the likelihood that users will open cloud-sharing hyperlinks that may actually be spear phishing.
Finally, the display of all these pieces of information is further circumscribed on smartphones and tablets. Depending on the app and device, brand logos and other graphical information are sometimes not displayed, sender information is auto-populated from the device’s contact book, and the UI action buttons, such as “Open,” “Download,” and “View,” are made prominent. These are deliberately designed to move the user along to a decision, which almost always is to comply with the request rather than to pause or exercise conscious care.
Such design issues plague many communication apps accessed on mobile devices, something I highlighted in my 2019 Verizon DBIR write-up. But they are even more problematic in cloud-based file sharing because, unlike email, which receivers by default expect to be personalized (a subject line, some salutation, and always a message), the established norms for cloud sharing of files are exactly the opposite: users seldom expect personalization, almost never include a message, and don’t even know how to inject a subject line. This not only makes it easier to create spoofed cloud-sharing emails but also makes them particularly hard for users to discern on mobile devices.
Wrapping Up Cloud Security
All these issues are usage driven and stem from the success of the cloud. This means they are unlikely to go away, and the widespread adoption of the cloud, a market Gartner expects will exceed $220 billion by 2020, will only increase their scale. Given the volume of data increasingly stored on the cloud, the availability of so many user-level vulnerabilities is fodder for social engineers looking for easy ways to hack that data.
And this is already afoot: Dropbox, Google Drive, and Adobe accounts are now among the most common lures used in spear-phishing emails. In 2019, one in four breaches in organizations involved cloud-based assets, and a whopping 77% of these breaches happened because of a phishing email or a web application service; that is, the attacks spoofed cloud-service emails and contained hyperlinks that led users to watering holes.
Keep in mind that these vulnerabilities exist in almost all cloud services, which means breaches because of them can occur in any of them. But, because of how users form beliefs about online risk, a breach in one would likely undermine their trust in all cloud platforms. So, resolving these issues is necessary not just for better protecting data but also for ensuring the continued adoption of the cloud.
How we do this, I discuss in Part 2.
*A version of this post appeared here: https://blog.ipswitch.com/data-security-in-the-cloud-part-1
“Why do you teach me ABC?” My precocious preschooler pointed to the virtual QWERTY keyboard on the tablet: “Why not ASD?”
As someone who studies the diffusion of innovations — how people learn and adopt new ideas and techniques — I wondered why indeed?
And not just the ABC sequence. Many preschoolers already know words like Xbox, Yahoo, and Zoom better than the xylophone, yacht, and zebra we have them learn by rote. Wouldn’t teaching children the words that hold more meaning to them help keep pace with their experiences?
Of course, the QWERTY sequence is itself a product of modern technology. The layout was engineered by placing commonly paired letters farther apart to reduce the chance of the type bars in early manual typewriters jamming when struck in quick succession. Although completely unnecessary on today’s electronic keyboards, it has resisted all attempts over the past 50 years at improving its design. Teaching the sequence would, therefore, also be practical because it is the accepted norm, appearing in every input device from ATMs to airplane flight controllers.
Many people, however, believe that the ABC sequence has remained fixed, while in actuality it has changed over time. Our 26 letters began sometime around the 15th century BCE in the Sinai as a 22-character set, evolved with the Greeks into 25, and on through the Romans into Latin and the present set of 26. Z, which used to appear after F in Old Latin, was replaced with G and later transposed to its present placement at the end. Here, too, technology and human development played a role. With migration and the expansion of people’s vocabulary, new inflections in speech arose, necessitating newer letters such as W. With the invention of writing tools and printing technologies came cursive scripts, lowercase letters, and the development of standardized font families. Thus, the ABC sequence is nothing more than a norm that people have over time agreed upon — no different from QWERTY.
But there is an even stronger argument for teaching the newer sequence. Keyboards are tools for expression, no different from what pens are to writing or language is to literacy. And the sooner you are proficient with such tools, the better you get at using them. Just as cultures with written languages, because of their ability to transmit knowledge with far greater accuracy, evolved to overtake cultures with only spoken language, becoming adept at using the tools of expression sooner could lead to a higher quality of knowledge transmission. Thus, adopting the QWERTY sequence sooner would confer an evolutionary advantage on our children and likely even on all of us.
But that’s not all. Today, computing technology has also altered the way we write. Not only do we not use quills and fountain-pens, we rarely write by hand. And this has happened rather fast, even faster than the centuries it took for the evolution of alphabets and font families. Raised in the 1970s, I was taught to write in cursive, a skill which is seldom taught in US schools anymore. Instead, children in 3rd and 4th grade today “write” on computers where not just the writing style but also the process of writing is different.
Because you can only rewrite a document so many times, writing by hand, even on manual typewriters, required thinking before committing words to paper. Modern computers make innumerable drafts possible, which makes it possible to think as we write, without paying attention to style, spelling, or grammar in the initial drafts. This has led to a change in how we write. As the renowned social psychologist Daryl Bem advocates in his oft-cited guide, “…write the first draft as quickly as possible without agonizing over stylistic niceties.”
Newer word-processing apps have altered this process even further. While the ever-popular Microsoft Word allows for a sequential documentation of thoughts, newer apps like Textilus and Scrivener encourage non-sequential writing, allowing authors to tackle different sections, simultaneously, in draft form. Adding to this are advances in voice-to-text programs and machine-learning tools that can capture spoken words and suggest intelligent responses. Many of these, accessible at literally the flick of a wrist on many smartwatches and phones, have changed not just how we write but also our role as writers.
Finally, our idea of literacy itself is expanding. It’s about more than just knowing how to write; it’s about being able to express information creatively. Children need to be adept not only at computing but also at finding information online, crafting persuasive content, and, while doing all of this, protecting their information trails. This requires two additional skills: digital literacy and cyber hygiene. The former equips them with information assessment skills, so they can find the right information and protect against disinformation. The latter instills digital safety skills, so they can’t be manipulated online and their information isn’t compromised. Both are essential for thriving in the virtual world where most of them spend their waking hours, even more so now since the pandemic.
Children are already familiar with an alphabet soup of online services before they step into a classroom. These skills are thus best introduced in their formative learning, not in middle school and college where they are presently taught. This will ensure that the next generation is equipped to transmit information with even greater accuracy and creativity all the sooner — an advantage that will accrue to them and to our society as a whole. The first step toward this involves mastering the QWERTY keyboard.
- A version of this post appeared here: https://medium.com/@avishy001/why-do-we-still-teach-our-children-abc-7f8cde35ec39
Unless we do something proactively, social engineering’s impact is expected to keep getting worse as people’s reliance on technology increases and as more of us are forced to work from home.
Contact tracing, superspreaders, flattening the curve — concepts that in the past were the domain of public health experts are now familiar to people the world over. These terms also help us understand another virus, one endemic to the virtual world: social engineering, which comes in the form of spear phishing, pretexting, and fake-news campaigns.
As quickly as the coronavirus began its spread, news reports cautioned users about social engineering attacks touting fake cures and contact-tracing apps. This was no accident. In fact, there are a number of parallels between the human transmission of COVID-19 and social engineering outbreaks:
- Just like coronavirus transmits from person to person through respiratory droplets, social engineering also passes from users through infected computing devices to other users. Because of this transmission similarity, just as infected people, by virtue of their physical proximity to many others, act as superspreaders for COVID-19, some technology users act likewise. These tend to be people with many virtual friends or those subscribing to many online services who consequently have a hard time discerning a real notification or communication from one of these personas or services from a fake one. Such users are prime targets for social engineers looking for a victim who can provide a foothold into an organization’s computing networks.
- The vast majority of people infected with this coronavirus have mild to moderate symptoms. The same is the case with most victims of social engineering because hackers usually lurk imperceptibly as they make their way through corporate networks. They often go undetected for months — on average, at least 101 days — showing no signs or symptoms.
- Just as no one has immunity from COVID-19, no one is immune to social engineering. By now everyone, all over the world, has been targeted by social engineers, and many — trained users, IT professionals, cybersecurity experts, and CEOs — have fallen victim to a spear-phishing attack.
- COVID-19’s outcomes are worse for people who have prior health conditions and for people who are older. Similarly, the outcomes of social engineering are worse for users with poor computing habits and poor technical capabilities. Many of these tend to be senior citizens and retired individuals who lack updated operating systems, patches that protect them from infiltration, and access to managed security services.
- Finally, personal hygiene — hand washing, use of masks, social isolation — is the primary protection against coronavirus infection. Likewise, for protecting against social engineering, digital hygiene — protecting devices, keeping updated virus protections and patches, and being careful when online — is the only protection that everyone from the FBI to INTERPOL has in their arsenal.
But beyond these similarities, social engineering outbreaks are actually harder to control than coronavirus infections:
1. Social engineering infections pass through devices wirelessly, making it hard to contact-trace infection sources, isolate machines, and contain them.
2. There are well-established scientific processes that the medical community has developed to identify knowledge gaps about coronavirus. This helps researchers focus. In contrast, even the fundamentals of social engineering — such as when it’s correct to call an attack a breach or a hack — lack clarity. It’s hard to do research in an area when there is no consensus on what the problem should be called or where it begins and ends.
3. While human hygiene is well researched, digital hygiene practices aren’t. For instance, in 2003, NIST developed password hygiene guidelines asking that all passwords contain letters and special characters and be changed every 90 days. The guideline was developed by studying how computers guessed passwords, not how humans remembered them. Consequently, users the world over reused passwords, wrote them down on paper to aid their memory, or blindly entered them into phishing emails that mimicked various password-reset emails — until 2017, when these problems were recognized and the policy was reversed.
4. Evidence points to those who have recovered from coronavirus having at least short-term immunity to it. In contrast, organizations that have had at least one significant social engineering attack tend to be attacked again within the year. Because hackers learn from every attack, this suggests that the odds of being breached by social engineering actually increase with each subsequent attack.
5. Our response to COVID-19 is informed by reporting throughout the healthcare system. Unfortunately, there is no similar reporting mechanism for social engineering. For this reason, a hacker can conduct an attack in one city and replicate it in an adjoining city, all using the same malware that could have easily been defended against had someone notified others. We saw this trend play out in ransomware attacks that crippled computing systems in Louisiana’s Vernon Parish in November 2019, quickly followed by six other parishes, and continuing through the rest of the state in February 2020.
Because of these factors, the economic impact of social engineering continues to grow. There has been a 67% increase in security breaches in the past five years, and last year companies were expected to spend $110 billion globally to protect against it. This makes social engineering one of the biggest threats to the worldwide economy outside of natural disasters and pandemics.
Just as we are fighting the pandemic, we must coordinate our efforts to combat social engineering. Without that, there will be no vaccine or cure. To this end, we must develop interorganizational reporting portals and early-warning systems to warn other organizations of breaches. We also need federal funding for basic research on the science of cybersecurity, along with the development of evidence-based digital hygiene initiatives that provide best practices that take into account the user and their use cases. Finally, we must enlist social media platforms to trace the superspreaders among their users, and develop open-source awareness and training initiatives to protect them and the cyber-vulnerable from future attacks.
Unless we do something proactively, social engineering’s impact is expected to keep getting worse as people’s reliance on technology increases and as more of us are forced to work from home, away from the protected IT enclaves of organizations. We may in the end win the fight against the coronavirus, but the war against social engineering has yet to begin.
*A version of this piece appeared in Dark Reading: https://www.darkreading.com/endpoint/what-covid-19-teaches-us-about-social-engineering/a/d-id/1337979
Two out of three Americans with jobs are already working from home because of the pandemic. Many will have to continue if the pandemic recurs. But millions are unable to work from home and are without jobs because of significant barriers imposed by technology, regulation, and organizational preparedness.
One technological barrier is the lack of universal high-speed Internet connectivity. People at home today run multiple devices for everything from making video calls to streaming entertainment, participating in meetings, and doing classwork. This requires fiber-based Internet access that allows gigabit speeds rather than older cable- and telephone-based connectivity.
But outside of major American cities, most of us are served by poor quality and lower speed Internet services built on outdated infrastructure. The reason is that in most market areas, legislative barriers have limited competition, keeping the cable and telephone companies as virtual monopolies that can charge higher prices whilst continuing to invest little in improving product quality. Because of this, many in rural areas, the urban poor, and consumers in many smaller urban areas either don’t have good access, cannot afford it, or have limited choice.
Another technological barrier to remote work is the outdated software and operating systems that many companies use, which are incompatible with what people use at home. For instance, close to 82% of medical imaging devices in US hospitals still run Windows 7 and XP-based systems. There are about 200 million computers worldwide still running such outdated systems, including 30,000 machines in Germany’s local government offices and 50,000 in Ireland’s healthcare system. The reason for such practices is legacy programs, those that can only run on older operating systems, which many organizations continue to support. Because of such systems, people whose work relies on these older programs cannot use them remotely from their updated computing devices at home.
Yet another barrier comes from data protection laws. From HIPAA, which governs access to electronic patient health information (ePHI), to the European Union’s data portability laws, various regulations protect user data from cyber criminals by restricting access to it outside of secure work computers and servers. But these laws were formulated in the pre-pandemic era, when employees had the luxury of working from offices. Layered on such laws are organizational IT policies, which often impose their own restrictions on how employees can access data.
But it is because of such restrictions that Facebook’s content moderators all over the globe cannot presently work from home—which has also reduced their ability to keep misinformation and online scams from going viral. Similarly, concerns about cyber breaches have led organizations to require that their employees use virtual private network (VPN) services when connecting from home. Using a VPN is hard enough for users with poor technology skills, but even for the technologically adept, it lowers Internet speeds, especially when there is a significant increase in the load on VPN servers, as is now the case. Thus, regulatory concerns cause restrictions and delays that make for a frustrating remote work experience.
The final factor limiting remote work is cyber risk from the user. While many users can be trusted with remote data access, many others cannot. This is not just because some people have lower technical skills but also because many users’ digital hygiene levels are unknown. This is a pivotal issue because regulations such as HIPAA require organizations to conduct risk assessments to address vulnerabilities from remote data access. But this is easier said than done. In an era when opening a single phishing email could launch ransomware that jumps from home to work networks and cripples the entire organization’s systems, the risk to the enterprise is not just from the employee working at home, but from their entire family. Hence, organizations would rather limit who can work remotely than risk a devastating enterprise-wide lockout.
Making it possible for more of us to work remotely from home will require a concerted effort from the government, educational institutions, and organizations.
The starting point is improving residential Internet access. The digital divide is no longer about just having Internet access, but about having universal access to fiber at an affordable price. With 5G years away from being universal, we have to reimagine competition among Internet providers. This involves removing the legislative restrictions that prohibit competition among providers and, in some cases, prevent fiber networks from being developed by municipalities. A good example is Chattanooga, Tennessee, where the local government developed its own fiber network, which not only made gigabit-speed service locally available for a competitive price but also recovered its setup costs and led to a technology start-up boom in short order.
Next, organizations must plan on developing an agile workforce. Most organizations currently support only a fraction of their workforce’s remote work needs. For instance, the US Air Force VPN system is built to support only a quarter of its 275,000 civilian workers and contractors. Organizations can invert this by investing in virtualization to run legacy software, allowing more employees to bring their own devices (BYOD), and moving toward a cloud-based infrastructure. This will create the ability to run legacy software on remote machines while also quickly upgrading the technology used within organizations.
The final issue is reducing cyber risk from users. Models for this already exist in the systems used for evaluating financial credit scores and issuing automobile driver’s licenses, which were developed for similar reasons—to estimate risk and ensure that people meet minimal standards of performance and safety. Just as we do with driver’s licenses, we need to establish federal standards for user risk assessment that mandate cyber safety training and award users a personal cyber risk score. Cyber safety training must begin in K-12, when most children already use computing systems, and become part of standard university curricula. Also, the risk scores should be portable between jobs and accessible to employers, and users should be able to improve them through additional certifications provided by for-profit training companies. With everyone trained, the overall cyber risk to organizations from users will fall, as will concerns about remote access.
Providing better Internet access, creating an agile workforce, and mandating cyber security training will help us combat not just this pandemic’s reoccurrence but also any future natural or manmade catastrophe. We have been saved from a complete economic meltdown by a technology—the Internet—that was built in anticipation of a nuclear fallout that thankfully never happened. Thanks to such forward thinking, we today have the capacity to continue working, teaching, even performing medical diagnosis online. Building capacity must likewise be done years if not decades in advance and we must prepare for a future where more people can continue working from home.
* A version of this post appeared here: https://blog.ipswitch.com/4-barriers-impeding-on-everyones-ability-to-work-from-home
Last week, New York City Mayor Bill de Blasio warned residents of a widespread misinformation campaign, circulated over Twitter and text message, falsely claiming that Manhattan was under quarantine.
Around this time, Attorney General Barr and U.S. Attorneys from various states were also warning residents of spear-phishing emails, fake websites, local-area-code or neighbor-spoofing calls, and text messages making all manner of fake claims. Some offered free COVID-19 tracking apps only to inject malware; others spoofed websites such as Johns Hopkins University’s and provided false information; still others emailed, called, and texted residents offering free iPhones, groceries, treatments, cures—and whatever else—preying on our collective anxieties during this pandemic.
There is actually a common thread that connects all these attacks. They are all part of what I call the Internet’s Dark Triad: hacking, misinformation, and trolling — three types of attacks that usually feed off each other and, when working in concert, are especially potent.
We saw the triad at work together during our last presidential election when the Russians hacked into the DNC, used the stolen data to seed misinformation websites, and organized a trolling campaign to reframe, retweet, and relentlessly disseminate the information throughout the US.
While it is easy to think of each of these types in isolation, thinking of them as parts of a whole makes it easier to appreciate their impact–and more importantly, deal with them.
For one, the triad feeds on fake profiles on social media, neighbor phone numbers, and email addresses. For instance, in Erie County, NY, someone impersonated a local TV station and tweeted fake news about the virus. Such attacks are very easy to foment, given the easy access everyone has to social media and VoIP services.
We have left the responsibility of curating content and profiles to individual media organizations, almost all of whom have resorted to internal processes. These processes involve some automation but, given the nuanced and equivocal nature of content, largely rely on human curators, whom these organizations employ by the thousands.
But even during normal circumstances, as in the days before the pandemic reached our shores, the process was found lacking. Now, many content curators are at home and most aren’t even allowed to do their work because of the offensive, sensitive, and graphic nature of content they deal with. This means at the time when social media matters the most for users, its content is most vulnerable to misuse.
Instead of leaving this problem in the hands of individual social media organizations, who are all creating organizational silos of vital information, we need these organizations to come together and coordinate their efforts. Media organizations should create a centralized data repository in which they pool their profile and content data. This database should be accessible to researchers and other media organizations, especially the regional and local media houses that don’t have the depth in technical skills or manpower to keep track of ongoing attacks. Having a centralized repository of the profiles and phone numbers being spoofed would allow us to identify attacks before they become widespread and to inform local agencies and residents.
Two, in his press release, Attorney General Barr asked Americans to report COVID-19 related cyber-attacks to the National Center for Disaster Fraud (by calling 1-866-720-5721 or by e-mailing firstname.lastname@example.org). But there are already several other federal and local agencies collecting similar reports. This includes the FTC and the purpose-built reporting portal of the FBI’s IC3, among many others.
Having users report on various portals needlessly duplicates efforts, not to mention wastes resources and confuses users. These efforts also need to be unified. Just as social media profiles and phone numbers are reused, so are spear-phishing email accounts, their persuasive ploys, and the malware they carry. Centrally collecting reports and developing a consumer-focused information portal would allow us to track attacks, identify the ones that are most virulent, and provide support to users—all of whom are working from home networks, without the benefit of professional IT support.
Finally, at a time of anxiety, people turn to others for information and social support. It is, therefore, our responsibility to ensure that we don’t forward along false information—and give the Dark Triad the oxygen it needs to function. It is important that we become vigilant about the information we encounter on our media feeds. We need to check the sources of information we receive, search online for other corroborating information, report malicious activity we encounter, and become responsible content curators for others in our sphere of influence.
We will, with our collective efforts, overcome this virus. For now, it’s our individual responsibility to ensure that neither the virus nor the Dark Triad succeeds.
*A version of this post appeared here: https://blog.ipswitch.com/how-hacking-trolling-and-misinformation-impacts-cybersecurity
“Users should use a range of letters, numbers, and special characters on their passwords and change it every 90 days.” If you are in IT, you have likely implemented this security policy. And if you are a user, you have likely endured it.
The source of this best-practice suggestion is a Burr, Dodson, and Polk (2004)[i] NIST publication, which Microsoft and others widely publicized and implemented[ii]. Only, this practice has many critical flaws: it forces users to come up with difficult new passwords so often that they end up reusing passwords across services; and it makes password-reset emails common—so when a phishing email comes in asking to reset a password, users are far more likely to comply. Recognizing this, NIST reversed the policy in 2017, but by then, IT managers all over the world had blindly followed the best practice for more than a decade.
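To see why the policy backfired, consider the kind of composition check it inspired. The sketch below is an illustrative reconstruction, not NIST's actual rule text: it accepts a short, guessable password while rejecting a long, memorable passphrase.

```python
import re

def meets_2004_style_policy(password: str) -> bool:
    """A sketch of the composition rules the 2004-era guidance inspired:
    at least 8 characters, with letters, a digit, and a special character.
    (Exact rules varied by organization; this is an illustrative assumption.)"""
    return (len(password) >= 8
            and re.search(r"[A-Za-z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)
```

Under this rule, “P@ssw0rd” passes, while a far longer and more memorable passphrase like “correct horse battery staple” fails for lacking a digit, which is exactly the mismatch between machine guessing and human memory described above.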
Cyber hygiene practice suggestions such as this, however, do not end here. There are many more. At the broad end are suggestions such as “develop a process for software installation for end users” or the ever-relevant “educate your users on good cyber behavior.” At the specific end are ideas such as “always use a VPN when connecting to networks,” “always look for SSL (lock) icons on webpages,” “always look for source headers in emails to find out who is sending you an email,” “always use a password vault,” “always use a good virus protection program,” and “always apply patches and keep your system software updated.” All follow a familiar pattern, albeit with varying levels of specificity: they expect the user to blindly perform an action, all the time, when online.
But are these blanket suggestions really appropriate? Are they even effective, let alone necessary to do in all cases, across all organizations, by every Internet user around the world?
Answering such questions might appear unnecessary, but there is a cost involved in asking computer users to check various parts of an email’s header for each email they receive, to use a VPN, or to manage their passwords in vaults. The costs are not just in their time but also in the technical IT resources that go into supporting such practices, not to mention the aforementioned issues of users becoming habituated in flawed practices, which could increase their vulnerability to cyber compromise.
Whenever such criticisms are raised, cyber security experts resort to conceptual analogies, drawing parallels between cyber hygiene best practices and personal hygiene, to justify their suggestions. The usual argument runs along the lines of “just like washing hands, brushing teeth, or regularly taking multivitamins,” “users should do this…”; and besides, “just like personal hygiene, there is no real harm in following cyber hygiene best practice guidelines.”
But if we have learnt anything from research on public health, it is that not all suggestions are good. This is the lesson from the widespread intake of multivitamin pills as well. While most people believe vitamins are necessary, or at least that there is no harm in taking them, medical research disagrees. After reviewing multiple large-scale tracking studies, the medical community concluded that vitamins have little to no effect on reducing heart disease, cancer, cognitive decline, or memory loss. In fact, some, such as vitamin E and beta-carotene supplements, are downright harmful, reducing life expectancy rather than improving it.[iii]
Of course, there are exceptional times where vitamins are good or even necessary. Certain people—pregnant women, people living in certain regions, people suffering certain health ailments—might need a course of vitamins.[iv] These conclusions are supported by research and are based on a case-by-case assessment of the person’s needs.
The same is true for cyber hygiene best practices. Not all of them work, but some do. What works, and in which specific instances—organizational type, use environments, use cases, and user types—needs to be empirically determined and evaluated for need and contextual adequacy. Doing so is far better than blindly implementing hygiene practices on the advice of sundry sources, without assessing their applicability, only to realize years later that the effort was not only wasted but also made the organization more vulnerable to cyber attacks.
This paper presents a better approach. It begins by examining the basic concept of cyber hygiene, a term that is widely used but poorly understood and conceptualized. Next, the paper traces the roots of the concept and discusses the pitfalls of comparing it to personal hygiene. Following this, the paper presents a recently developed measurement tool called the Cyber Hygiene Inventory (CHI) and discusses how it can serve as a framework for developing need-based cyber hygiene practices.
What is cyber hygiene?
In early 2015, in the aftermath of the Sony Pictures Entertainment hack, while writing a media article on how we can prevent cyber breaches, I was searching for a term that captured what online users could do to better protect organizations from such attacks. My search led me to a 2013 Wilson Center speech in which then-Homeland Security Secretary Janet Napolitano had used the term “cyber hygiene” in the context of cyber habits.[v] I thought the term was perfect because it helped drive home the message that protecting the Internet is every user’s personal responsibility. I used the term in my article[vi] and in many others, with one local newscaster even commenting, during an interview, on the term’s simplicity and catchiness.
Thanks to its appeal, today the term is so common that a keyword search on Google returns over 33 million pages with the phrase cyber hygiene. It has appeared in public policy documents, military doctrines, congressional testimonies, media articles, research papers, and websites. All subscribe to some definition of what cyber hygiene entails and espouse all manner of best practice guidelines. Some of these guidelines target adolescents, others are for employees, some others focus on IT professionals, and still others on vulnerable populations.
But while there are many suggestions on what constitutes cyber hygiene, there is little clarity on what it does or does not entail and who should perform it. This is a problem across the globe. In comparing cyber hygiene practices across member nations, the European Union Agency for Network and Information Security (ENISA) found no single standard or commonly agreed-upon approach to it. The report instead concluded that cyber hygiene should be viewed in the same manner as personal hygiene in order to ensure the organization’s health is in optimum condition (ENISA, December 2016; https://www.enisa.europa.eu/publications/cyber-hygiene/at_download/fullReport). Thus, there is no clarity on what cyber hygiene means or entails beyond the view that it is something akin to personal hygiene. And while it is unarguable that cyber hygiene is important, is it really appropriate to think of it in terms of personal hygiene?
Is Cyber Hygiene Analogous to Personal Hygiene?
The metaphorical construction of cyber hygiene as similar to personal hygiene does not stop at its definition. It even influences how cyber safety solutions are framed. For instance, many cyber security websites use examples of hand washing and multivitamins to drive home cyber safety suggestions, such as applying virus updates and patches. Some sites go even further. One in particular, “Cyber Security is Cyber Health,”[vii] equates poor heredity in people to the use of obsolete software; the lack of vaccinations to the lack of technical safeguards; and promiscuous sex to visits to unreliable websites. It makes similar conceptual leaps linking pregnancy, fetal ultrasound, newborns, even psychological health, with some sundry facet of cyber hygiene.
Thinking in this manner adversely influences the solutions we develop. Take the case of airplane technology. Since antiquity, our mental models of flying have been based on avian flight, because the flying capabilities of birds were visible and self-evident. From the ancient Greek fable of Daedalus and Icarus mythologizing the use of bird-like wings for human flight to 20th-century attempts at fabricating aircraft wings that flapped, this analogical thinking stymied the development of aircraft technology for over two millennia. Figure 1 is the 1857 patent drawing of pioneering aviator Jean Marie Le Bris’s failed Artificial Albatross.[viii] It shows how the avian model proved to be a proverbial albatross in aircraft design. The analogies we use for thinking about cyber hygiene matter just as much.
Figure 1. Patent drawing of pioneering aviator Jean Marie Le Bris’s Artificial Albatross
There is another reason for uncoupling cyber hygiene from our mental models of personal hygiene. Personal hygiene does not have a downside. Washing hands or brushing teeth, unless done at an obsessive level, does not cause problems. But using a certain app or operating system in the belief that it is protective can enhance risk, especially when we trust such systems. For instance, telling people that “an SSL website is secure” is just bad policy, not only because many fraudulent websites also have legitimate SSL certificates but also because users conflate security with safety, wrongly thinking secure sites are authentic sites.[ix] Making such thinking even more problematic is the fact that more and more phishing websites—two out of three, according to a recent Anti-Phishing Working Group (APWG) report—have SSL certificates.[x] Users need not even compulsively enact behaviors based on such flawed beliefs: all it takes is for them to enter their credentials on what they believe is a secure, encrypted page on one of these phishing websites for a breach to occur.
The same problem plagues us when we place too much credence in a solution, again something we do not really think about in our physical hygiene. Believing that a virus protection program is inherently protective, or that every update notification it appears to produce is necessary, leads users to blindly apply patches. Unfortunately, many social engineering attacks mimic software and virus protection updates, which users unwittingly download and apply because they have been conditioned to behave that way. In this way, cyber hygiene practices can make users more rather than less vulnerable.
But there is yet another important difference between the personal and cyber realms, stemming from what they protect. Personal hygiene protects the human body from chance infections through routine preventative actions. The human body is, however, already resilient. Even without many modern hygiene solutions such as hand soap, humans can ward off many threats. The central reason is the defenses against most germs and viruses that we have evolved over millennia. Our sensory organs have evolved follicles, hair, nails, eyelashes, cilia, and mucous membranes that trap most intrusions. Our internal organs have likewise evolved complex immune responses that work independently of our need to manage or control them. These internal and external defenses work in tandem, and independently when needed, and are further protected by the human brain (as when someone impulsively swats a stinging bug). Thanks to these complex systems, most of us can live relatively long, disease-free lives with minimal need for modern medicine.
In contrast, while technology is collectively capable of highly sophisticated computational tasks, its core components are dumb circuits that are built without any effective protection and are often flawed at their very core. Take computer processing chips and memory cells, the computer’s internal organs, for example. Last year, the identification of the Meltdown and Spectre vulnerabilities demonstrated that nearly every computer chip manufactured in the past two decades has critical flaws in its algorithmic structures, rendering it vulnerable to various exploits. Similarly, dynamic memory cells, or D-RAMs, are vulnerable to leaking their electrical charges as they interact—called the rowhammer effect[xi]—which can be exploited in a D-RAM attack to gain root access to systems.
The same is the case for the “sensory organs” of computing devices: touchpads, microphones, cameras, and input devices. Each is easily corruptible using simple keyloggers and other programs. Layered on these are many apps, all using different schemes and privileges that interface with the system’s internal organs. Some of these apps are programmed poorly; others are rogue programs built to effect compromises by co-opting their privileges; still others can be manipulated by rogue programmers using malware that can infect everything from the sensory organs of the computer all the way to its internals. Finally, we have users with varying skills who use these systems and the programs on them in a multitude of ways.
What makes computing particularly different is that a single attack can cripple multiple layers of computing without needing to evolve a separate compromise for each layer. As a case in point, a single phishing email with a malware payload can trick users, circumvent many endpoint security protections, enter the core of a system, and gain a foothold. In contrast, even influenza, one of the most lethal and persistent biological viruses, which kills over 600,000 people globally each year, requires a complex series of interactions to do its damage. Over two-thirds of deaths from it are due to indirect causes such as organ failure.[xii]
Thanks to all this, human hygiene practices can accommodate wide variance in outcomes. In contrast, errors in individual cyber hygiene practices compound, because system risks multiply at every layer. For instance, a 10 percent failure rate in hand washing does little to increase infection from most diseases. A 10 percent failure rate in SSL certificates, by contrast, enhances risk by itself. If these certificates are used in email-based phishing attacks with a 10 percent relevance rate (users for whom the content is relevant), on an email network that allows 10 percent of these emails through, with just 10 percent of users clicking and enabling the malware, the probability of a breach rises to 34 percent. And these are conservative probabilities: in actuality, 30 to 70 percent of phishing emails are opened (Caputo et al. 2013)[xiii], and there are many rogue SSL certificates and pages on the Internet.[xiv] Each potential failure magnifies the overall failure rate, something that seldom occurs in human beings because of the way evolution has helped us defend ourselves.
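The compounding in this example can be sketched in a few lines of code, following the article's footnote formula. The layer labels and 10 percent failure rates are the hypothetical figures from the example, not measured values:

```python
def breach_probability(layer_success_rate: float, layers: int) -> float:
    """Probability of a breach when each of `layers` independent safeguards
    holds with probability `layer_success_rate` and a failure at any layer
    can open the door: 1 - success_rate**k, per the footnote."""
    return 1 - layer_success_rate ** layers

# Four layers from the example, each failing 10% of the time:
# SSL certificate checks, phishing relevance, email filtering, user clicks.
print(f"{breach_probability(0.90, 4):.2%}")  # 34.39%, the ~34% in the text
```

Under this series model, every additional fallible layer raises the overall breach probability, which is why the text calls the 10 percent figures conservative.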
What is clear from this is that hygiene in health and cyber hygiene are not analogous. Differences stem from the nature of computing, online threats, and users—all of which cumulatively increase the risk of a breach. Because of this, we cannot afford the same leeway with cyber hygiene that we can with personal hygiene. We need greater precision in how we define cyber hygiene and identify policies.
So what is user cyber hygiene?
Until recently, there had been few academic attempts at defining cyber hygiene. By comparing various definitions, conducting interviews with IT personnel, CSOs, and CIOs, and using a quantitative scale-development approach, Vishwanath et al. (2019) developed a conceptual definition and a multi-item inventory for measuring cyber hygiene. They define cyber hygiene as the cyber security practices that online consumers should engage in to protect the safety and integrity of their personal information on their Internet-enabled devices from being compromised in a cyber-attack (Vishwanath et al. 2019).[xv]
At the operational or measurement end, user cyber hygiene comes from the confluence of four user-centric factors: awareness, knowledge, technical capacity, and the enactment of cyber security practices. Awareness and knowledge make up the cognitive factors of familiarity and understanding. Technical capacity pertains to the availability of technologies where necessary. Finally, enactment makes up the behavioral dimension and is the utilization factor. Effective user cyber hygiene occurs at the confluence of these four factors: when users, aware of what needs to be done, are knowledgeable about it, have the required technologies and know-how to achieve it, and enact it as and when necessary.
Vishwanath et al. (2019) also developed a framework that can be applied across multiple organizational and user environments. It is organized as a multi-dimensional inventory called the Cyber Hygiene Inventory (CHI).[xvi] The CHI comprises 20 items, or questions, that tap into five dimensions of user cyber hygiene. These dimensions are organized using the acronym SAFETY: S pertains to Storage and Device hygiene; A to Authentication and Credential hygiene; F to Facebook and Social Media hygiene; E to Email and Messaging hygiene; T to Transmission hygiene; and Y stands for “you,” signifying the user’s responsibility in ensuring cyber hygiene. Each item in the inventory measures a best practice or a cyber-safety-related thought or action. While the inventory has a finite set of 20 items, it allows for the addition of questions that are often necessary to capture contextual or organization-specific practices.
Before delving into the details of the inventory, some of its facets need highlighting. First, the framework the CHI provides is broad and technology agnostic, allowing it to be applied to any organization or user group, even the residents of an area. Second, a broad inventory can be used across platforms, technologies, and applications, and at different points in time, even as platforms and functionalities evolve. Third, most questions in the CHI can be measured using standard survey approaches. Fourth, the CHI accommodates subjective and objective measurement. While knowledge, capacity, and behavioral intent can all be measured subjectively, they can also be measured objectively: with a knowledge test, by taking an inventory of the technologies available in the organization, and by observing actual behavior. Using a combination of measurement approaches has the added advantage of preventing confounds such as method bias from influencing the results. Finally, the CHI includes measures of both cognitive and behavioral factors. This is superior to extant approaches, such as using pen-testing or training data, which capture only behavior. Thus, the CHI captures information about users’ cyber hygiene with more granularity, accounts for more user-level influences, and allows for more valid measurement of users’ cyber-safety-related thoughts and behaviors.
In the ideal case, the CHI can be used to evaluate all four aspects of cyber hygiene—awareness, knowledge, capacity, and enactment—using a 0–5 scale on each of the 20 items. This gives each aspect a cumulative score ranging from 0 to 100 that is easily interpretable and comparable across the inventory’s implementations. At a minimum, the score can be used to compare awareness against knowledge, know-how, and intent among users within an organization. Using the score comes with the usual caveats: the score is inherently ordinal but is treated as ratio-level data; technical know-how is contingent on IT supplying the technologies; and responses to some enactment-frequency questions are limited by the technology, application, and platform. Most of these caveats are familiar to anyone trained in empirical social science research and can be handled through design and analysis.
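As a rough sketch of the cumulative scoring described above, assuming purely illustrative data structures rather than the published instrument:

```python
# Illustrative CHI scoring: 20 items, each rated 0-5 on each of the four
# aspects. The data shapes here are assumptions, not the published inventory.
ASPECTS = ("awareness", "knowledge", "capacity", "enactment")

def chi_scores(responses: list) -> dict:
    """Sum each aspect's 0-5 ratings over the 20 items into a 0-100 score."""
    assert len(responses) == 20, "the CHI has a fixed set of 20 items"
    assert all(0 <= r[a] <= 5 for r in responses for a in ASPECTS)
    return {a: sum(r[a] for r in responses) for a in ASPECTS}

# A user who rates every item 4 on every aspect scores 80 per aspect.
uniform = [{a: 4 for a in ASPECTS} for _ in range(20)]
print(chi_scores(uniform))
# {'awareness': 80, 'knowledge': 80, 'capacity': 80, 'enactment': 80}
```

Because each aspect sums to the same 0–100 range, awareness, knowledge, capacity, and enactment scores can be compared directly, as the text suggests.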
Thus, the CHI provides IT managers a baseline not just for understanding users but also for making strategic decisions. Often, IT managers hoping to implement various hygiene solutions need to determine their relative impacts and merits. In such instances, the CHI can help ascertain the strategic merits of an intervention and the value of the different technological solutions they plan to implement. Figure 2 provides an exemplar in which 20 items were added to the CHI’s 20 and the overall 40 items were scored on two dimensions: their utility, or security impact, and the perceived ease of using the technology, two fundamental dimensions that information systems models such as the Technology Acceptance Model (Davis et al., 1989)[xvii] have shown to predict the adoption and use of technology within organizations.
Responses from a sample of IT managers within an organization were used to develop the two-dimensional map in the figure. The map arrays each hygiene practice by its utility and ease of use across four quadrants: high security significance/utility with low enactment difficulty; low utility with low difficulty; low utility with high difficulty; and high utility with high difficulty. Using the map, IT managers can not only quantify the perceived importance of each cyber hygiene practice and the technology most closely associated with it, but also understand the relative effort, in terms of resources, and the expected outcomes of each. They can thus strategically choose the cyber hygiene practices and technologies they plan to implement.
Figure 2. Sample application of the CHI framework to make strategic decisions on organizational cyber hygiene priorities
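The quadrant assignment behind a map like this can be sketched as follows. The practices, 1–10 rating scale, and midpoint cut-off are invented for illustration, not data from the study:

```python
# Hypothetical quadrant assignment for a Figure 2-style map.
def quadrant(utility: float, difficulty: float, cut: float = 5.0) -> str:
    """Bin a practice's mean ratings into one of the four quadrants."""
    u = "High" if utility >= cut else "Low"
    d = "High" if difficulty >= cut else "Low"
    return f"{u} utility / {d} enactment difficulty"

ratings = {  # mean (utility, difficulty) ratings from a notional IT sample
    "use a password vault": (8, 3),
    "check email source headers manually": (3, 8),
}
for practice, (u, d) in ratings.items():
    print(f"{practice}: {quadrant(u, d)}")
```

Practices landing in the high-utility/low-difficulty quadrant would be the natural first candidates for implementation.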
The CHI can also be used to track the success of individual interventions and improvements toward desired levels of cyber hygiene over time. For this, IT managers can implement the CHI to compare different facets of cyber hygiene (e.g., awareness versus utilization) at different points in time, such as before and after an intervention; across different groups, e.g., different divisions of the same organization; or across different locations, e.g., one branch of an organization serving as the control while another in a different location serves as the target. The analysis can focus on charting the differences between groups, using deviation scores, or GAPs, as a metric of hygiene performance. Figures 3 and 4 provide examples of such implementations. Figure 3 charts data from a single organization’s users on their relative levels of awareness, knowledge, and technical capacity across the five SAFETY dimensions of cyber hygiene. Figure 4 tracks the relative impact of training on cyber hygiene across users in an organization where the CHI was implemented a month before and a month after training.
Figure 3. Application of the CHI to assess relative gaps in perceived awareness, knowledge, and capacity
Figure 4. Application of the CHI to assess training effects in an organization.
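The GAP metric described above can be sketched as follows, using invented dimension names and scores purely for illustration:

```python
# Illustrative GAP computation: the deviation between two CHI facets
# (here awareness vs. enactment) on each SAFETY dimension. Scores invented.
DIMENSIONS = ("storage", "authentication", "social", "email", "transmission")

def gaps(awareness: dict, enactment: dict) -> dict:
    """Per-dimension deviation scores; larger gaps flag training targets."""
    return {d: awareness[d] - enactment[d] for d in DIMENSIONS}

awareness = {"storage": 72, "authentication": 85, "social": 60,
             "email": 78, "transmission": 55}
enactment = {"storage": 50, "authentication": 70, "social": 58,
             "email": 52, "transmission": 40}
print(gaps(awareness, enactment))
```

Run before and after a training intervention, shrinking gaps would indicate that users are increasingly enacting what they claim to be aware of.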
Advantages of the CHI approach
There is no single metric for cyber hygiene, nor any existing method that delivers what the CHI does. Extant approaches to defining cyber hygiene and creating best practices—if organizations even engage in them—remain ad hoc, with most organizations adopting practice suggestions from industry groups and other sources. The CHI serves as a baseline for understanding and developing cyber hygiene practices within organizations. It also helps evaluate, develop, assess, track, and quantify cyber hygiene and ensure improvements over time.
The same is the case with the measurement of hygiene. Most organizations do not measure user cyber hygiene at all; others use proprietary approaches whose underlying algorithms remain unknown and are difficult for others to use or assess. This is the case with the U.S. Department of Homeland Security’s Continuous Diagnostics and Mitigation (CDM) program, which gives participating federal government organizations a cyber risk and hygiene scorecard. The stated reason for the lack of transparency in the program’s scoring method is the fear that it would end up in the hands of hackers.
That said, at the user end the only metrics that exist come from training and pen-testing. Both approaches, while useful, are wholly inadequate. Most use behavioral measures and fail to account for user cognition, wrongly presuming that user behavior is wholly premised on a priori thought. They also carry unknown amounts of noise stemming from the variance in pen-test approaches: the specifics of the tests, their frequency, their reach, and their timing. This makes it impossible to use these metrics to compare different organizations, let alone rely on them to judge an individual organization’s level of cyber readiness.
In contrast, the CHI provides a transparent approach, in which organizations can use and even share their scores across the 20 items without fearing that doing so would expose their weaknesses to hackers. They can maintain internal records of additional items—such as specific technological safeguards and other practices—that the organization does not wish to reveal. The quantitative metric can also be used to establish a benchmark to be improved upon as more data are shared across a sector. With more data from across an industry, industry benchmarks can be established over time, providing a more robust standard for the organizations in that sector. Thus, the CHI provides an empirically driven, widely applicable, transparent, quantitative approach for formulating, benchmarking, and tracking user cyber hygiene within organizations.
This paper discussed why drawing parallels between personal hygiene and cyber hygiene is inappropriate: doing so can stymie the development of solutions and even increase overall user cyber risk. The paper then offered a different methodology and mechanism for deriving cyber hygiene practice suggestions, one that is not prescriptive but instead empirically calibrated and contextually relevant. While the phrase cyber hygiene appears to have become part of the cybersecurity lexicon, we can still change how we conceptualize it. In the long run, security experts might even consider moving away from the term, replacing it with one such as Operational Security, or “OPSEC,” an area of practice developed by the US military that is more applicable in the security domain and can be applied without resorting to analogical leaps. OPSEC begins with the assumption that we are in an adversarial situation—a fact that is true in the domain of cybersecurity—and focuses on prioritizing information and developing approaches to ensure that those pieces of information stay protected. This shifts the focus away from global actions and their analogues in public health to tactical approaches grounded in adversarial defense. By re-conceptualizing how we think about cyber security, we can move from broad practices to specific actions, and from dictating cyber hygiene practices to protecting critical information—because, after all, that is what hackers are really after.
Note: The dependent probability is computed as 1 − 0.90^k, where k is the number of layers of vulnerability.
[i] Burr, W. E., Dodson, D. F., & Polk, W. T. (2004). Electronic authentication guideline (NIST Special Publication 800-63 Version 1.0). Gaithersburg: National Institute of Standards and Technology.
[ii] Microsoft. (2016, August 31). Best practices for enforcing password policies. Retrieved from https://docs.microsoft.com/en-us/previous-versions/technet-magazine/ff741764(v=msdn.10)?redirectedfrom=MSDN
[iii] Is there really any benefit to multivitamins? (n.d.). Johns Hopkins Medicine. Retrieved from https://www.hopkinsmedicine.org/health/wellness-and-prevention/is-there-really-any-benefit-to-multivitamins
Goodman, B. (2014, February 24). Healthy adults shouldn’t take vitamin E, Beta Carotene: Expert panel. MedicineNet. Retrieved from https://www.medicinenet.com/script/main/art.asp?articlekey=176905
[iv] Scholl, T. O., & Johnson, W. G. (2000). Folic acid: Influence on the outcome of pregnancy. The American Journal of Clinical Nutrition, 71(5), 1295S-1303S.
[v] Spiering, C. (2013, January 24). Janet Napolitano: Internet users need to practice good ‘cyber-hygiene’. Washington Examiner. Retrieved from https://www.washingtonexaminer.com/janet-napolitano-internet-users-need-to-practice-good-cyber-hygiene
[vi] Vishwanath, A. (2015, February 24). Before decrying the latest cyberbreach, consider your own cyberhygiene. The Conversation. Retrieved from https://theconversation.com/before-decrying-the-latest-cyberbreach-consider-your-own-cyberhygiene-37834
[viii] Early flying machines. (n.d.). Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Early_flying_machines
[ix] Hassold, C. (2017, November 2). Have we conditioned web users to be phished? PhishLabs. Retrieved from https://info.phishlabs.com/blog/have-we-conditioned-web-users-to-be-phished
[x] Anti-Phishing Working Group. (2019). Phishing activity trends report: 3rd Quarter 2019 [PDF document]. Retrieved from https://docs.apwg.org/reports/apwg_trends_report_q3_2019.pdf
[xi] Kim, Y., Daly, R., Kim, J., Fallin, C., Lee, J. H., Lee, D., Wilkerson, C., Lai, K., & Mutlu, O. (2016). Rowhammer: Reliability analysis and security implications. arXiv preprint arXiv:1603.00747.
[xii] Jabr, F. (2017, December 18). How does the flu actually kill people? Scientific American. Retrieved from https://www.scientificamerican.com/article/how-does-the-flu-actually-kill-people/
[xiii] Caputo, D. D., Pfleeger, S. L., Freeman, J. D., & Johnson, M. E. (2013). Going spear phishing: Exploring embedded training and awareness. IEEE Security & Privacy, 12(1), 28-38.
[xiv] Vishwanath, A. (2018, September 1). Spear phishing has become even more dangerous. CNN. Retrieved from https://www.cnn.com/2018/09/01/opinions/spear-phishing-has-become-even-more-dangerous-opinion-vishwanath/index.html
[xv] Vishwanath, A., Neo, L. S., Goh, P., Lee, S., Khader, M., Ong, G., & Chin, J. (2020). Cyber hygiene: The concept, its measure, and its initial tests. Decision Support Systems, 128, 113160.
[xvi] Vishwanath, A., Neo, L. S., Goh, P., Lee, S., Khader, M., Ong, G., & Chin, J. (2020). Cyber hygiene: The concept, its measure, and its initial tests. Decision Support Systems, 128, 113160.
[xvii] Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003.
December 2019, (c) Arun Vishwanath, PhD, MBA; Email: email@example.com
Keywords: cyber hygiene, science of cyber security, human factors, OPSEC