Data Security in the Cloud: Part 1 [Published in Ipswitch]

The adoption of public cloud computing makes user data less secure. And it’s not for the reasons most in IT realize.

In the first part of this series, I explain why; solutions follow in part 2.

Most users experience the cloud as online software and operating environments (e.g., Google's App Engine, Chrome OS, and Docs) and as online backup, storage, and file-sharing systems (e.g., Dropbox, iCloud).

Adopting such services makes sense. Cloud providers have deeper resources, better technical talent, and greater capacity for predicting and reacting to adverse events. This lowers the probability of data loss and outages, whether accidental or malicious in origin. Cloud-based services also reduce in-house processing power requirements while meeting the varied data access needs users have today, cutting the costs of maintaining hardware, software, and support staff.

Most Companies Have Adopted the Public Cloud

Recognizing these advantages, some 91 percent of organizations worldwide have already adopted public cloud computing solutions and around 80 percent of enterprise workloads are expected to shift to cloud platforms by year’s end.

But cloud computing solutions also bring new technical challenges that can expose the enterprise to cyber-attacks. Many of these are well known in cybersecurity circles and have proven fixes, including mechanisms for auditing security vulnerabilities both at the provider end and on client machines, for assuring the availability and integrity of hosted services through encryption, and for granting and revoking access.

Outside of these, however, several vulnerabilities arise from using cloud services. These are user- and usage-driven issues that most in IT ignore, preferring to write them off with the "people will always be a problem" adage rather than tackle them. In consequence, these threats are seldom researched, yet they make data hosted on the cloud even more susceptible to being hacked.

For one, cloud-based file sharing routinizes the receipt of hyperlinks in emails. Keep in mind that hundreds of providers make up this market. Most organizations use at least five different cloud services, and most users subscribe to an ecosystem of their own liking. This translates to numerous cloud-service-generated hyperlinks that users frequently send and receive via email and apps on different devices.

Once users grow accustomed to complying with such emails, opening hyperlinks becomes routine, making them far more likely to click malicious hyperlinks in spear-phishing emails that mimic them.

Convenience is not Always Secure

Making things worse is the design of cloud-sharing services. In their bid to make sharing convenient, services such as Google Drive, Google Photos, and Dropbox send out pre-crafted email notices of shared files.

The email notice usually contains only a few pieces of variable information: the name of the sender, the hyperlink, and some information about the file being shared through the link. The rest of the space is occupied by branding information (such as the name of the cloud provider and their logo). Thus, users have just a few pieces of information for judging the authenticity of what’s being shared.

But in many cloud services, while the email displays the sender's name, it doesn't come from the sender's inbox. Instead, it comes from a provider-specific no-reply inbox whose address often contains alpha-numeric strings that change each time: Google Drive, Dropbox, and Google Photos each use their own. No user can remember these inboxes, so users have no way of knowing whether such emails are authentic. Furthermore, cybersecurity awareness training cautions users against opening emails from strange and unknown inboxes. Thus, every time users open a cloud-shared hyperlink, they violate the safety principles they were taught, which erodes their belief in the validity of the rest of their security training and opens them up to even more online attacks.
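One mechanical check that doesn't rely on memorizing no-reply addresses is comparing the From header's display name against the address's actual domain. Below is a minimal sketch of that idea; the brand-to-domain map and the example addresses are made-up illustrations, since real provider sending domains vary and change over time:

```python
from email.utils import parseaddr

# Illustrative mapping of display-name brands to the domain they should
# imply. A real tool would need a maintained, provider-supplied list.
EXPECTED_DOMAINS = {
    "google drive": "google.com",
    "dropbox": "dropbox.com",
}

def from_header_consistent(from_header: str) -> bool:
    """Flag emails whose display name claims a provider but whose
    address domain doesn't match that provider."""
    name, addr = parseaddr(from_header)
    domain = addr.rsplit("@", 1)[-1].lower()
    for brand, expected in EXPECTED_DOMAINS.items():
        if brand in name.lower():
            return domain == expected or domain.endswith("." + expected)
    return True  # no claimed brand to check against

print(from_header_consistent("Google Drive <drive-shares@google.com>"))    # True
print(from_header_consistent("Google Drive <share@g00gle-files.example>"))  # False
```

A check like this catches only the crudest spoofs, which is precisely the point: even this modest verification is more than the notification emails themselves give users any way to perform.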

Hyperlinks Shared Through Cloud Services

A similar issue plagues the hyperlinks shared through cloud services. Most contain special symbols and characters, and there is no simple way for users to assess their legitimacy. Given how these links are generated and shared, users cannot paste them into a search engine or a browser without triggering them. Nor can users forward privately shared links to a sandboxed device or to another person with expertise, because access is tied to the recipient. All users can do is rely on the information in the email; checking anything further requires opening the hyperlink.
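The one property of a share link a user could inspect without opening it is whether its hostname belongs to the provider they expect. A hedged sketch of such a check follows; the allowlist is illustrative, not an official list, and the last-two-labels heuristic is deliberately crude:

```python
from urllib.parse import urlparse

# Illustrative allowlist; the domains real sharing services use
# vary by provider and change over time.
TRUSTED_SHARE_DOMAINS = {"google.com", "dropbox.com"}

def registrable_domain(host: str) -> str:
    """Crude last-two-labels heuristic; a real check would use the
    Public Suffix List to handle domains like example.co.uk."""
    parts = host.lower().rstrip(".").split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

def link_looks_trusted(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return registrable_domain(host) in TRUSTED_SHARE_DOMAINS

print(link_looks_trusted("https://drive.google.com/file/d/abc123/view"))  # True
print(link_looks_trusted("https://drive.google.com.evil.example/view"))   # False
```

Note how the second example defeats casual visual inspection: the trusted name appears in the hostname, but the registrable domain is the attacker's. This is exactly the judgment users are being asked to make by eye.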

Outside of the sending inbox and the hyperlink, the only other varying indicator in a cloud-sharing email is the extension of the shared document (such as whether it is a .DOCX or a .MOV file), usually accompanied by an icon showing the type of file attached (e.g., a PDF icon). Neither was ever designed to serve as a yardstick for gauging the authenticity of shared files.

As my research on user cognition shows, people form several false assumptions about online risk. For instance, many people believe that PDF documents are secure because they cannot edit them, which, of course, has nothing to do with the security of the file type. These mistaken assumptions, what I call Cyber Risk Beliefs, are not only triggered by icons and file extensions but also dictate how users react to them. So, seeing a PDF extension or icon, which can easily be spoofed, and believing it is secure further increases the likelihood that users will open cloud-sharing hyperlinks that may actually be spear phishing.
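An extension or icon is trivial to spoof because it is just metadata; a file's leading "magic bytes" are what actually identify it. The sketch below illustrates the gap, using only a few common signatures (real tools such as the Unix `file` command carry far larger databases):

```python
# Magic-byte prefixes mapped to what the file actually is.
MAGIC = {
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip-container",  # .docx/.xlsx are ZIP containers
    b"MZ": "windows-executable",
}

def sniff_type(data: bytes) -> str:
    for prefix, kind in MAGIC.items():
        if data.startswith(prefix):
            return kind
    return "unknown"

def extension_matches(filename: str, data: bytes) -> bool:
    """Does the claimed extension agree with the actual content?"""
    claimed = filename.rsplit(".", 1)[-1].lower()
    expected = {"pdf": "pdf", "docx": "zip-container", "zip": "zip-container"}
    return expected.get(claimed) == sniff_type(data)

# A "PDF" that is really a Windows executable fails the check.
print(extension_matches("invoice.pdf", b"MZ\x90\x00"))   # False
print(extension_matches("report.pdf", b"%PDF-1.7 ..."))  # True
```

The point is not that users should run such code, but that the cues they are given (icon and extension) sit entirely on the spoofable side of this divide.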

Finally, the display of all these pieces of information is further circumscribed on smartphones and tablets. Depending on the app and device, brand logos and other graphical information are sometimes not displayed, sender information is auto-populated from the device's contact book, and the UI action buttons, such as "Open," "Download," and "View," are made prominent. These are deliberately designed to move the user along to a decision, which almost always is to comply with the request rather than to pause or exercise conscious care.

Such design issues plague many communication apps accessed on mobile devices, something I highlighted in my 2019 Verizon DBIR write-up. But they are even more problematic in cloud-based file sharing because, unlike email, which receivers by default expect to be personalized (a subject line, some salutation, and always a message), the established norms for cloud file sharing are exactly the opposite: users seldom expect personalization, almost never include a message, and often cannot even add a subject line. This not only makes it easier to create spoofed cloud-sharing emails but also gives users a particularly hard time discerning them on mobile devices.

Wrapping Up Cloud Security

All these issues are usage driven and stem from the success of the cloud. This means they are unlikely to go away, and the widespread adoption of the cloud, a market Gartner expects will exceed $220 billion by 2020, will only increase their scale. Given the volume of data increasingly stored on the cloud, the availability of so many user-level vulnerabilities is fodder for social engineers looking for easy ways to hack that data.

And this is already afoot: Dropbox, Google Drive, and Adobe accounts are now among the most common lures used in spear-phishing emails. In 2019, one in four breaches in organizations involved cloud-based assets, and a whopping 77% of these breaches happened through a phishing email or web application service; that is, the attacks spoofed cloud-service emails and contained hyperlinks that led users to watering holes.

Keep in mind that these vulnerabilities exist in almost all cloud services, which means breaches can occur in any of them. But because of how users form beliefs about online risk, a breach in one would likely undermine their trust in all cloud platforms. So resolving these issues is necessary not just for better protecting data but also for ensuring the continued adoption of the cloud.

How we do this, I discuss in Part 2.

*A version of this post appeared here:

Why do we still teach our children ABC? [Published in Medium]

“Why do you teach me ABC?” My precocious preschooler pointed to the virtual QWERTY keyboard on the tablet: “Why not ASD?”

As someone who studies the diffusion of innovations — how people learn and adopt new ideas and techniques — I wondered why indeed?

And not just the ABC sequence. Many preschoolers already know words like Xbox, Yahoo, and Zoom better than the xylophone, yacht, and zebra we have them learn by rote. Wouldn't teaching children the words that hold more meaning for them help keep pace with their experiences?

Of course, the QWERTY sequence is itself a product of modern technology. The layout was engineered by placing commonly typed characters farther apart to keep the type bars of early manual typewriters from jamming when struck in quick succession. Although completely unnecessary on today's electronic keyboards, it has resisted all attempts at improved designs over the past 50 years. Teaching the sequence would, therefore, also be practical because it is the accepted norm, appearing in every input device from ATMs to airplane flight controls.

Many people, however, believe that the ABC sequence has remained fixed, while in actuality it has changed over time. Our 26-letter alphabet began sometime around the 15th century BCE in the Sinai as a 22-character set, evolved with the Greeks into 25, and on through the Romans into Latin and the present set of 26. Z, which used to appear after F in Old Latin, was replaced with G and transposed to its present placement. Here, too, technology and human development played a role. With migration and the expansion of people's vocabulary, new inflections in speech arose, necessitating newer letters such as W. With the invention of writing tools and printing technologies came cursive scripts, lowercase letters, and the development of standardized font families. Thus, the ABC sequence is nothing more than a norm that people have agreed upon over time — no different from QWERTY.

But there is an even stronger argument for teaching the newer sequence. Keyboards are tools for expression, no different from what pens are to writing or language is to literacy. And the sooner you are proficient with a tool, the better you can get at using it. Just as cultures with written languages, because of their ability to transmit knowledge with far greater accuracy, evolved to overtake cultures with only spoken language, being adept at using the tools of expression sooner could lead to a higher quality of knowledge transmission. Thus, adapting to the QWERTY sequence sooner would confer an evolutionary advantage on our children and likely even on all of us.

But that's not all. Computing technology has also altered the way we write. Not only do we not use quills and fountain pens, we rarely write by hand at all. And this has happened rather fast, even faster than the centuries it took for the evolution of alphabets and font families. Raised in the 1970s, I was taught to write in cursive, a skill seldom taught in US schools anymore. Instead, children in 3rd and 4th grade today "write" on computers, where not just the writing style but also the process of writing is different.

Because you could only rewrite a document so many times, writing by hand, even on manual typewriters, required thinking before committing words to paper. Modern computers make innumerable drafts possible, which lets us think as we write, without paying attention to style, spelling, or grammar in the initial drafts. This has changed how we write. As the renowned social psychologist Daryl Bem advocates in his oft-cited guide, "…write the first draft as quickly as possible without agonizing over stylistic niceties."

Newer word-processing apps have altered this process even further. While the ever-popular Microsoft Word allows for a sequential documentation of thoughts, newer apps like Textilus and Scrivener encourage non-sequential writing, allowing authors to tackle different sections simultaneously in draft form. Adding to this are advances in voice-to-text programs and machine-learning tools that can capture spoken words and suggest intelligent responses. Many of these, accessible at literally the flick of a wrist on many smartwatches and phones, have changed not just how we write but also our role as writers.

Photo by Austin Distel on Unsplash

Finally, our idea of literacy itself is expanding. It's about more than knowing how to write; it's about being able to express information creatively. Children need to be adept not only at computing but also at finding information online, crafting persuasive content, and, while doing all of this, protecting their information trails. This requires two additional skills: digital literacy and cyber hygiene. The former equips them with information assessment skills, so they can find the right information and protect against disinformation. The latter instills digital safety skills, so they aren't manipulated online and their information isn't compromised. Both are essential for thriving in the virtual world where most of them spend their waking hours, even more so since the pandemic.

Children are already familiar with an alphabet soup of online services before they step into a classroom. These skills are thus best introduced in their formative learning, not in middle school and college, where they are presently taught. This will ensure that the next generation is equipped to transmit information with even greater accuracy and creativity all the sooner — an advantage that will accrue to them and to our society as a whole. The first step involves mastering the QWERTY keyboard.

*A version of this post appeared here:

COVID-19’s Lessons About Social Engineering [Published in Dark Reading]

Photo by Brian McGowan

Unless we do something proactively, social engineering’s impact is expected to keep getting worse as people’s reliance on technology increases and as more of us are forced to work from home.

Contact tracing, superspreaders, flattening the curve — concepts that in the past were the domain of public health experts are now familiar to people the world over. These terms also help us understand another virus, one that is endemic to the virtual world: social engineering, which comes in the form of spear phishing, pretexting, and fake-news campaigns.

As quickly as the coronavirus began its spread, news reports cautioned users about social engineering attacks touting fake cures and contact-tracing apps. This was no accident. In fact, there are a number of parallels between the human transmission of COVID-19 and social engineering outbreaks:

  1. Just as coronavirus transmits from person to person through respiratory droplets, social engineering passes from user to user through infected computing devices. Because of this transmission similarity, just as infected people act as superspreaders for COVID-19 by virtue of their physical proximity to many others, some technology users act likewise. These tend to be people with many virtual friends or subscriptions to many online services, who consequently have a hard time discerning a real notification from one of these personas or services from a fake one. Such users are prime targets for social engineers looking for a victim who can provide a foothold into an organization's computing networks.
  2. The vast majority of people infected with this coronavirus have mild to moderate symptoms. The same is the case with most victims of social engineering, because hackers usually lurk imperceptibly as they make their way through corporate networks. They often go undetected for months — on average, at least 101 days — showing no signs or symptoms.
  3. Just as no one has immunity from COVID-19, no one is immune against social engineering. By now everyone, all over the world, has been targeted by social engineers, and many — trained users, IT professionals, cybersecurity experts, and CEOs — have fallen victim to a spear-phishing attack.
  4. COVID-19's outcomes are worse for people who have prior health conditions and for people who are older. Similarly, the outcomes of social engineering are worse for users with poor computing habits and poor technical capabilities. Many of these tend to be senior citizens and retired individuals who lack updated operating systems, patches that protect them from infiltration, and access to managed security services.
  5. Finally, personal hygiene — hand washing, use of masks, social isolation — is the primary protection against coronavirus infection. Likewise, for protecting against social engineering, digital hygiene — protecting devices, keeping updated virus protections and patches, and being careful when online — is the only protection that everyone from the FBI to INTERPOL has in their arsenal.

But beyond these similarities, social engineering outbreaks are actually harder to control than coronavirus infections:

1. Social engineering infections pass through devices wirelessly, making it hard to contact-trace infection sources, isolate machines, and contain them.

2. There are well-established scientific processes that the medical community has developed to identify knowledge gaps about coronavirus. This helps researchers focus. In contrast, even the fundamentals of social engineering — such as when it's correct to call an attack a breach or a hack — lack clarity. It's hard to do research in an area when there is no consensus on what the problem should be called or where it begins and ends.

3. While human hygiene is well researched, digital hygiene practices aren't. For instance, in 2003, NIST developed password hygiene guidelines asking that all passwords contain letters and special characters and be changed every 90 days. The guideline was developed by studying how computers guessed passwords, not how humans remembered them. Consequently, users the world over reused passwords, wrote them down on paper to aid their memory, or blindly entered them into phishing pages that mimicked various password-reset emails — until 2017, when these problems were recognized and the policy was reversed.
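The 2017 revision (NIST SP 800-63B) dropped forced composition rules and scheduled resets in favor of length checks and screening against known-breached passwords. A minimal sketch of the newer approach, where the breached-password set is a tiny illustrative stand-in for a real corpus such as the Have I Been Pwned dataset:

```python
# Tiny illustrative stand-in for a breached-password corpus.
BREACHED = {"password", "123456", "P@ssw0rd1"}

def acceptable_password(pw: str, min_len: int = 8, max_len: int = 64) -> bool:
    """NIST SP 800-63B style check: enforce length and breach screening,
    not character-class composition or periodic rotation."""
    if not (min_len <= len(pw) <= max_len):
        return False
    return pw not in BREACHED

print(acceptable_password("P@ssw0rd1"))              # False: breached despite "complexity"
print(acceptable_password("correct horse battery"))  # True: long, memorable passphrase
```

The first example is the crux: a password that satisfies every 2003-era composition rule fails the modern check precisely because humans, forced to satisfy those rules, converged on the same predictable patterns.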

4. Evidence points to those who have recovered from coronavirus having at least short-term immunity to it. In contrast, organizations that have had at least one significant social engineering attack tend to be attacked again within the year. Because hackers learn from every attack, this suggests that the odds of being breached by social engineering actually increase with each subsequent attack.

5. Our response to COVID-19 is informed by reporting throughout the healthcare system. Unfortunately, there is no similar reporting mechanism for social engineering. For this reason, a hacker can conduct an attack in one city and replicate it in an adjoining city, all using the same malware that could have easily been defended against had someone notified others. We saw this trend play out in ransomware attacks that crippled computing systems in Louisiana’s Vernon Parish in November 2019, quickly followed by six other parishes, and continuing through the rest of the state in February 2020.

Because of these factors, the economic impact of social engineering continues to grow. There has been a 67% increase in security breaches in the past five years, and last year companies were expected to spend $110 billion globally to protect against it. This makes social engineering one of the biggest threats to the worldwide economy outside of natural disasters and pandemics.

Just as we are fighting the pandemic, we must coordinate our efforts to combat social engineering. Without that, there will be no vaccine or cure. To this end, we must develop interorganizational reporting portals and early-warning systems to warn other organizations of breaches. We also need federal funding for basic research on the science of cybersecurity, along with evidence-based digital hygiene initiatives that provide best practices that take into account users and their use cases. Finally, we must enlist social media platforms to trace the superspreaders among their users, and develop open source awareness and training initiatives to protect them and the cyber-vulnerable from future attacks.


Unless we do something proactively, social engineering’s impact is expected to keep getting worse as people’s reliance on technology increases and as more of us are forced to work from home, away from the protected IT enclaves of organizations. We may in the end win the fight against the coronavirus, but the war against social engineering has yet to begin.


*A version of this piece appeared in Dark Reading:

Improving Everyone's Ability to Work from Home After the Pandemic [Published in Ipswitch]

Photo by Corinne Kutz

Two out of three Americans with jobs are already working from home because of the pandemic. Many will have to continue if the pandemic recurs. But millions are unable to, and are without jobs, because of significant barriers imposed by technology, regulation, and organizational preparedness.

One technological barrier is the lack of universal high-speed Internet connectivity. People at home today run multiple devices for everything from making video calls to streaming entertainment, participating in meetings, and doing classwork. This requires fiber-based Internet access that allows gigabit speeds rather than older cable- and telephone-based connectivity.
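To see why legacy connections fall short, add up the concurrent demand of a typical locked-down household. The per-activity figures below are illustrative assumptions, not measurements:

```python
# Illustrative per-activity bandwidth assumptions, in Mbps.
household = {
    "two HD video calls": 2 * 3.0,
    "one 4K stream": 25.0,
    "online classwork": 5.0,
    "cloud backup upload": 10.0,
}

total_mbps = sum(household.values())
print(f"Concurrent demand: {total_mbps} Mbps")
# Well within a gigabit fiber plan, but enough to strain many
# legacy DSL lines once overhead and peak-hour congestion hit.
```

Even under these conservative assumptions, sustained demand sits in the tens of megabits per second, and it is the upload side (calls, backups) where older cable and telephone connections are weakest.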

But outside of major American cities, most of us are served by poor-quality, lower-speed Internet services built on outdated infrastructure. The reason is that in most market areas, legislative barriers have limited competition, keeping the cable and telephone companies as virtual monopolies that can charge higher prices while continuing to invest little in improving product quality. Because of this, rural residents, the urban poor, and consumers in many smaller urban areas either don't have good access, cannot afford it, or have limited choice.

Another technological barrier to remote work is the outdated software and operating systems that many companies utilize, which are incompatible with what people use at home. For instance, close to 82% of medical imaging devices in US hospitals still run Windows 7 and XP-based systems. There are about 200 million computers worldwide still running such outdated systems, including 30,000 machines in Germany's local government offices and 50,000 in Ireland's healthcare system. The reason for such practices is legacy programs that can only run on older operating systems, which many organizations continue to support. Because of such systems, people whose work relies on these older programs cannot use them remotely from their updated computing devices at home.

Yet another barrier comes from data protection laws. From HIPAA, which governs access to electronic patient health information (ePHI), to the European Union's data portability laws, various regulations protect user data from cyber criminals by restricting access to it outside of secure work computers and servers. But these laws were formulated in the pre-pandemic era, when employees had the luxury of working from offices. Layered on such laws are organizational IT policies, which often impose their own restrictions on how employees can access data.

Photo by Glen Carrie

But it is because of such restrictions that Facebook's content moderators all over the globe cannot presently work from home—which has also reduced their ability to keep misinformation and online scams from going viral. Similarly, concerns about cyber breaches have led organizations to require that their employees use virtual private network (VPN) services when connecting from home. Using a VPN is hard enough for users with poor technology skills, but even for the technologically adept, it lowers Internet speeds, especially when there is a significant increase in the load on VPN servers, as is now the case. Thus, regulatory concerns cause restrictions and delays that make for a frustrating remote work experience.

The final factor limiting remote work is cyber risk from the user. While many users can be trusted with remote data access, many others cannot. This is not just because some people have lower technical skills but also because many users' digital hygiene levels are unknown. This is a pivotal issue because regulations such as HIPAA require organizations to conduct risk assessments to address vulnerabilities from remote data access. But this is easier said than done. In an era when opening a single phishing email could launch ransomware that jumps from home to work networks and cripples an entire organization's systems, the risk to the enterprise comes not just from the employee working at home but from their entire family. Hence, organizations would rather limit who can work remotely than risk a devastating enterprise-wide lockout.


Making it possible for more of us to work remotely from home will require a concerted effort from the government, educational institutions, and organizations.

The starting point is improving residential Internet access. The digital divide is no longer about just having Internet access but about having universal access to fiber at an affordable price. With 5G years away from being universal, we have to reimagine competition among Internet providers. This involves removing the legislative restrictions that prohibit competition among providers and, in some cases, prevent municipalities from developing fiber networks. A good example is Chattanooga, Tennessee, where the local government developed its own fiber network, which not only made gigabit-speed service locally available for a competitive price but also recovered the setup costs and led to a technology start-up boom in short order.

Next, organizations must plan on developing an agile workforce. Most organizations currently support only a fraction of their workforce's remote work needs. For instance, the US Air Force VPN system is built to support only a quarter of its 275,000 civilian workers and contractors. Organizations can invert this by investing in virtualization to run legacy software, allowing more employees to bring their own devices (BYOD), and moving toward a cloud-based infrastructure. This will create the ability to run legacy software on remote machines while also quickly upgrading the technology used within organizations.

The final issue is reducing cyber risk from users. Models for this already exist in the systems used for evaluating financial credit scores and issuing driver's licenses, which were developed for similar reasons—to estimate risk and ensure that people meet minimal standards of performance and safety. Just as we do with driver's licenses, we need to establish federal standards for user risk assessment that mandate cyber safety training and award users a personal cyber risk score. Cyber safety training must begin in K-12, when most children already use computing systems, and become part of standard university curricula. The risk scores should also be portable between jobs and accessible to employers, and users should be able to improve them through additional certifications provided by for-profit training companies. With everyone trained, the overall cyber risk to organizations from users will fall, as will concerns about remote access.

Providing better Internet access, creating an agile workforce, and mandating cybersecurity training will help us combat not just this pandemic's recurrence but also any future natural or man-made catastrophe. We have been saved from a complete economic meltdown by a technology—the Internet—that was built in anticipation of a nuclear fallout that thankfully never happened. Thanks to such forward thinking, we today have the capacity to continue working, teaching, and even performing medical diagnoses online. Building capacity must likewise be done years, if not decades, in advance, and we must prepare for a future where more people can continue working from home.




* A version of this post appeared here:

Stopping the Dark Triad from Impacting Our Response to COVID-19 [Published in Ipswitch]

Last week, New York City Mayor Bill de Blasio warned residents of a widespread Twitter and text-message circulated misinformation campaign falsely claiming that Manhattan was under quarantine.

Around this time, Attorney General Barr and U.S. Attorneys from various states were also warning residents of spear-phishing emails, fake websites, neighbor-spoofing calls that mimic local area codes, and text messages making all manner of fake claims. Some offered free COVID-19 tracking apps only to inject malware; others spoofed websites such as Johns Hopkins University's and provided false information; still others emailed, called, and texted residents offering free iPhones, groceries, treatments, cures—and whatever else—preying on our collective anxieties during this pandemic.

And this wasn't just happening in the US. Similar attacks were being reported in Australia and the European Union.

There is a common thread connecting all these attacks. They are all part of what I call the Internet's Dark Triad: hacking, misinformation, and trolling, three types of attacks that usually feed off each other and, when working in concert, are especially potent.

We saw the triad at work together during our last presidential election when the Russians hacked into the DNC, used the stolen data to seed misinformation websites, and organized a trolling campaign to reframe, retweet, and relentlessly disseminate the information throughout the US.

While it is easy to think of each of these types in isolation, thinking of them as parts of a whole makes it easier to appreciate their impact and, more importantly, to deal with them.

For one, the triad feeds on fake social media profiles, spoofed neighbor phone numbers, and spoofed email addresses. For instance, in Erie County, NY, someone impersonated a local TV station and tweeted fake news about the virus. Such attacks are very easy to foment, given the easy access everyone has to social media and VoIP services.

We have left the responsibility of curating content and profiles to individual media organizations, almost all of whom have resorted to internal processes. These processes involve some automation but, given the nuanced and equivocal nature of content, largely rely on human curators, whom these organizations employ by the thousands.

But even in normal circumstances, as in the days before the pandemic reached our shores, the process was found lacking. Now, many content curators are at home, and most aren't even allowed to do their work there because of the offensive, sensitive, and graphic nature of the content they deal with. This means that at the time when social media matters most to users, its content is most vulnerable to misuse.

Instead of leaving this problem in the hands of individual social media organizations, which are all creating organizational silos of vital information, we need these organizations to come together and coordinate their efforts. Media organizations should create a centralized data repository in which they pool their profile and content data. This database should be accessible to researchers and other media organizations, especially the regional and local media houses that don't have the depth in technical skills or manpower to keep track of ongoing attacks. Having a centralized repository of spoofed profiles and phone numbers would allow us to identify attacks before they become widespread and to inform local agencies and residents.

Two, in his press release, Attorney General Barr asked Americans to report COVID-19 related cyber-attacks to the National Center for Disaster Fraud (by calling 1-866-720-5721 or by e-mail). But there are already several other federal and local agencies collecting similar reports, including the FTC and the purpose-built reporting portal of the FBI’s IC3, among many others.

Having users report on various portals needlessly duplicates efforts, not to mention wastes resources and confuses users. These efforts also need to be unified. Just as social media profiles and phone numbers are reused, so are spear phishing email accounts, their persuasive ploys, and the malware they carry. Centrally collecting reports and developing a consumer-focused information portal would allow us to track attacks, identify the ones that are most virulent, and provide support to users—all of whom are working from home networks, without the benefit of professional IT support.

Finally, at a time of anxiety, people turn to others for information and social support. It is, therefore, our responsibility to ensure that we don’t forward along false information—and give the Dark Triad the oxygen it needs to function. We must become vigilant about the information we encounter on our media feeds: check the sources of the information we receive, search online for corroborating information, report malicious activity we encounter, and become responsible content curators for others in our sphere of influence.

We will, with our collective efforts, overcome this virus. For now, it’s our individual responsibility to ensure that neither the virus nor the Dark Triad succeeds.

*A version of this post appeared here.

It’s 2020: Do we need more cyber hygiene? [Published in InfoSecurity Magazine]


This month we learned that a US maritime base had to be taken offline for more than 30 hours because of a ransomware attack that interrupted cameras, doors, and critical monitoring systems. It’s not the first such attack, and it’s most definitely not the last.

Following it will be the usual drumbeat: a call for more cyber hygiene. Cyber hygiene was last decade’s elixir for protecting against all cyber incidents, from Ring camera hacks to ransomware. It has appeared in congressional testimonies, policy documents, and countless websites—16 million of them when I last searched.

But, judging from the continuing news of breaches and calls for more of it, it appears we never have enough of it. Or do we?

The answer to this question is surprisingly hard to find, because no website tells you how much cyber hygiene you need, or whether you have enough.

Most begin by comparing cyber hygiene to personal hygiene—the cyber equivalent of washing your hands—to dole out some “always do this” advice—such as always use long, complex passwords (with uppercase and non-alphabetic characters) to ensure cyber safety. Herein lies the problem, and the reason why we haven’t achieved cyber hygiene yet.

The fact is that cyber hygiene is nothing like personal hygiene. Over the centuries, our bodies have evolved outer and inner defenses, from hair and skin to white blood cells. This is why we can fend off all manner of germs, even though everyone from healthcare professionals to food service workers inadequately washes their hands.


In contrast, the components of computers are dumb circuits, many with flaws and without protections. In 2018, we learned of defects in every computer chip manufactured in the last two decades, and there are many more vulnerabilities in the external sensory organs of computers (keyboard, camera, microphone) and in applications and operating systems.

Any of these can be compromised using malware deployed in spear phishing emails, and all it takes is a single inadvertent click on the email to cripple an entire corporation. Cyber hygiene doesn’t afford us the same room for error that personal hygiene does.

It gets worse: while little harm can come from hand-washing, blindly following a cyber hygiene best practice can be harmful to your cyber health. For instance, many users are told to look for SSL icons (the green padlock) on their browsers next to a website’s name to assess its veracity, but aren’t told that many phishing websites – two out of three in a recent survey – also possess these icons.

Many such purported best practices are poorly developed, often without considering their real-world use environments. Such was the case with the 2004 NIST guideline advocating complex passwords, which was based on how easily computers could crack them rather than on how people remember passwords. Users, constantly bombarded with password change requests, began reusing passwords and became accustomed to such requests—something hackers mimicked in spear phishing attacks.

The NIST guidelines were reversed in 2017, but by then countless compromises were likely caused by it. We cannot afford another decade of such missteps.
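The contrast between the two eras of guidance can be made concrete with a small sketch. This is illustrative only, not the actual NIST rules: the first checker captures the 2004-style flavor (demand character classes), while the second captures the post-2017 flavor (favor length and screen against known-breached passwords; the breach list here is a tiny stand-in).

```python
import re

# 2004-style composition rule: demand character classes regardless of length.
def meets_2004_style_policy(password: str) -> bool:
    return (len(password) >= 8
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[0-9]", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

# 2017-style rule: favor length and screen against compromised passwords
# instead of forcing composition. (Tiny stand-in for a real breach corpus.)
KNOWN_BREACHED = {"P@ssw0rd!", "Winter2017!"}

def meets_2017_style_policy(password: str) -> bool:
    return len(password) >= 8 and password not in KNOWN_BREACHED

# A long, memorable passphrase fails the old rule but passes the new one,
# while a short "complex" password found in breach dumps does the opposite.
print(meets_2004_style_policy("correct horse battery staple"))  # False
print(meets_2017_style_policy("correct horse battery staple"))  # True
print(meets_2004_style_policy("P@ssw0rd!"))                     # True
print(meets_2017_style_policy("P@ssw0rd!"))                     # False
```

The point of the sketch is the usability gap: the old rule rejects exactly the kind of password people can actually remember.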

To begin, we have to stop espousing broad cyber hygiene best practice suggestions without testing their need and efficacy in real user environments.


Second, we need to move from simply asking people to do things that keep them safe to explaining why. Be it two-factor authentication or the application of software patches, every best practice has its limits and can be a conduit for compromise, and users must be informed of these.

Third, we need to reorient our fundamental view of cyber hygiene. One area that can serve as a model is Operational Security (OPSEC), a methodology developed by the US military during the Vietnam War to protect critical information from getting into the hands of the adversary. OPSEC helps users assess which information is critical based on what it could reveal, and then instructs users on ways to protect it.

Some of these principles are readily applicable in areas such as election security, where the US military is already training state and local officials. We can apply the same process for our cyber safety, moving away from following broad cyber hygiene guidelines to focused practices designed to protect critical personal information.

Finally, we must stop doling out cyber hygiene advice without measuring who needs it or how much of it they need. Recent user research has developed a Cyber Hygiene Inventory, a 20-question survey that measures different facets of user cyber hygiene and provides users with a 0–100 cyber hygiene score. The score can be used as a baseline to assess how much cyber hygiene users across an organization or even a region need and track how well they have progressed towards acquiring it.

If the last presidential election taught us anything, it is that cyber security is intrinsically linked to the functioning of our democracy. In 2020, let’s resolve to stop asking for more cyber hygiene and start working toward everyone finally having it.


*A version of this post appeared here.

How much cyber hygiene do you need? [Published in Medium]

Cyber hygiene: the term that is evoked whenever there is a threat to our infrastructure, a ransomware attack, or any data breach. It appears to be that elusive thing users never seem to have enough of.

But how does one get this cyber hygiene? Better yet, do we even know what it means? Or how much of it we need?


I searched widely and was surprised to find no answer. In fact, I ended up with far more questions.

Although the term appears thousands of times on various webpages, usually followed by some avowed best-practice suggestions on what users should or shouldn’t do online, none explain where these suggestions came from or whether doing what was suggested actually helps.

Besides, there exist no measurements for any of this. So how does one know they lack cyber hygiene? Or where they lack it? Or whether they have ever achieved it?

Cyber hygiene seems like that ever-elusive elixir every security expert doles out: everyone needs some of it, but no one can ever have enough of it.

I am also to blame for some of this. In early 2015, in the aftermath of the infamous Sony Pictures breach, I was searching for a term that could capture what users needed to do to prevent social engineering attacks. I wasn’t satisfied with terms like “human factors” because they signified a field of study–not what a user should be doing to help protect the enterprise from being breached.

My search led to a speech by Homeland Security Secretary Janet Napolitano who, almost two years earlier, had used the term in the context of developing better user habits. I thought it was perfect. I used it in my press piece and in media interviews. The term caught on.

On the one hand it achieved my goal–drawing attention to what users had to do, but on the other, it helped cloak the problem. Soon the lack of Cyber Hygiene became the catch-all term used to blame anyone who didn’t do something–usually something that was defined after a successful breach.

Feeling responsible, I set about developing a quantitative metric for measuring cyber hygiene. My goal was to define what we meant by user cyber hygiene (and what we didn’t), identify its underlying parts, and create a self-report questionnaire for measuring it–so we could tell who has it, who lacks it, what they lack, and by how much. Among those helping me were CISOs, technologists, graduate students, and a team of top-notch researchers from Singapore.

Over the course of a year and a half, I conducted a series of research studies beginning with interviews of CISOs, security experts, students, and industry professionals, followed by surveys of students, CISOs, employees of a federal government agency, and general Internet users. At each stage, the survey tool, which began at around 80-100 questions, was tested, refined, reduced, and retested. It was also put through various quantitative tests, from multi-dimensional scaling (MDS) and cluster analysis to confirmatory factor analysis and various validity checks.

The final outcome of all this was a 20-question Cyber Hygiene Inventory (CHI)© that quantitatively assesses user cyber hygiene across five dimensions. The dimensions, uncovered through the analytical approach, fit the acronym SAFETY: S signifies Storage and Device Hygiene, A stands for Authentication and Credentials, F for Facebook and Social Media, E for Email and Messaging, T for Transmission, and Y refers to You, the user.

The overall scale nets a possible CHI range of 0-100, with higher numbers indicating better cyber hygiene. The CHI score provides an instant snapshot of how much cyber hygiene each user possesses. Dig deeper and you get a breakdown of their cyber hygiene within each of the five categories, helping pinpoint where the user is lacking and where improvements are necessary. Furthermore, by comparing CHI across users or groups, you learn exactly how well an employee or group is doing relative to others in an organization (or across an entire region or sector).
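To make the arithmetic concrete, here is a hypothetical scoring sketch in Python. The actual CHI items and weights are not reproduced here; the sketch simply assumes 20 five-point items, four per SAFETY dimension, linearly rescaled so each dimension contributes 0-20 and the total spans 0-100.

```python
# Illustrative CHI-style scoring: 20 items rated 1-5, four per dimension.
# These dimension labels and the even weighting are assumptions for the sketch.
DIMENSIONS = ["Storage", "Authentication", "Social Media", "Email", "Transmission"]

def score_chi(responses):
    """responses: dict mapping each dimension to a list of four 1-5 ratings."""
    per_dim = {}
    for dim in DIMENSIONS:
        items = responses[dim]
        assert len(items) == 4 and all(1 <= r <= 5 for r in items)
        # Raw sum spans 4..20; rescale to 0..20 so five dimensions total 0..100.
        per_dim[dim] = (sum(items) - 4) * 1.25
    return sum(per_dim.values()), per_dim

total, breakdown = score_chi({
    "Storage": [5, 4, 4, 5],
    "Authentication": [2, 2, 2, 2],   # the weakest area flags itself
    "Social Media": [4, 4, 3, 4],
    "Email": [5, 5, 4, 4],
    "Transmission": [3, 3, 3, 4],
})
print(total)                          # 65.0
print(breakdown["Authentication"])    # 5.0
```

The per-dimension breakdown is what makes the score actionable: here an overall 65 hides an Authentication sub-score of 5 out of 20, telling you exactly where the intervention belongs.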

The CHI has enormous potential–from providing quantitative insights into cyber hygiene levels to helping pinpoint what is lacking, where, and by how much. For organizations with a defined cyber risk assessment program (such as those implementing the NIST Cybersecurity Framework), the CHI helps develop a more accurate user risk profile, so they can better align their resources and implement pointed interventions that improve their overall risk posture. For other organizations, the CHI provides a benchmark understanding of where they stand–a first step towards developing a user risk profile.

Now rather than blaming everyone and asking them to get cyber hygiene, or worse yet, saying cyber hygiene has been achieved because someone passed a phishing penetration test, we can know exactly how much cyber hygiene users actually possess and what they need to work on–so as to improve their own and the organization’s overall cyber resilience.

You can read more about the CHI by clicking here: LINK

© Arun Vishwanath, 2019

*A version of this post appeared here.

The troubling implications of weaponizing the Internet [Published in Washington Post]


Cyberwarfare suddenly went public late last month.

Multiple media outlets reported that President Trump had authorized U.S. Cyber Command to conduct a cyberstrike on Iran. Obviously, this isn’t the first such attack by a nation, or even by the United States, on another — the Russians, Chinese and North Koreans have their digital fingerprints on all manner of attacks here, and the U.S. government recently reportedly conducted retaliatory attacks on Russia’s Internet Research Agency for misinformation campaigns during the 2016 presidential election.

And, Iran, unarguably, makes for a deserving target: Iranian hackers were behind the 2016 incursion on the Bowman Avenue dam in New York and the massive ransomware attack that in March 2018 crippled all of Atlanta’s city government systems, and they are likely behind ransomware attacks on city government systems in Greenville, N.C., and Baltimore.

But this attack heralds a new age of Internet warfare — a likely outcome of the elevated role of U.S. Cyber Command under National Security Adviser John Bolton, who has been hinting at such a cyber offensive for a while — and is a harbinger of much more to come.

Though many previous attacks — such as the now well-known 2010 Stuxnet malware purportedly developed by U.S. and Israeli intelligence and used to damage systems controlling Iran’s fledgling nuclear program — have been widely reported on as acts of espionage, they were only accidentally discovered by security companies, never confirmed by either nation.

In contrast, this time multiple administration officials, albeit unofficially, confirmed the strike, after key White House officials such as Bolton have openly espoused the need for offensive cyberattacks, setting the stage for such actions.

So if the United States did launch this attack — and all indicators, including Iran’s telecom minister claiming that the attacks occurred but were unsuccessful, suggest that is the case — then this is a paradigm shift in the use of the Internet as an instrument of war, with likely significant consequences.

For one thing, the United States has more targets than most nations — targets that could be subject to retaliation for an attack that the government admits to carrying out. Compared to many other nations, especially adversaries such as Iran, the U.S. has more computers, more mobile and connected devices, more websites and more infrastructure that is reliant on the Internet. We also have more users going online for all manner of activities, ranging from everyday communications to commercial transactions, health care management, and government operations. Much of this is exposed and vulnerable. For instance, reports from the Government Accountability Office point to thousands of vulnerabilities that remain in federal government systems, and there are many more unaccounted-for weaknesses in various state, local and corporate systems throughout the nation, which we often only learn about after a major breach.

Social-engineering attacks — phishing via email, social media, mobile and messaging — that target users directly continue to grow in intensity and sophistication. Not only is U.S. exposure to such attacks significantly greater, because we have many more users, but we also have not found an effective defense against them.

Another problem is that the attack tools developed by our intelligence agencies tend to become sought-after targets for other nations that don’t have the technical depth to develop their own. This has been the case with past tools, such as EternalBlue, developed by the National Security Agency, which was stolen and leaked by a hacker group and subsequently used by North Korean hackers to create WannaCry — the massive ransomware attack that crippled millions of computers in more than 150 nations in a matter of hours. That desire to match U.S. capabilities will only grow stronger after an officially confirmed attack.

After an incident like this one is made public, nations often become increasingly paranoid and engage in riskier actions to protect against attacks. For instance, shortly after the SEALs killed Bin Laden in Pakistan, the Pakistani military began hiding its nuclear arsenal in unguarded delivery vans in congested civilian areas, all in an attempt to avoid detection by our intelligence agencies. If Iran fears another cyberattack, it could simply stop using computing technology in critical areas such as protecting covert nuclear equipment, which could significantly jeopardize its safety and our ability to effectively monitor it.

Even without open cyberattacks, the United States already tends to be a convenient scapegoat for adversarial regimes wanting to distract attention away from their shortcomings. For instance, recently Venezuela’s embattled president Nicolás Maduro blamed a five-day nationwide power blackout caused by a woefully underfunded electric grid on American cyberwarfare.

Open cyberwarfare will also have a chilling effect on the continued development and use of the Internet. Already, some nations are refusing to deploy technologies developed by certain nations, while some others are attempting to develop their own software, operating systems and networks. This attack could also draw investment away from developing consumer technologies to designing cyber weapons, which will lead to a virtual arms race, with nations creating proprietary computing systems, forming closed communication networks and alliances — in essence, forming a Digital Iron Curtain.

Before things get that carried away, the world should agree that the Internet should not be used as a battlefield.

This may sound pacifistic, even far-fetched. But email, social media, search engines, even messaging platforms work better when more people use and contribute to them. As the Internet’s use worldwide has increased, so have the fortunes of the American public — who have helmed many of the virtual businesses and products that have shaped the 21st century.

The Internet is far too important to pull into warfare — not just for billions of people all over the world, but especially for Americans. The potential dangers of allowing open cyberwarfare are already clear enough. Nations shouldn’t wait until future attacks make them even clearer before they act.


*A version of this post appeared as an op-ed in the Washington Post and other publications. 

Why smartphones are more susceptible to social attacks [Published in 2019 Verizon DBIR]


Research points to users being significantly more susceptible to social attacks they receive on mobile devices. This is the case for email-based spear phishing, spoofing attacks that attempt to mimic legitimate webpages, as well as attacks via social media. [1], [2], [3]

The reasons for this stem from the design of mobile devices and how users interact with them. In hardware terms, mobile devices have relatively limited screen sizes that restrict what can be accessed and viewed clearly. Most smartphones also limit the ability to view multiple pages side-by-side, and navigating pages and apps necessitates toggling between them–all of which makes it tedious for users to check the veracity of emails and requests while on mobile.

Mobile OS and apps also restrict the availability of information often necessary for verifying whether an email or webpage is fraudulent. For instance, many mobile browsers limit users’ ability to assess the quality of a website’s SSL certificate. Likewise, many mobile email apps limit which aspects of the email header are visible and whether the email-source information is even accessible. Mobile software also enhances the prominence of GUI elements that foster action–accept, reply, send, like, and such–which makes it easier for users to respond to a request. Thus, on the one hand, the hardware and software on mobile devices restrict the quality of information that is available, while on the other they make it easier for users to make snap decisions.
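As an illustration of the kind of email-source information that desktop clients expose but many mobile apps hide, here is a short Python sketch using the standard library's email parser. The message, domains, and addresses are hypothetical; the point is the From/Reply-To mismatch, a common spear phishing tell that a user can only spot if the client shows the headers.

```python
from email import message_from_string

# A hypothetical raw message (real clients fetch this over IMAP/POP).
raw = """\
Received: from mail.attacker.example (unknown [203.0.113.7])
From: "IT Helpdesk" <support@yourcompany.example>
Reply-To: credentials@attacker.example
Subject: Password reset required
To: user@yourcompany.example

Click the link below to keep your account active.
"""

msg = message_from_string(raw)

# The display name looks legitimate, but Reply-To silently routes any answer
# elsewhere: visible in a desktop header view, hidden in many mobile apps.
print(msg["From"])       # "IT Helpdesk" <support@yourcompany.example>
print(msg["Reply-To"])   # credentials@attacker.example
mismatch = msg["Reply-To"] not in msg["From"]
print(mismatch)          # True
```

On a phone, most mail apps render only the friendly display name ("IT Helpdesk"), which is exactly the field the attacker controls.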

The final nail is driven by how people use mobile devices. Users often interact with their phones while walking, talking, driving, and doing all manner of other activities that interfere with their ability to pay careful attention to incoming information. On-screen notifications that let users respond to incoming requests, often without even navigating back to the application from which the request emanates, further increase the likelihood that already cognitively constrained users respond reactively.

Thus, the confluence of design and how users interact with mobile devices makes it easier for users to make snap, often uninformed decisions–which significantly increases their susceptibility to social attacks on mobile devices.

[1] Vishwanath, A. (2016). Mobile device affordance: Explicating how smartphones influence the outcome of phishing attacks. Computers in Human Behavior, 63, 198-207.

[2] Vishwanath, A. (2017). Getting phished on social media. Decision Support Systems, 103, 70-81.

[3] Vishwanath, A., Harrison, B., & Ng, Y. J. (2018). Suspicion, cognition, and automaticity model of phishing susceptibility. Communication Research, 45(8), 1146-1166.

Why do so many people fall for fake profiles online? [Published in The Conversation]

The first step in conducting online propaganda efforts and misinformation campaigns is almost always a fake social media profile. Phony profiles for nonexistent people worm their way into the social networks of real people, where they can spread their falsehoods. But neither social media companies nor technological innovations offer reliable ways to identify and remove social media profiles that don’t represent actual, authentic people.

It might sound positive that over six months in late 2017 and early 2018, Facebook detected and suspended some 1.3 billion fake accounts. But an estimated 3 to 4 percent of the accounts that remain, or approximately 66 million to 88 million profiles, are also fake but haven’t yet been detected. Likewise, an estimated 9 to 15 percent of Twitter’s 336 million accounts are fake.

Fake profiles aren’t just on Facebook and Twitter, and they’re not only targeting people in the U.S. In December 2017, German intelligence officials warned that Chinese agents using fake LinkedIn profiles were targeting more than 10,000 German government employees. And in mid-August, the Israeli military reported that Hamas was using fake profiles on Facebook, Instagram and WhatsApp to entrap Israeli soldiers into downloading malicious software.

Although social media companies have begun hiring more people and using artificial intelligence to detect fake profiles, that won’t be enough to review every profile in time to stop their misuse. As my research explores, the problem isn’t actually that people – and algorithms – create fake profiles online. What’s really wrong is that other people fall for them.

My research into why so many users have trouble spotting fake profiles has identified some ways people could get better at identifying phony accounts – and highlights some places technology companies could help.

People fall for fake profiles

To understand social media users’ thought processes, I created fake profiles on Facebook and sent out friend requests to 141 students in a large university. Each of the fake profiles varied in some way – such as having many or few fake friends, or whether there was a profile photo. The idea was to figure out whether one or another type of profile was most successful in getting accepted as a connection by real users – and then surveying the hoodwinked people to find out how it happened.

I found that only 30 percent of the targeted people rejected the request from a fake person. When surveyed two weeks later, 52 percent of users were still considering approving the request. Nearly one in five – 18 percent – had accepted the request right away. Of those who accepted it, 15 percent had responded to inquiries from the fake profile with personal information such as their home address, their student identification number, and their availability for a part-time internship. Another 40 percent of them were considering revealing private data.

But why?

When I interviewed the real people my fake profiles had targeted, the most important thing I found was that users fundamentally believe there is a person behind each profile. People told me they had thought the profile belonged to someone they knew, or possibly someone a friend knew. Not one person ever suspected the profile was a complete fabrication, expressly created to deceive them. Mistakenly thinking each friend request has come from a real person may cause people to accept friend requests simply to be polite and not hurt someone else’s feelings – even if they’re not sure they know the person.

In addition, almost all social media users decide whether to accept a connection based on a few key elements in the requester’s profile – chiefly how many friends the person has and how many mutual connections there are. I found that people who already have many connections are even less discerning, approving almost every request that comes in. So even a brand-new profile nets some victims. And with every new connection, the fake profile appears more realistic and has more mutual friends with others. This cascade of victims is how fake profiles acquire legitimacy and become widespread.

The spread can be fast because most social media sites are designed to keep users coming back, habitually checking notifications and responding immediately to connection requests. That tendency is even more pronounced on smartphones – which may explain why users accessing social media on smartphones are significantly more likely to accept fake profile requests than desktop or laptop computer users.

Illusions of safety

And users may think they’re safer than they actually are, wrongly assuming that a platform’s privacy settings will protect them from fake profiles. For instance, many users told me they believe that Facebook’s controls for granting differing access to friends versus others also protect them from fakers. Likewise, many LinkedIn users also told me they believe that because they post only professional information, the potential consequences for accepting rogue connections on it are limited.

But that’s a flawed assumption: hackers can use information gleaned from any platform. For instance, simply knowing from LinkedIn that someone works at a particular business helps them craft emails to that person or others at the company. Furthermore, users who carelessly accept requests assuming their privacy controls protect them imperil other connections who haven’t set their controls as high.

Seeking solutions

Using social media safely means learning how to spot fake profiles and use privacy settings properly. There are numerous online sources for advice – including platforms’ own help pages. But too often it’s left to users to inform themselves, usually after they’ve already become victims of a social media scam – which always begins with accepting a fake request.

Adults should learn – and teach children – how to examine connection requests carefully in order to protect their devices, profiles and posts from prying eyes, and themselves from being maliciously manipulated. That includes reviewing connection requests during distraction-free periods of the day and using a computer rather than a smartphone to check out potential connections. It also involves identifying which of their actual friends tend to accept almost every friend request from anyone, making them weak links in the social network.

These are places social media platform companies can help. They’re already creating mechanisms to track app usage and to pause notifications, helping people avoid being inundated or needing to constantly react. That’s a good start – but they could do more.

For instance, social media sites could show users indicators of how many of their connections are inactive for long periods, helping people purge their friend networks from time to time. They could also show which connections have suddenly acquired large numbers of friends, and which ones accept unusually high percentages of friend requests.

Social media companies need to do more to help users identify and report potentially fake profiles, augmenting their own staff and automated efforts. Social media sites also need to communicate with each other. Many fake profiles are reused across different social networks. But if Facebook blocks a faker, Twitter may not. When one site blocks a profile, it should send key information – such as the profile’s name and email address – to other platforms so they can investigate and potentially block the fraud there too.

[A version of this article appeared on The Conversation]