Cybersecurity Musings

“Users should use a range of letters, numbers, and special characters in their passwords and change them every 90 days.” If you are in IT, you have likely implemented this security policy. And if you are a user, you have likely endured it.

The source of this best practice is a NIST publication by Burr, Dodson, and Polk (2004)[i], which Microsoft and others widely publicized and implemented.[ii] But the practice has critical flaws: it forces users to come up with difficult passwords so often that they end up reusing passwords across services, and it makes password reset emails common—so when a phishing email comes in asking to reset a password, users are far more likely to comply. Recognizing this, NIST reversed the policy in 2017, but by then IT managers all over the world had blindly followed the best practice for more than a decade.

Cyber hygiene practice suggestions such as this, however, do not end here. There are many more. At the broad end are suggestions such as “develop a process for software installation for end users” or the ever relevant “educate your users on good cyber behavior.” At the specific end are ideas such as “always use a VPN when connecting to networks,” “always look for SSL (lock) icons on webpages,” “always look for source headers in emails to find out who is sending you an email,” “always use a password vault,” “always use a good virus protection program,” and “always apply patches and keep your system software updated.” All follow a familiar pattern, albeit with varying levels of specificity: they expect the user to blindly perform an action, all the time, when online.

But are these blanket suggestions really appropriate? Are they even effective, let alone necessary to do in all cases, across all organizations, by every Internet user around the world?

Answering such questions might appear unnecessary, but there is a cost involved in asking computer users to check various parts of an email’s header for each email they receive, to use a VPN, or to manage their passwords in vaults. The costs are not just in their time but also in the technical IT resources that go into supporting such practices, not to mention the aforementioned issue of users becoming habituated to flawed practices, which could increase their vulnerability to cyber compromise.

Whenever such criticisms are raised, cyber security experts resort to conceptual analogies, drawing parallels between cyber hygiene best practices and personal hygiene, to justify their suggestions. The usual argument runs along the lines of “just like washing hands, brushing teeth, or regularly taking multivitamins, users should do this,” and besides, “just like personal hygiene, there is no real harm in following cyber hygiene best practice guidelines.”

But if we have learnt anything from research on public health, it is that not all suggestions are good. This is the lesson from the widespread intake of multivitamin pills as well. While most people believe vitamins are necessary or that there is no harm in taking them, medical research disagrees. After reviewing multiple large-scale tracking studies, the medical community concluded that vitamins have little to no effect on reducing heart disease, cancer, cognitive decline, or memory loss. In fact, some, such as vitamin E and beta-carotene supplements, are downright harmful and reduce life expectancy instead of improving it.[iii]

Of course, there are exceptional times where vitamins are good or even necessary. Certain people—pregnant women, people living in certain regions, people suffering certain health ailments—might need a course of vitamins.[iv] These conclusions are supported by research and are based on a case-by-case assessment of the person’s needs.

The same is true for cyber hygiene best practices. Not all work, but some do. What works—and the specific instances in which it works: organizational type, use environment, use case, and user type—needs to be empirically determined and evaluated for need and contextual adequacy. Doing so is far better than blindly implementing hygiene practices on the advice of sundry sources, without assessing their applicability, only to realize years later that it was not only a wasted effort but that it also made the organization more vulnerable to cyber-attacks.

This paper presents a better approach. It begins by examining the basic concept of cyber hygiene, a term that is widely used but poorly understood and conceptualized. Next, the paper traces the roots of the concept of cyber hygiene and discusses the pitfalls of comparing it to personal hygiene. Following this, the paper presents a recently developed measurement tool called the Cyber Hygiene Inventory (CHI) and discusses how it can serve as a framework for developing need-based cyber hygiene practices.

What is cyber hygiene?

In early 2015, in the aftermath of the Sony Pictures Entertainment hack, while writing a media article on how to prevent cyber breaches, I was searching for a term that captured what online users could do to better protect organizations from such attacks. My search led me to a 2013 Wilson Center speech by then Homeland Security Secretary Janet Napolitano, who had used the term “cyber hygiene” in the context of cyber habits.[v] I thought the term was perfect because it helped drive home the message that protecting the Internet was every user’s personal responsibility. I used the term in my article[vi] and in many others, with one local newscaster during an interview even commenting on the term’s simplicity and catchiness.

Thanks to its appeal, today the term is so common that a keyword search on Google returns over 33 million pages with the phrase cyber hygiene. It has appeared in public policy documents, military doctrines, congressional testimonies, media articles, research papers, and websites. All subscribe to some definition of what cyber hygiene entails and espouse all manner of best practice guidelines. Some of these guidelines target adolescents, others are for employees, some others focus on IT professionals, and still others on vulnerable populations.

But while there are many suggestions on what constitutes cyber hygiene, there is little clarity on what it does or does not entail and by whom it should be performed. This is a problem across the globe. In comparing cyber hygiene practices across member nations, the European Union Agency for Network and Information Security (ENISA) found that there was no single standard or commonly agreed upon approach to it. The report also concluded that cyber hygiene should be viewed in the same manner as personal hygiene in order to ensure the organization’s health was in optimum condition (ENISA, December 2016; https://www.enisa.europa.eu/publications/cyber-hygiene/at_download/fullReport). Thus, there is no clarity on what cyber hygiene means or entails other than the view that it is something akin to personal hygiene. But while it’s unarguable that cyber hygiene is important, is it really appropriate to think of it in terms of personal hygiene?

Is Cyber Hygiene Analogous to Personal Hygiene?

The metaphorical construction of cyber hygiene as similar to personal hygiene does not stop at its definition. It even influences how cyber safety solutions are framed. For instance, many cyber security websites use examples of hand washing and multivitamin use to drive home cyber safety suggestions, such as applying virus updates and patches. Some sites go even further. One in particular, “Cyber Security is Cyber Health,”[vii] equates poor heredity in people to the use of obsolete software; the lack of vaccinations to the lack of technical safeguards; and promiscuous sex to visits to unreliable websites. It makes similar conceptual leaps linking pregnancy, fetal ultrasound, newborns, even psychological health, with some sundry facet of cyber hygiene.

Thinking in this manner adversely influences the solutions we develop. Take the case of airplane technology. Since antiquity, our mental models of flying have been based on avian flight because the flying capabilities of birds were visible and self-evident. From the ancient Greek fables of Daedalus and Icarus mythologizing the use of bird-like wings for human flight to 20th-century attempts at fabricating aircraft wings that flapped, this analogical thinking stymied the development of aircraft technology for over two millennia. Figure 1 is the 1857 patent drawing of pioneering aviator Jean Marie Le Bris’s failed Artificial Albatross.[viii] It shows how the avian model proved to be a proverbial albatross in aircraft design. Thus, the analogies we use for thinking about cyber hygiene matter.

Figure 1. Patent drawing of pioneering aviator Jean Marie Le Bris’s Artificial Albatross

There is another reason for unbridling cyber hygiene from our mental models of personal hygiene. Personal hygiene does not have a downside. Washing hands or brushing teeth, unless done at an obsessive level, does not cause problems. But using a certain app or operating system in the belief that it is protective could enhance risk, especially if we trust such systems. For instance, telling people that “an SSL website is secure” is just bad policy, not only because many fraudulent websites also have legitimate SSL certificates but also because users conflate security with safety, wrongly thinking secure sites are authentic sites.[ix] Making such wrongful thinking even more problematic is the fact that more and more phishing websites—two out of three according to a recent Anti-Phishing Working Group (APWG) report—have SSL certificates.[x] Users need not even compulsively enact behaviors based on such flawed beliefs: all it takes is for them to enter their credentials on what they presume is an encrypted page on one of these phishing websites for a breach to occur.

The same problem plagues us if we place too much credence in a solution, again, something we do not really think about in our physical hygiene. Believing that a virus protection solution is protective, or that every one of its updates that appears as a notification is necessary, makes users blindly apply patches. Unfortunately, many social engineering attacks mimic software and virus protection updates, which users unwittingly download and apply because they have been conditioned to behave as such. In this way, cyber hygiene practices can make users more rather than less vulnerable.

But there is yet another important difference between the personal and cyber realms, stemming from what they protect. Personal hygiene protects the human body from chance infections through routine preventative actions. The human body is, however, already resilient. Even without many modern hygiene solutions such as hand soap, humans can ward off many threats. The central reason is the defenses we have evolved over millennia against most germs and viruses. Our sensory organs have evolved follicles, hair, nails, eyelashes, cilia, and mucous membranes that trap most intrusions. Our internal organs have likewise evolved complex immune responses that work independently of our need to manage or control them. These internal and external defenses work in tandem, and independently when needed, and are further protected by the human brain (such as when someone impulsively swats a stinging bug). Thanks to these complex systems, most of us can live relatively long disease-free lives with minimal need for modern medicine.

In contrast, while technology is collectively capable of highly sophisticated computational tasks, its core components are dumb circuits, built without any effective protection and often flawed at their very core. Take computer processing chips and memory cells, the computer’s internal organs, for example. Last year, the identification of the Meltdown and Spectre vulnerabilities demonstrated that nearly every computer chip manufactured in the past two decades has critical flaws in its algorithmic structures, rendering it vulnerable to various exploits. Similarly, dynamic memory cells, or D-RAMs, are vulnerable to leaking their electrical charges as they interact—called the Rowhammer effect[xi]—which can be exploited in a D-RAM attack to gain root access to systems.

The same is the case for the “sensory organs” of computing devices: touchpads, microphones, cameras, and input devices. Each is easily corruptible using simple keyloggers and other programs. Layered on these are many apps, all using different schemes and privileges that interface with the system’s internal organs. Some of these apps are programmed poorly, others are rogue programs built to effect compromises by co-opting their privileges, while still others can be manipulated by rogue programmers using malware that can infect everything from the sensory organs of the computer all the way to its internals. Finally, we have users with varying skills who utilize these systems and the programs on them in a multitude of ways.

What makes things particularly different is that a single computing attack can cripple multiple layers of computing without needing to evolve a separate compromise for each layer. As a case in point, a single phishing email with a malware payload can trick users, circumvent many end-point security protections, and enter the core of a system to gain a foothold. In contrast, even influenza, one of the most lethal and persistent biological viruses, which kills over 600,000 people globally each year, requires a complex series of interactions; over two-thirds of deaths from it are due to indirect causes such as organ failure.[xii]

Thanks to all this, human hygiene practices can accommodate a wide variance in outcomes. In contrast, errors in individual cyber hygiene practices produce a geometric increase in overall risk, because the system’s risks compound at every iteration. For instance, a 10 percent failure rate in hand-washing does little to increase infection from most diseases. In contrast, a 10 percent failure rate in SSL certificates could lead to enhanced risk by itself. If these certificates are used in email-based phishing attacks with a 10 percent relevance rate (users for whom the content is relevant), on an email network that allowed 10 percent of these emails through, with just 10 percent of the users clicking and enabling the malware, the probability of a breach goes up to 34 percent.[1] These are conservative probabilities, because in actuality 30 to 70 percent of phishing emails are usually opened (Caputo et al. 2013)[xiii] and there are many rogue SSL certificates and pages on the Internet.[xiv] Thus, each potential failure magnifies the overall failure rate, something that seldom occurs in human beings because of the way evolution has helped us defend ourselves.
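To make this arithmetic concrete, here is a minimal sketch in Python (the document itself contains no code) of the dependent-probability calculation described in footnote [1]; the 10 percent rates are the illustrative figures from the example above, not empirical estimates.

```python
# Illustrative only: compounding per-layer failure rates into an overall
# breach probability, following footnote [1]: P(breach) = 1 - 0.90^k,
# where k is the number of vulnerable layers.

def breach_probability(failure_rates):
    """Probability that at least one layer fails, assuming independent layers."""
    p_no_failure = 1.0
    for rate in failure_rates:
        p_no_failure *= (1.0 - rate)  # this layer holds with probability 1 - rate
    return 1.0 - p_no_failure

# The four layers from the example: rogue SSL certificates, content
# relevance, the email gateway letting messages through, and user clicks.
layers = [0.10, 0.10, 0.10, 0.10]
print(f"{breach_probability(layers):.0%}")  # -> 34%
```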

What is clear from this is that personal hygiene and cyber hygiene are not analogous. The differences stem from the nature of computing, online threats, and users—all of which cumulatively increase the risk of a breach. Because of this, we cannot afford the same leeway with cyber hygiene that we can with personal hygiene. We need greater precision in how we define cyber hygiene and identify policies.

So what is user cyber hygiene?

Until recently, there have been few academic attempts at defining cyber hygiene. By comparing various definitions, conducting interviews with IT personnel, CSOs, and CIOs, and using a quantitative scale development approach, Vishwanath et al. (2019) developed a conceptual definition and a multi-item inventory for measuring cyber hygiene. They define cyber hygiene as the cyber security practices that online consumers should engage in to protect the safety and integrity of their personal information on their Internet-enabled devices from being compromised in a cyber-attack (Vishwanath et al. 2019).[xv]

At the operational or measurement end, user cyber hygiene comes from the confluence of four user-centric factors: awareness, knowledge, technical capacity, and the enactment of cyber security practices. Awareness and knowledge make up the cognitive factors of familiarity and understanding. Technical capacity pertains to the availability of technologies where necessary. Finally, enactment makes up the behavioral dimension and is the utilization factor. Effective user cyber hygiene occurs at the confluence of these four factors: when users, aware of what needs to be done, are knowledgeable about it, have the required technologies and know-how to achieve it, and enact it as and when necessary.

Vishwanath et al. (2019) also developed a framework that can be applied across multiple organizational and user environments. It is organized using a multi-dimensional inventory called the Cyber Hygiene Inventory (CHI).[xvi] The CHI comprises 20 items, or questions, that tap into five dimensions of user cyber hygiene. These dimensions are organized using the acronym SAFETY, where S pertains to Storage and Device hygiene, A signifies Authentication and Credential hygiene, F signifies Facebook and Social Media hygiene, E pertains to Email and Messaging hygiene, T stands for Transmission hygiene, and Y stands for “you,” signifying the user’s responsibility in ensuring cyber hygiene. Each item or question in the inventory measures a best practice or a cyber safety related thought or action. While the inventory has a finite set of 20 items, it allows for the addition of questions that are often necessary to capture contextual or organization-specific practices.

Before delving into the details of the inventory, some of its facets need highlighting. First, the framework provided by the CHI is broad and technology agnostic, which allows it to be applied across any organization, user group, or even the residents of an area. Second, having a broad inventory makes it possible to use it across platforms, technologies, and applications, and over different points in time, even as platforms and functionalities evolve. Third, we can measure most questions in the CHI using standard survey approaches. Fourth, the CHI accommodates subjective and objective measurements. While knowledge, capacity, and behavioral intent can all be measured subjectively, we can also measure them objectively: using a knowledge test, by taking an inventory of technologies available in the organization, and by measuring actual behavior observationally. Using a combination of measurement approaches has the added advantage of eliminating confounds such as method bias from influencing the results. Finally, the CHI includes measures of cognitive and behavioral factors. This is superior to extant approaches such as using pen-testing data or training data, which only capture behavior. Thus, the CHI captures information about users’ cyber hygiene with more granularity, accounts for more user-level influences, and allows for more valid measurement of users’ cyber safety related thoughts and behaviors.

In the ideal case, the CHI can be used to evaluate all four aspects of cyber hygiene—awareness, knowledge, capacity, and enactment—using a 0–5 scale. This gives each aspect a possible range of 0 to 100, making it possible to derive a cumulative score that is easily interpretable and comparable across the inventory’s implementations. Thus, at a minimum, the score can compare awareness against knowledge, know-how, and intent among users within an organization. Using the score comes with all the usual caveats: the score is inherently ordinal but is treated as ratio-level; technical know-how is contingent on IT supplying the technologies; and responses to some enactment frequency questions are limited by the technology, application, and platform. Most of these caveats are familiar to anyone trained in empirical social science research and can be handled through design and analysis.
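As a rough illustration of this scoring scheme, here is a minimal Python sketch. The item-to-dimension assignments and the responses are hypothetical; the actual 20 CHI items are defined in Vishwanath et al. (2019).

```python
# Hypothetical sketch of CHI-style tallying: 20 items, each rated 0-5,
# summed to a 0-100 score for one aspect (e.g., awareness), with
# subtotals for the five SAFETY dimensions. Item groupings are invented.

SAFETY_DIMENSIONS = {
    "Storage & Device":            [0, 1, 2, 3],    # hypothetical item indices
    "Authentication & Credential": [4, 5, 6, 7],
    "Facebook & Social Media":     [8, 9, 10, 11],
    "Email & Messaging":           [12, 13, 14, 15],
    "Transmission":                [16, 17, 18, 19],
}

def chi_score(responses):
    """Sum 20 item responses (each 0-5) into a 0-100 aspect score."""
    assert len(responses) == 20 and all(0 <= r <= 5 for r in responses)
    return sum(responses)

def dimension_scores(responses):
    """Break one aspect's responses down by SAFETY dimension (each 0-20)."""
    return {name: sum(responses[i] for i in items)
            for name, items in SAFETY_DIMENSIONS.items()}

# One respondent's (invented) awareness ratings for the 20 items:
awareness = [4, 3, 5, 4, 2, 3, 3, 4, 1, 2, 2, 3, 5, 4, 4, 3, 2, 3, 4, 5]
print(chi_score(awareness))         # overall awareness score, 0-100
print(dimension_scores(awareness))  # per-dimension subtotals
```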

Thus, the CHI provides a baseline for IT managers, not just for understanding users but also for strategic decisions. Often, IT managers hoping to implement various hygiene solutions need to determine their relative impacts and merits. In such instances, the CHI can help ascertain the strategic merits of an intervention and the value of the different technological solutions they plan to implement. Figure 2 provides an exemplar where 20 items were added to the CHI’s 20 and the overall 40 items were scored on two dimensions: their utility or security impact and the perceived ease of using the technology, two fundamental dimensions that information systems models such as the Technology Acceptance Model (Davis et al., 1989)[xvii] have shown to predict the adoption and use of technology within organizations.

Responses from a sample of IT managers within an organization were used to develop the two-dimensional map in the figure. The map arrays the overall data into four quadrants based on the utility and ease of use of each hygiene practice: High security significance/utility, Low enactment difficulty; Low security significance/utility, Low enactment difficulty; Low security significance/utility, High enactment difficulty; and High security significance/utility, High enactment difficulty. Based on the map, IT managers can not only quantify the perceptual importance of each cyber hygiene practice and the technology most closely associated with it, but also understand the relative effort, in terms of resources, and the expected outcomes of each. They can thus strategically choose the cyber hygiene practice and technology they plan to implement, as the sketch following Figure 2 illustrates.

Figure 2. Sample application of the CHI framework to make strategic decisions on organizational cyber hygiene priorities
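The quadrant assignment itself is simple to compute. Below is a hypothetical Python sketch: the practice names, ratings, and the midpoint threshold are all invented for illustration, not taken from the figure’s data.

```python
# Hypothetical sketch: placing hygiene practices into Figure 2-style
# quadrants from mean ratings of security utility and enactment difficulty.
# Practices, ratings, and the midpoint threshold are illustrative only.

MIDPOINT = 3.0  # midpoint of an assumed 1-5 rating scale

practices = {
    "Password vault":       {"utility": 4.2, "difficulty": 2.1},
    "Manual header checks": {"utility": 2.0, "difficulty": 4.3},
    "Automatic patching":   {"utility": 4.5, "difficulty": 1.5},
    "Always-on VPN":        {"utility": 2.8, "difficulty": 3.9},
}

def quadrant(scores):
    u = "High" if scores["utility"] >= MIDPOINT else "Low"
    d = "High" if scores["difficulty"] >= MIDPOINT else "Low"
    return f"{u} utility / {d} enactment difficulty"

for name, scores in practices.items():
    print(f"{name}: {quadrant(scores)}")
# e.g. "Automatic patching: High utility / Low enactment difficulty"
```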

The CHI can also be used to track the success of individual interventions and improvements in desired levels of cyber hygiene over time. For this, IT managers can implement the CHI to compare different facets of cyber hygiene—e.g., comparing awareness with utilization at different points in time, such as before and after an intervention; across different groups, e.g., different divisions of the same organization; or across different locations, e.g., one branch of an organization serves as the control while another in a different location serves as the target. The analysis can focus on charting the differences between groups and use the deviation scores, or GAPs, as a metric of hygiene performance (see the sketch after Figure 4). Figures 3 and 4 provide examples of such implementations. Figure 3 charts data from a single organization’s users on their relative levels of awareness, knowledge, and technical capacity across the five SAFETY dimensions of cyber hygiene. Figure 4 tracks the relative impact of training on cyber hygiene across users in an organization where the CHI was implemented a month before and a month after training.

Figure 3. Application of the CHI to assess relative gaps in perceived awareness, knowledge, and capacity

Figure 4. Application of the CHI to assess training effects in an organization.
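A GAP computation of the kind just described is a simple deviation score. The sketch below, with invented numbers, compares per-dimension CHI scores before and after a training intervention; the same arithmetic applies to awareness-versus-enactment or control-versus-target comparisons.

```python
# Hypothetical GAP (deviation score) calculation: per-dimension CHI scores
# before and after training. All numbers are invented for illustration.

pre_training  = {"S": 62, "A": 55, "F": 70, "E": 48, "T": 58}
post_training = {"S": 71, "A": 68, "F": 74, "E": 63, "T": 66}

gaps = {dim: post_training[dim] - pre_training[dim] for dim in pre_training}
print(gaps)  # per-dimension improvement, e.g. {'S': 9, 'A': 13, ...}
```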

Advantages of the CHI approach

There is no single metric for cyber hygiene, nor is there any method that achieves what the CHI delivers. Extant approaches to defining cyber hygiene and creating best practices—if organizations even engage in them—remain ad hoc, with most organizations adopting practice suggestions from industry groups and other sources. The CHI serves as a baseline for understanding and developing cyber hygiene practices within organizations. It also helps evaluate, develop, assess, track, and quantify cyber hygiene and ensure improvements over time.

The same is the case with the measurement of hygiene. Most organizations do not even measure user cyber hygiene; others use proprietary approaches with underlying algorithms that remain unknown and difficult for others to use or assess. This is the case with the U.S. Department of Homeland Security’s Continuous Diagnostics and Mitigation (CDM) program, which gives participating federal government organizations a cyber risk and hygiene score card. The stated reason for the lack of transparency in the program’s scoring method is the fear that it would end up in the hands of hackers.

That said, it is safe to say that at the user end, the only metrics that exist come from training and pen-testing. Both approaches, while appropriate, are wholly inadequate. Most use behavioral measures and fail to account for user cognition—wrongly presuming that user behavior is wholly premised on a priori thought. They also carry unknown amounts of noise stemming from the variance in pen-test approaches: the specifics of the tests, their frequency, their reach, and their timing. This makes it impossible to use these metrics to compare different organizations, let alone rely on them to make judgments about an individual organization’s level of cyber readiness.

In contrast, the CHI provides a transparent approach, where organizations can use and even share their scores across the 20 items without fearing that doing so would expose the organization’s weaknesses to hackers. They could maintain internal records of additional items—such as specific technological safeguards and other practices—that the organization wishes not to reveal. The quantitative metric can also be used to establish a benchmark that can be improved upon as more data is shared across a sector. With more data from across an industry, benchmarks could be established over time, providing a more robust standard for organizations in that sector. Thus, the CHI provides an empirically driven, widely applicable, transparent, quantitative approach for formulating, benchmarking, and tracking user cyber hygiene within organizations.

Concluding thoughts

This paper discussed why drawing parallels between personal hygiene and cyber hygiene is inappropriate: the analogy can stymie the development of solutions and even increase overall user cyber risk. The paper then offered a different methodology and a mechanism for deriving cyber hygiene practice suggestions, one that is not prescriptive but instead empirically calibrated and contextually relevant. While the phrase cyber hygiene appears to have become part of the cybersecurity lexicon, we can still change how we conceptualize it. In the long run, security experts might even consider moving away from the term, replacing it with others such as Operational Security, or “OPSEC,” an area of practice developed by the US military that is more applicable in the security domain and can be applied without resorting to analogical leaps. OPSEC begins with the assumption that we are in an adversarial situation—a fact that is true in the domain of cybersecurity—and focuses on prioritizing information and developing approaches to ensure that those pieces of information stay protected. This shifts the focus away from global actions and their analogues in public health to tactical approaches that are grounded in adversarial defense. By re-conceptualizing how we think about cyber security, we can move away from broad practices to specific actions, and from dictating cyber hygiene practices to focusing instead on protecting critical information—because, after all, that is what the hackers are really after.

 

[1] The dependent probability is computed as 1 - (0.90)^k, where k is the number of layers of vulnerability.

[i] Burr, W. E., Dodson, D. F., & Polk, W. T. (2004). Electronic authentication guideline (NIST Special Publication 800-63 Version 1.0). Gaithersburg: National Institute of Standards and Technology.

 

[ii] Microsoft. (2016, August 31). Best practices for enforcing password policies. Retrieved from https://docs.microsoft.com/en-us/previous-versions/technet-magazine/ff741764(v=msdn.10)?redirectedfrom=MSDN

 

[iii] Is there really any benefit to multivitamins? (n.d.). Johns Hopkins Medicine. Retrieved from https://www.hopkinsmedicine.org/health/wellness-and-prevention/is-there-really-any-benefit-to-multivitamins

Goodman, B. (2014, February 24). Healthy adults shouldn’t take vitamin E, Beta Carotene: Expert panel. MedicineNet. Retrieved from https://www.medicinenet.com/script/main/art.asp?articlekey=176905

 

[iv] Scholl, T. O., & Johnson, W. G. (2000). Folic acid: Influence on the outcome of pregnancy. The American Journal of Clinical Nutrition, 71(5), 1295S-1303S.

 

[v] Spiering, C. (2013, January 24). Janet Napolitano: Internet users need to practice good ‘cyber-hygiene’. Washington Examiner. Retrieved from https://www.washingtonexaminer.com/janet-napolitano-internet-users-need-to-practice-good-cyber-hygiene

 

[vi] Vishwanath, A. (2015, February 24). Before decrying the latest cyberbreach, consider your own cyberhygiene. The Conversation. Retrieved from https://theconversation.com/before-decrying-the-latest-cyberbreach-consider-your-own-cyberhygiene-37834

 

[vii] Cyber security is cyber health. (n.d.). H-X Tech. Retrieved from https://h-xtech.com/blog-healthcare-analogy

 

[viii] Early flying machines. (n.d.). Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Early_flying_machines

 

[ix] Hassold, C. (2017, November 2). Have we conditioned web users to be phished? PhishLabs. Retrieved from https://info.phishlabs.com/blog/have-we-conditioned-web-users-to-be-phished

 

[x] Anti-Phishing Working Group. (2019). Phishing activity trends report: 3rd Quarter 2019 [PDF document]. Retrieved from https://docs.apwg.org/reports/apwg_trends_report_q3_2019.pdf

 

[xi] Kim, Y., Daly, R., Kim, J., Fallin, C., Lee, J. H., Lee, D., Wilkerson, C., Lai, K., & Mutlu, O. (2016). Rowhammer: Reliability analysis and security implications. arXiv preprint arXiv:1603.00747.

 

[xii] Jabr, F. (2017, December 18). How does the flu actually kill people? Scientific American. Retrieved from https://www.scientificamerican.com/article/how-does-the-flu-actually-kill-people/

 

[xiii] Caputo, D. D., Pfleeger, S. L., Freeman, J. D., & Johnson, M. E. (2013). Going spear phishing: Exploring embedded training and awareness. IEEE Security & Privacy, 12(1), 28-38.

 

[xiv] Vishwanath, A. (2018, September 1). Spear phishing has become even more dangerous. CNN. Retrieved from https://www.cnn.com/2018/09/01/opinions/spear-phishing-has-become-even-more-dangerous-opinion-vishwanath/index.html

 

[xv] Vishwanath, A., Neo, L. S., Goh, P., Lee, S., Khader, M., Ong, G., & Chin, J. (2020). Cyber hygiene: The concept, its measure, and its initial tests. Decision Support Systems, 128, 113160.

 

[xvi] Vishwanath, A., Neo, L. S., Goh, P., Lee, S., Khader, M., Ong, G., & Chin, J. (2020). Cyber hygiene: The concept, its measure, and its initial tests. Decision Support Systems, 128, 113160.

 

[xvii] Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003.

 

December 2019, (c) Arun Vishwanath, PhD, MBA; Email: arun@arunvishwanath.us

Keywords: cyber hygiene, science of cyber security, human factors, OPSEC


This month we learned that a US maritime base had to be taken offline for more than 30 hours because of a ransomware attack that interrupted cameras, doors, and critical monitoring systems. It’s not the first such attack, and it’s most definitely not the last.

Following it will be the usual drumbeat: a call for more cyber hygiene. Cyber hygiene was last decade’s elixir for protecting against all cyber incidents, from Ring camera hacks to ransomware. It appeared in congressional testimonies, policy documents, and countless websites—16 million when I last searched.

But, judging from the continuing news of breaches and calls for more of it, it appears we never have enough of it. Or do we?

The answer to this question is actually hard to find, because no website tells you how much cyber hygiene you need, or whether you have enough.

Most begin by comparing cyber hygiene to personal hygiene—the cyber equivalent of washing your hands—to dole out some “always do this” advice—such as always use long, complex passwords (with uppercase and non-alphabetic characters) to ensure cyber safety. Herein lies the problem, and the reason why we haven’t achieved cyber hygiene yet.

The fact is that cyber hygiene is nothing like personal hygiene. Over the centuries, our bodies have evolved outer and inner defenses, from hair and skin to white blood cells. This is why we can fend off all manner of germs, despite the fact that everyone from healthcare professionals to food service workers washes their hands inadequately.

In contrast, the components of computers are dumb circuits, many with flaws and without protections. In 2018, we learnt of defects in every computer chip manufactured in the last two decades, and there are many more vulnerabilities in the external sensory organs of computers (keyboard, camera, microphone) and in applications and operating systems.

Any of these can be compromised using malware deployed in spear phishing emails, and all it takes is a single inadvertent click on the email to cripple an entire corporation. Cyber hygiene doesn’t afford us the same room for error that personal hygiene does.

It gets worse: while there is little bad that can come from hand-washing, blindly following a cyber hygiene best practice can be harmful to your cyber health. For instance, many users are told to look for SSL icons (the green padlock) on their browsers next to a website’s name to assess its veracity, but aren’t told that many phishing websites – two out of three in a recent survey – also possess these icons.

Many such purported best practices are poorly developed, often without considering their real-world use environments. Such was the case with the 2004 NIST guideline advocating complex passwords, which was based on how hard the passwords were for computers to crack rather than how easily people could remember them. Users, constantly bombarded with password change requests, began reusing passwords and became accustomed to such requests—something that hackers mimicked in spearphishing attacks.

The NIST guidelines were reversed in 2017, but by then countless compromises were likely caused by it. We cannot afford another decade of such missteps.

To begin, we have to stop espousing broad cyber hygiene best practice suggestions without testing their need and efficacy in real user environments.

Second, we need to move away from asking people to just do things that keep them safe to explaining why. Be it two-factor authentication or the application of software patches, every best practice has its limits and can be a conduit for compromise, and users must be informed of these.

Third, we need to reorient our fundamental view of cyber hygiene. One area that can serve as a model is Operational Security (OPSEC), a methodology developed by the US military during the Vietnam War to protect critical information from getting into the hands of the adversary. OPSEC helps users assess which information is critical based on what it could reveal, and then instructs users on ways to protect it.

Some of these principles are readily applicable in areas such as election security, where the US military is already training state and local officials. We can apply the same process for our cyber safety, moving away from following broad cyber hygiene guidelines to focused practices designed to protect critical personal information.

Finally, we must stop doling out cyber hygiene advice without measuring who needs it or how much of it they need. Recent user research has developed a Cyber Hygiene Inventory, a 20-question survey that measures different facets of user cyber hygiene and provides users with a 0–100 cyber hygiene score. The score can be used as a baseline to assess how much cyber hygiene users across an organization or even a region need and track how well they have progressed towards acquiring it.

If the last presidential elections taught us anything, it is that cyber security is intrinsically linked to the functioning of our democracy. In 2020, let’s resolve to stop asking for more cyber hygiene and start working towards everyone finally having it.

 

*A version of this post appeared here.

Cyber hygiene: the term that is evoked whenever there is a threat to our infrastructure, a ransomware attack, or any data breach. It appears to be that elusive thing users never seem to have enough of.

But how does one get this cyber hygiene? Better yet, do we even know what it means? Or how much of it we need?

I searched widely and was surprised to find no answer. In fact, I ended up with far more questions.

Because although the term appears thousands of times on various webpages, usually followed by some avowed best practice suggestions on what users should or shouldn’t do online, none explain where these suggestions came from or whether doing what was suggested actually helps.

Besides, there exist no measurements for any of this. So how does one know they lack cyber hygiene? Or where they lack it? Or if they ever achieved it?

Cyber hygiene seems like that ever elusive elixir every security expert doles out: Everyone needs to have some of it, but no one can ever have it.

I am also to blame for some of this. In early 2015, in the aftermath of the infamous Sony Pictures breach, I was searching for a term that could capture what users needed to do to prevent social engineering attacks. I wasn’t satisfied with terms like “human factors” because they signified a field of study–not what a user should be doing to help protect the enterprise from being breached.

My search led to a speech by Homeland Security Secretary Janet Napolitano who, almost two years earlier, had used the term in the context of developing better user habits. I thought it was perfect. I used it in my press piece and in media interviews. The term caught on.

On the one hand it achieved my goal–drawing attention to what users had to do, but on the other, it helped cloak the problem. Soon the lack of Cyber Hygiene became the catch-all term used to blame anyone who didn’t do something–usually something that was defined after a successful breach.

Feeling responsible, I set about developing a quantitative metric for measuring cyber hygiene. My goal was to define what we meant by user cyber hygiene (and what we didn’t), identify its underlying parts, and create a self-report questionnaire for measuring it–so we can tell who has it, who lacks it, what they lack, and by how much. Among those helping me were CISOs, technologists, graduate students, and a team of top-notch researchers from Singapore.

Over the course of a year and a half, I conducted a series of research studies beginning with interviews of CISOs, security experts, students, and industry professionals, followed by surveys of students, CISOs, employees of a federal government agency, and general Internet users. At each stage, the survey tool, which began at around 80-100 questions, was tested, refined, reduced, and retested. It was also put through various quantitative tests, from multi-dimensional scaling (MDS) and cluster analysis to confirmatory factor analysis and various validity checks.

The final outcome of all this was a 20-question Cyber Hygiene Inventory (CHI)© that quantitatively assesses user cyber hygiene across five dimensions. The dimensions, uncovered through the analytical approach, fit the acronym SAFETY. Here the S signifies Storage and Device Hygiene, A stands for Authentication and Credential, F for Facebook and Social Media, E for Email and Messaging, T for Transmission, and Y is a reference to You, the user.

The overall scale nets a possible CHI range of 0-100, with higher numbers indicating better cyber hygiene. The CHI score provides an instant snapshot of how much cyber hygiene each user possesses. Dig deeper and you get a breakdown of their cyber hygiene within each of the five categories, helping pinpoint where the user is lacking and where improvements are necessary. Furthermore, by comparing CHI scores across users or groups, you get to know exactly how well an employee or group is actually doing in their cyber hygiene relative to others in an organization (or across an entire region or sector).
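As a toy illustration of such comparisons, the following Python fragment averages entirely invented CHI scores for two groups; a real analysis would of course use the full inventory data and appropriate statistical tests.

```python
# Hypothetical comparison of CHI scores across two groups. All values are
# invented; real comparisons would use full inventory data and statistics.

group_scores = {
    "Finance":     [72, 65, 80, 58, 69],
    "Engineering": [85, 78, 90, 74, 81],
}

means = {group: sum(s) / len(s) for group, s in group_scores.items()}
print(means)  # e.g. {'Finance': 68.8, 'Engineering': 81.6}
```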

The CHI has enormous potential–from providing quantitative insights into cyber hygiene levels to helping pinpoint what is lacking, where, and by how much. For organizations with a defined cyber risk assessment program (such as those implementing the NIST Cybersecurity Framework), the CHI helps develop a more accurate user risk profile, so they can better align their resources and implement pointed interventions that improve their overall risk posture. For other organizations, the CHI provides a benchmark understanding of where they stand–a first step towards developing a user risk profile.

Now rather than blaming everyone and asking them to get cyber hygiene, or worse yet, saying cyber hygiene has been achieved because someone passed a phishing penetration test, we can know exactly how much cyber hygiene users actually possess and what they need to work on–so as to improve their own and the organization’s overall cyber resilience.

You can read more about the CHI by clicking here: LINK

© Arun Vishwanath, 2019

*A version of this post appeared here.

Cyberwarfare suddenly went public late last month.

Multiple media outlets reported that President Trump had authorized U.S. Cyber Command to conduct a cyberstrike on Iran. Obviously, this isn’t the first such attack by a nation, or even by the United States, on another — the Russians, Chinese and North Koreans have their digital fingerprints on all manner of attacks here, and the U.S. government recently reportedly conducted retaliatory attacks on Russia’s Internet Research Agency for misinformation campaigns during the 2016 presidential election.

And Iran, unarguably, makes for a deserving target: Iranian hackers were behind the 2016 incursion on the Bowman Avenue dam in New York and the massive ransomware attack that in March 2018 crippled all of Atlanta’s city government systems, and they are likely behind ransomware attacks on city government systems in Greenville, N.C., and Baltimore.

But this attack heralds a new age of Internet warfare — a likely outcome of the elevated role of U.S. Cyber Command under National Security Adviser John Bolton, who has been hinting at such a cyber offensive for a while — and is a harbinger of much more to come.

Though many previous attacks — such as the now well-known 2010 Stuxnet malware purportedly developed by U.S. and Israeli intelligence and used to damage systems controlling Iran’s fledgling nuclear program — have been widely reported on as acts of espionage, they were only accidentally discovered by security companies, never confirmed by either nation.

In contrast, this time multiple administration officials, albeit unofficially, confirmed the strike, after key White House officials such as Bolton have openly espoused the need for offensive cyberattacks, setting the stage for such actions.

So if the United States did launch this attack — and all indicators, including Iran’s telecom minister claiming that the attacks occurred but were unsuccessful, suggest that is the case — then this is a paradigm shift in the use of the Internet as an instrument of war, with likely significant consequences.

For one thing, the United States has more targets than most nations — targets that could be subject to retaliation for an attack that the government admits to carrying out. Compared to many other nations, especially adversaries such as Iran, the U.S. has more computers, more mobile and connected devices, more websites and more infrastructure that is reliant on the Internet. We also have more users going online for all manner of activities, ranging from everyday communications to commercial transactions, health care management, and government operations. Much of this is exposed and vulnerable. For instance, reports from the Government Accountability Office point to thousands of vulnerabilities that remain in federal government systems, and there are many more unaccounted-for weaknesses in various state, local and corporate systems throughout the nation, which we often only learn about after a major breach.

Social-engineering attacks — phishing via email, social media, mobile and messaging — that target users directly continue to grow in intensity and sophistication. Not only is U.S. exposure to such attacks significantly greater, because we have many more users, but we also have not found an effective defense against them.

Another problem is that the attack tools developed by our intelligence agencies tend to become sought-after targets for other nations that don’t have the technical depth to develop their own. This has been the case with past tools, such as Eternal Blue, developed by the National Security Agency, which was stolen and leaked by a hacker group and subsequently used by North Korean hackers to create WannaCry — the massive ransomware attack that crippled millions of computers in more than 150 nations in a matter of hours. That desire to match U.S. capabilities will only be worse after an officially confirmed attack.

After an incident like this one is made public, nations often become increasingly paranoid and engage in riskier actions to protect against attacks. For instance, shortly after the SEALs killed Bin Laden in Pakistan, the Pakistani military began hiding its nuclear arsenal in unguarded delivery vans in congested civilian areas, all in an attempt to avoid being detected by our intelligence agencies. If Iran fears another cyberattack, it could simply stop using computing technology in critical areas such as protecting covert nuclear equipment, which could significantly jeopardize the equipment’s safety and our ability to effectively monitor it.

Even without open cyberattacks, the United States already tends to be a convenient scapegoat for adversarial regimes wanting to distract attention away from their shortcomings. For instance, recently Venezuela’s embattled president Nicolás Maduro blamed a five-day nationwide power blackout caused by a woefully underfunded electric grid on American cyberwarfare.

Open cyberwarfare will also have a chilling effect on the continued development and use of the Internet. Already, some nations are refusing to deploy technologies developed by certain nations, while some others are attempting to develop their own software, operating systems and networks. This attack could also draw investment away from developing consumer technologies to designing cyber weapons, which will lead to a virtual arms race, with nations creating proprietary computing systems, forming closed communication networks and alliances — in essence, forming a Digital Iron Curtain.

Before things get that carried away, the world should agree that the Internet should not be used as a battlefield.

This may sound pacifistic, even far-fetched. But email, social media, search engines, even messaging platforms work better when more people use and contribute to them. As the Internet’s use worldwide has increased, so have the fortunes of the American public — who have helmed many of the virtual businesses and products that have shaped the 21st century.

The Internet is far too important to pull into warfare — not just for billions of people all over the world, but especially for Americans. The potential dangers of allowing open cyberwarfare are already clear enough. Nations shouldn’t wait until future attacks make them even clearer before they act.

 

*A version of this post appeared as an op-ed in the Washington Post and other publications. 

Research points to users being significantly more susceptible to social attacks they receive on mobile devices. This is the case for email-based spear phishing, spoofing attacks that attempt to mimic legitimate webpages, as well as attacks via social media.[1], [2], [3]

The reasons for this stem from the design of mobile devices and how users interact with them. In hardware terms, mobile devices have relatively limited screen sizes that restrict what can be accessed and viewed clearly. Most smartphones also limit the ability to view multiple pages side-by-side, and navigating pages and apps necessitates toggling between them–all of which makes it tedious for users to check the veracity of emails and requests while on mobile.

Mobile OSs and apps also restrict the availability of information often necessary for verifying whether an email or webpage is fraudulent. For instance, many mobile browsers limit users’ ability to assess the quality of a website’s SSL certificate. Likewise, many mobile email apps limit which aspects of the email header are visible and whether the email-source information is even accessible. Mobile software also enhances the prominence of GUI elements that foster action–accept, reply, send, like, and such–which make it easier for users to respond to a request. Thus, on the one hand, the hardware and software on mobile devices restrict the quality of information that is available, while on the other they make it easier for users to make snap decisions.

The final nail is driven by how people use mobile devices. Users often interact with their mobile devices while walking, talking, driving, and doing all manner of other activities that interfere with their ability to pay careful attention to incoming information. On-screen notifications that allow users to respond to incoming requests, often without even having to navigate back to the application from which the request emanated, further enhance the likelihood that already cognitively constrained users will respond reactively.

Thus, the confluence of design and how users interact with mobile devices makes it easier for users to make snap, often uninformed decisions–which significantly increases their susceptibility to social attacks on mobile devices.


[1] Vishwanath, A. (2016). Mobile device affordance: Explicating how smartphones influence the outcome of phishing attacks. Computers in Human Behavior, 63, 198-207.

[2] Vishwanath, A. (2017). Getting phished on social media. Decision Support Systems, 103, 70-81.

[3] Vishwanath, A., Harrison, B., & Ng, Y. J. (2018). Suspicion, cognition, and automaticity model of phishing susceptibility. Communication Research, 45(8), 1146-1166.

The first step in conducting online propaganda efforts and misinformation campaigns is almost always a fake social media profile. Phony profiles for nonexistent people worm their way into the social networks of real people, where they can spread their falsehoods. But neither social media companies nor technological innovations offer reliable ways to identify and remove social media profiles that don’t represent actual authentic people.

It might sound positive that over six months in late 2017 and early 2018, Facebook detected and suspended some 1.3 billion fake accounts. But an estimated 3 to 4 percent of the accounts that remain, or approximately 66 million to 88 million profiles, are also fake but haven’t yet been detected. Likewise, estimates are that 9 to 15 percent of Twitter’s 336 million accounts are fake.

Fake profiles aren’t just on Facebook and Twitter, and they’re not only targeting people in the U.S. In December 2017, German intelligence officials warned that Chinese agents using fake LinkedIn profiles were targeting more than 10,000 German government employees. And in mid-August, the Israeli military reported that Hamas was using fake profiles on Facebook, Instagram and WhatsApp to entrap Israeli soldiers into downloading malicious software.

Although social media companies have begun hiring more people and using artificial intelligence to detect fake profiles, that won’t be enough to review every profile in time to stop their misuse. As my research explores, the problem isn’t actually that people – and algorithms – create fake profiles online. What’s really wrong is that other people fall for them.

My research into why so many users have trouble spotting fake profiles has identified some ways people could get better at identifying phony accounts – and highlights some places technology companies could help.

People fall for fake profiles

To understand social media users’ thought processes, I created fake profiles on Facebook and sent out friend requests to 141 students in a large university. Each of the fake profiles varied in some way – such as having many or few fake friends, or whether there was a profile photo. The idea was to figure out whether one or another type of profile was most successful in getting accepted as a connection by real users – and then surveying the hoodwinked people to find out how it happened.

I found that only 30 percent of the targeted people rejected the request from a fake person. When surveyed two weeks later, 52 percent of users were still considering approving the request. Nearly one in five – 18 percent – had accepted the request right away. Of those who accepted it, 15 percent had responded to inquiries from the fake profile with personal information such as their home address, their student identification number, and their availability for a part-time internship. Another 40 percent of them were considering revealing private data.

But why?

When I interviewed the real people my fake profiles had targeted, the most important thing I found was that users fundamentally believe there is a person behind each profile. People told me they had thought the profile belonged to someone they knew, or possibly someone a friend knew. Not one person ever suspected the profile was a complete fabrication, expressly created to deceive them. Mistakenly thinking each friend request has come from a real person may cause people to accept friend requests simply to be polite and not hurt someone else’s feelings – even if they’re not sure they know the person.

In addition, almost all social media users decide whether to accept a connection based on a few key elements in the requester’s profile – chiefly how many friends the person has and how many mutual connections there are. I found that people who already have many connections are even less discerning, approving almost every request that comes in. So even a brand-new profile nets some victims. And with every new connection, the fake profile appears more realistic and has more mutual friends with others. This cascade of victims is how fake profiles acquire legitimacy and become widespread.

The spread can be fast because most social media sites are designed to keep users coming back, habitually checking notifications and responding immediately to connection requests. That tendency is even more pronounced on smartphones – which may explain why users accessing social media on smartphones are significantly more likely to accept fake profile requests than desktop or laptop computer users.

Illusions of safety

And users may think they’re safer than they actually are, wrongly assuming that a platform’s privacy settings will protect them from fake profiles. For instance, many users told me they believe that Facebook’s controls for granting differing access to friends versus others also protect them from fakers. Likewise, many LinkedIn users also told me they believe that because they post only professional information, the potential consequences for accepting rogue connections on it are limited.

But that’s a flawed assumption: Hackers can use any information gleaned from any platform. For instance, simply knowing on LinkedIn that someone is working at some business helps them craft emails to the person or others at the company. Furthermore, users who carelessly accept requests assuming their privacy controls protect them imperil other connections who haven’t set their controls as high.

Seeking solutions

Using social media safely means learning how to spot fake profiles and use privacy settings properly. There are numerous online sources for advice – including platforms’ own help pages. But too often it’s left to users to inform themselves, usually after they’ve already become victims of a social media scam – which always begins with accepting a fake request.

Adults should learn – and teach children – how to examine connection requests carefully in order to protect their devices, profiles and posts from prying eyes, and themselves from being maliciously manipulated. That includes reviewing connection requests during distraction-free periods of the day and using a computer rather than a smartphone to check out potential connections. It also involves identifying which of their actual friends tend to accept almost every friend request from anyone, making them weak links in the social network.

These are areas where social media platform companies can help. They’re already creating mechanisms to track app usage and to pause notifications, helping people avoid being inundated or needing to constantly react. That’s a good start – but they could do more.

For instance, social media sites could show users indicators of how many of their connections are inactive for long periods, helping people purge their friend networks from time to time. They could also show which connections have suddenly acquired large numbers of friends, and which ones accept unusually high percentages of friend requests.
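
None of this requires exotic technology. As a sketch of the idea (the data layout and thresholds here are illustrative assumptions, not any platform’s actual API), flagging such connections is a simple pass over account metadata:

```python
from datetime import datetime, timedelta

# Illustrative records of a user's connections, not real data.
connections = [
    {"name": "alice", "last_active": datetime(2018, 8, 20),
     "requests_received": 120, "requests_accepted": 118},
    {"name": "bob", "last_active": datetime(2017, 1, 5),
     "requests_received": 40, "requests_accepted": 12},
]

TODAY = datetime(2018, 9, 1)

for c in connections:
    flags = []
    if TODAY - c["last_active"] > timedelta(days=365):
        flags.append("inactive for over a year")
    if c["requests_accepted"] / c["requests_received"] > 0.9:
        flags.append("accepts almost every request")
    if flags:
        print(f'{c["name"]}: {"; ".join(flags)}')
```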

Social media companies need to do more to help users identify and report potentially fake profiles, augmenting their own staff and automated efforts. Social media sites also need to communicate with each other. Many fake profiles are reused across different social networks. But if Facebook blocks a faker, Twitter may not. When one site blocks a profile, it should send key information – such as the profile’s name and email address – to other platforms so they can investigate and potentially block the fraud there too.

[A version of this article appeared on The Conversation http://theconversation.com/why-do-so-many-people-fall-for-fake-profiles-online-102754.]

The continued prosecution of “All the President’s Men” does little to stop the Russians from attempting to influence America’s upcoming midterm elections. And reports from Missouri to California suggest they are already looking for our cyber weaknesses to exploit.

Chief among these: spear phishing—emails containing hyperlinks to fake websites—that the Russians used to hack into the DNC emails and set in motion their 2016 influence campaign.

After two years of congressional hearings, indictments, and investigations, spear phishing not only continues to be the most common attack used by hackers, but the Russians are still using it against us.

This is because, in the ensuing time, spear phishing has become even more virulent, thanks to the availability of sophisticated malware, some of it stolen from intelligence agencies; troves of people’s personal information from previous breaches; and ongoing developments in machine learning that can deep-dive into this data and craft highly effective attacks.

Just last week, Microsoft blocked six fake websites that were likely to be used for spear phishing the US Senate, set up by the same Russian intelligence unit responsible for the 2016 DNC hack.

But the Internet is vast, and many more fundamental weaknesses remain available to exploit.

Take the URLs with which we identify websites. Thanks to Internationalized Domain Names (IDNs) that allow websites to be registered in languages other than English, many fake websites used for spear phishing are registered using homoglyphs: characters from other languages that look like English characters. For instance, a fake domain for Amazon.com could be registered by replacing the English “a” or “o” with their Cyrillic equivalents. Such URLs are hard for people to discern visually, and even email scanning programs, which are trained to flag words like “password” common in phishing emails (such as the one the Russians used in 2016 to hack into John Podesta’s emails), can be tricked. And while many browsers prevent URLs with homoglyphs from being displayed, some, like Firefox, still expect users to alter their browser settings for protection.
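
As a concrete illustration, here is a minimal Python sketch of the kind of mixed-script check a browser or mail filter can apply. Real filters use more elaborate heuristics, and the domains below are illustrative:

```python
import unicodedata

def mixes_scripts(domain):
    """Flag domains that mix letters from different Unicode scripts."""
    scripts = set()
    for ch in domain:
        if ch.isalpha():
            # Unicode character names begin with the script, e.g.
            # "LATIN SMALL LETTER A" vs. "CYRILLIC SMALL LETTER A".
            scripts.add(unicodedata.name(ch, "UNKNOWN").split()[0])
    return len(scripts) > 1

print(mixes_scripts("amazon.com"))       # False: all Latin letters
print(mixes_scripts("am\u0430zon.com"))  # True: hidden Cyrillic "a"
```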

Making things worse is the proliferation of Certification Authorities (CAs), the organizations issuing the digital certificates that make the lock icon and HTTPS appear next to a website’s name in browsers. While users are taught to trust these symbols, an estimated one in four phishing websites actually has an HTTPS certificate. This is because some CAs have been hacked, meaning there are many rogue certificates out there, while others have doled out free certificates to just about anyone. For instance, one CA last year issued certificates to 15,000 websites with names containing some combination of the word “PayPal,” all of them for spear phishing.
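
For the technically inclined, the certificate behind the lock icon is easy to inspect. This standard-library Python sketch retrieves a site’s certificate and reports which CA issued it; the hostname is illustrative, and knowing the issuer is only a starting point for judging a certificate’s trustworthiness:

```python
import socket
import ssl

def cert_issuer(hostname, port=443):
    """Fetch a site's TLS certificate and return its issuer fields."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the issuer as nested tuples of (key, value).
    return dict(item[0] for item in cert["issuer"])

print(cert_issuer("www.paypal.com").get("organizationName"))
```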

Besides these, the problem of phony social media profiles, which the Russians used in 2016 for phishing, trolling and spreading fake news, remains intractable. Just last week, the Israel Defense Forces (IDF) reported a social media phishing campaign by Hamas, which lured troops into downloading malware using fake social media profiles on Facebook, Instagram, and WhatsApp. Also last week, Facebook, followed by Twitter, blocked profiles linked to Iranian and Russian operatives that were being used to spread misinformation.

These attacks, however, reveal a critical weakness of influence campaigns: by design, they utilize overlapping profiles across multiple platforms. Yet today, social media organizations police their networks internally and keep information in their own “walled gardens.”

A better solution, therefore, would be to host data on suspect profiles and pages in a unified, open-source repository, one that accepts inputs from media organizations, security organizations, even users who find things awry. Such an approach would help detect and track coordinated social media influence campaigns, which would be of enormous value to law enforcement and to media organizations big and small, many of which are targeted using the same profiles.

A platform for this could be the Certificate Transparency framework, in which digital certificates are openly logged and verified, and which has been adopted by many popular browsers and operating systems. For now, this framework only audits digital certificates, but it could be expanded to encompass domain name auditing and social media pages.
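
As a sketch of what such auditing looks like today, the public crt.sh service exposes Certificate Transparency logs over a simple JSON interface. The query below lists logged certificates whose names merely contain a brand keyword; the endpoint and field names reflect my understanding of crt.sh’s interface, which may change:

```python
import requests  # third-party: pip install requests

def suspicious_certs(keyword):
    """List logged certificates whose names contain the keyword."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%{keyword}%", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    for entry in resp.json():
        name = (entry.get("common_name") or "").lower()
        # Keep look-alikes: the keyword buried in an unrelated domain.
        if keyword in name and not name.endswith(f"{keyword}.com"):
            yield name, entry.get("issuer_name", "")

for name, issuer in suspicious_certs("paypal"):
    print(name, "|", issuer)
```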

Finally, we must improve user education. Most users know little about homoglyphs and even less about how to change their browser settings to guard against them. Furthermore, many users, after being repeatedly trained to look for HTTPS icons on websites, have come to trust them implicitly. Many even mistake such symbols to mean that a website is legitimate. Because even an encrypted site can be fraudulent, users have to be taught to be cautious and to assess factors ranging from the spelling of the domain name, to the quality of information on the website, to its digital certificate and the CA that issued it. Such initiatives must be complemented with better, more uniform browser design, so users do not have to tinker with settings to avoid being phished.

Achieving all this requires leadership, but the White House, which would ordinarily be best positioned to provide it, recently fired its cybersecurity czar and eliminated the role. And while, according to the GAO, federal agencies have yet to address over a third of its 3,000 cybersecurity recommendations, the President instead talks about developing a Space Force. Last we knew, the Martians hadn’t landed, but the Russians sure are probing our computer systems.

 

*A version of this post was published in CNN: https://www.cnn.com/2018/09/01/opinions/spear-phishing-has-become-even-more-dangerous-opinion-vishwanath/index.html

In late 2014, in the aftermath of the Sony Pictures Entertainment breach, I advocated the development of a cyber breach reporting portal where individuals could report suspected cyber incidents. Such a system, I argued, would work as an early warning system, making IT aware of an attack before it became widespread; it would also work as a centralized system for remediation, where affected victims could seek help.

Since then many organizations all over the world have developed such portals for their employees to report suspected breaches. These range from web reporting forms and email in-boxes to 24-hour help-desks where employees can find remedial support.

While there is little direct research on how well these portals work, extant reports point to a rather low utilization rate. For instance, Verizon’s 2018 Data Breach Investigations Report (DBIR) found that among 10,000 employees across different organizations who were subjected to various test phishing campaigns, fewer than 17% reported the phishing to IT. My own experience advising firms on their user vulnerability reduction initiatives has turned up similarly low reporting rates.

To counter this, many CSOs have resorted to incentives and punishments to increase employee reporting of suspect emails and cyber activities. But the question, one I am often asked when advising organizations on IT security, is: which of these really works?

First, let’s begin with punishments. We know from a century of research on human motivation that punishments tend to be salient but not necessarily effective at motivating people the right way. That is, people remember threats, but threats do not improve performance, especially when the task at hand requires mental effort.

For instance, when Admiral Michael Rogers, the former head of the NSA, famously remarked that individuals who fall for a phishing test should be court-martialed, it sure got noticed and was widely reported. But such actions breed fear, anxiety, and worry, not more thoughtful action. This is precisely why phishing emails carry warnings and threats: when people focus on the threats, they end up ignoring the rest of the information in the email that could reveal the deception.

In surveys I have conducted in organizations that use punishments to foster reporting, the vast majority of users reported changing how they use email: many avoided opening email at certain times of the day, waited for people to resend their original email requests, or, in some cases, forwarded work emails to non-IT-authorized mobile devices and email accounts.

These may be effective ways of avoiding getting caught by a phishing test, but they are not necessarily good for organizational productivity or cybersecurity.

On the flip side are rewards for reporting phishing emails. Some organizations have used monetary compensation, others have experimented with token rewards, and still others with mere recognition of the employee who reported. Which of these works best? The surprising answer: recognition.

The reasons are as follows. First, monetary compensation puts a dollar amount on cybercrime reporting, and that value is difficult to determine. Do we estimate the value of a report based on when the employee sent it in, i.e., immediately after the attack versus much later, on the accuracy of the report, or on the size of the breach it prevented? Each estimation process has its own pitfalls, and all of them focus on the report rather than on the employee doing the reporting or on what it means for them to actually perform the reporting function.

Monetary incentives have another problem: they turn reporting into a game. This changes employees’ motivation: rather than becoming more careful about scrutinizing incoming emails, which is the indirect purpose of such reporting, they learn that more reporting increases their chances of winning a prize.

Consequently, many employees report anything they find troubling, sometimes even emails they know are simply spam. On the one hand, this significantly increases the load on IT helpdesks and decreases their chances of catching a real phishing email. On the other hand, too many unnecessary reports decrease the odds of winning a reward, which over time reduces employees’ motivation to report.

Compared to these approaches, social rewards work better: public praise, recognition, and appreciation through announcements acknowledging the users who have reported suspicious emails, along with communication that conveys the value of their reporting.

This is because monetary incentives appeal to employees’ base needs, which are already met through their jobs, while social recognition appeals to higher-order needs, what the famous motivational psychologist Abraham Maslow termed “esteem needs”: the human need for achievement, respect, prestige, and a sense of accomplishment.

Being publicly recognized for reporting suspect emails makes employees feel valued for their effort, which on the face of it is an act of altruism with little direct relationship to their workflow or productivity. Effectively communicating the value of their reporting thus focuses attention on the employees doing the reporting.

This has enduring effects, both rewarding the employee being feted and motivating others to follow their lead, which altogether builds a culture of cyber safety within the organization.

As email-based attacks targeting organizations become more sophisticated, employees are the first, and at times the only, line of defense against them. Effectively harnessing the power of employees through appropriate strategies for incentivizing reporting is the difference between organizations that merely react to cyber-attacks and those that proactively stop them.

 

* A version of this post appeared in InfoSecurity Magazine

In the not-so-distant future, we will be presented with the version of the news we wish to read — not the news that some reporter, columnist or editorial board decides we need to read. And it will be entirely written by artificial intelligence (AI).

Think this is science fiction? Think again. Many of us probably don’t realize that AI programs authored many parts of the summer Olympics coverage and also provided readers with up-to-date reports, personalized to the reader’s location, on nearly 500 House, Senate, and gubernatorial races during the last election cycle.

And those news feeds on Facebook and Google News that the majority of people trust more than the original news sources? Those, too, employ machine-learning algorithms to match us with news and ads. And we saw how easily those algorithms were co-opted by the Russians to influence our last presidential election.

Follow the natural progression of these developments, and it leads to an ominous future in which AI entirely writes and presents the news exactly the way each of us would like to read it — forever altering democracy as we know it.

In this future, journalists might still report on events, but it will be AI that takes these inputs, injects data from its vast historical repositories and formulates a multitude of different themes, each making different arguments and coming to different conclusions. Then, using data about readers’ interests learned from their social media, online shopping and browsing history, AI will present them with the version of the news they would like to read.

For example, for a reader with strong views on the environment, news of heavy flooding in some place of interest might be presented from a global warming standpoint, with conclusions about how human activity has hurt the environment. For another reader skeptical of climate science, the same story might be presented with data and conclusions questioning the validity of weather science.

Stories might be presented in brief, for readers who like to skim the news, or in depth, for those who like to delve into details. A story may even include actionable links to online stores selling essential supplies for those in the flood zone, or social media links connecting readers with others who share their interests. In essence, it will be the perfect AI-created echo chamber, where each person will be an audience of one, connected to others who are always agreeable.

This hyper-personalized, AI-driven reality is closer than people realize, and it goes beyond the Olympics and election coverage I mentioned. After his purchase of The Washington Post, Jeff Bezos introduced Heliograf, an AI-based writing tool that, given predefined themes and phrases, can write complete articles. This software, while still far from autonomous, has already authored about 850 articles that have cumulatively garnered half a million page views.

Others like The New York Times, the Associated Press and many financial organizations are also testing and utilizing similar software for everything from local news reporting to financial report writing. Just consider this AP story on a Maryland-based company’s third-quarter results, written by AI.
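
At its core, this kind of report writing is template filling. A toy sketch in Python (not Heliograf’s or the AP’s actual system, and the figures are invented) shows the idea: structured data slotted into predefined narrative phrases.

```python
# Illustrative earnings figures, not real data.
report = {"company": "Acme Corp", "eps": 1.12, "estimate": 1.05}

# One of many predefined narrative templates a newsbot might hold.
template = ("{company} reported third-quarter earnings of ${eps:.2f} per "
            "share, {verdict} analysts' estimates of ${estimate:.2f}.")

verdict = "beating" if report["eps"] > report["estimate"] else "missing"
print(template.format(verdict=verdict, **report))
```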

Furthermore, thanks to Google, Facebook, Amazon and other online services tracking virtually every aspect of people’s online and even offline behaviors, we already have deep data on almost every American’s personal opinions and preferences — which these companies already use to target and position advertisements. All that’s missing is for one media organization to combine these processes.

And there is nothing to stop a company, especially one such as Amazon or even Apple, from doing it. After all, it would create the perfectly “sticky website,” where people, content and products are precisely matched — an advertiser’s dream come true.

Besides, there is no policy or law that prohibits any of this: none whatsoever prescribing that the news must be authored by people. And news consumers would love such personalized news. After all, nearly half of news consumers, both right- and left-leaning, not only prefer to hear political views in line with their own thinking on social media, but also tend to block or defriend people who disagree with their avowed political views.

The majority of news consumers also “happen upon the news” online rather passively, often while doing something else. They usually follow the same few news sources rather than looking for another source to reconfirm what they are presented, let alone get a different perspective.

So the audience preference for an AI-driven, single news website that targets them with hyper-personalized content is already here, policies prohibiting it are absent and the technology for it is almost ready. In other words, this media future is primed for disruption.

A win-win for marketers, advertisers and readers — but a giant loss for democracy as we know it, because it will take away the core of what makes democracies successful: well-informed citizens, who form opinions not by simply reading articles they agree with, but by examining that which they don’t agree with — and then finding common ground.

However, we can save this critical part of our democracy through forward-thinking policy, media self-policing and a bit of introspection.

More specifically, first, when it comes to communication technology, policy making tends to be highly reactive. From the days of the Radio Act of 1912, which was a reaction to the sinking of the Titanic and eventually led to the creation of the Federal Communications Commission, to the many congressional hearings after the Russian interference in our elections, we have dealt with the media reactively. What we need instead is to proactively address what we know is more than likely coming.

The problem with AI is not only that it will do things faster or better than human journalists; it is also that we will trust it implicitly. We already see this trend in court systems across the nation, which use AI-based programs to decide the punishments meted out to people convicted of crimes, without fully examining the underlying computational algorithms governing those programs.

Likewise, the AI-generated news of the future will likely be considered more trustworthy, unless policies are enacted that limit the extent to which algorithms can access audience profile data, thereby reducing the ability of the media to target each reader with their own version of “alternative facts.”

Second, the news media needs to act responsibly and self-police. With many articles already being generated and matched to readers by AI, news sites need to start providing indicators of how such content matching was done, what parts of the content were authored by AI and, in the future, how many different versions of the story were created. This would help readers make up their own minds about the credibility of what they read.

Finally, the reading public bears the largest responsibility. What our recent presidential election has taught us is that what matters is not simply the availability of the media, the presence of competing content or even its accessibility. It is human agency. In other words, we the people have to actively seek information, some that is agreeable and a lot that is not, some that’s online and some that comes from discussions with people who disagree with us, and from it form our informed views. And that’s something tomorrow’s AI could well take away from us.

 

 

*A version of this post appeared in CNN

Amazon Go, the online retailer’s first completely automated store, debuted in Seattle last week. Using a bevy of smart cameras, deep machine learning and artificial intelligence (AI) algorithms, the store makes it possible for shoppers to simply pick up the products they like and go, with their accounts being automatically charged for the products — completely eliminating the need for cashiers and checkout lines. Though staff members still stock the shelves, they too will likely soon be replaced by robots.

This is revolutionary and will likely be how all stores will operate in the near future. Stores won’t have to invest in employees — salaries, training, overtime, health care. Customers will like it, too. No more standing in boring check-out lines, interacting with indifferent staff.

What we are witnessing is surely the future of the retail industry, but there is also a downside that needs our attention. Cashiers and retail salespersons are two of the most common occupations in the US, together employing roughly 8 million people, many of whom are younger white women making modest yearly incomes in the $20,000–$25,000 range.

Most of these jobs require little formal education for entry, and so the sector supports many individuals with relatively low skills and education, who are likely to find it particularly hard to quickly retool for a different employment sector. Most of them will likely find themselves jobless.

Of course, this isn’t the only sector that AI will decimate. Driverless trucks are already being tested on major highways. They, too, have many advantages over today’s long haulers: they can run 24/7 and never get fatigued; no need for mandatory breaks; no more wasted fuel idling overnight.

Truck drivers account for a third of the cost of this $700 billion industry, and there are over 1 million mostly middle-aged, white male truckers in the US. Their jobs will be rendered obsolete. And these numbers will likely be even higher once driverless cars replace all taxi and local delivery drivers.

Such fears of computing-led obsolescence aren’t new. In 1964, just a few years after IBM launched its first solid-state mainframe computers, “The Twilight Zone” ran an episode titled “The Brain Center at Whipple’s,” in which Mr. Whipple, the owner of a vast manufacturing corporation, replaced all his factory workers with a room-sized computing machine.

Mr. Whipple’s economic justification for his “X109B14 modified transistorized totally automated machine” could just as well be applied to AI: “It costs 2 cents an hour to run … it lasts indefinitely … it gets no wrinkles, no arthritis, no blocked arteries … two of them replace 114 men who take no coffee breaks, no sick leaves, no vacations with pay.” In the show, Whipple’s machine quickly replaced everyone from the plant’s workers to its foremen to all the secretaries.

The story was prescient, and many of its fictionalized fears in time came true: most of the large manufacturing plants were indeed shut down; secretaries and typists mostly became obsolete; and the jobs that created the American middle class were all eventually outsourced. Much of this computer-driven automation replaced low-skilled, easily routinizable functions.

But AI is different. It utilizes deep-learning algorithms and acquires skills, so it can routinize many complex functions.

Take journalism, a task that has always been performed by humans. After Jeff Bezos’s purchase of The Washington Post, the paper tested Heliograf, a new AI-based writing program that automates report-writing using predefined narrative templates and phrases. From the Olympics to the elections, the software has already auto-published close to 1,000 articles.

And given their ability to churn through virtually any amount of data and spit out endless reports instantaneously, AI newsbots are way better than humans at such work. It’s no surprise, then, that USA Today, Reuters, BuzzFeed and a growing number of financial organizations are already employing AI for tasks ranging from reporting to data authentication.

In the near future, AI will replace many other so-called highly skilled professions, from chefs to pilots and surgeons. Going back to school, learning new skills and retooling might not be an option, because it would be impossible to learn as quickly as AI, to provide the kind of nuance that comes from distilling terabytes of information, or to otherwise outpace it. Besides, in the human-time it takes to acquire a new skill, AI might already have learned to replace it.

If these trends materialize — and some might not — we are looking at a seismic shift in the American economy. If the last election was a push back against globalization, imagine what a rage against AI will look like.

The solution, of course, is not to stop the march of progress but to prepare for it with forward-thinking investments in education, human capital and public policy. While Washington is busy cleaning up yesterday’s self-inflicted mess, this is tomorrow’s crisis, and it requires attention today.

At the end of the episode, Mr. Whipple, too, was rendered obsolete, replaced by a robot. Rod Serling’s ominous closing message: “Man becomes clever instead of becoming wise; he becomes inventive and not thoughtful; and sometimes, as in the case of Mr. Whipple, he can create himself right out of existence.” One hopes that this isn’t what AI does to us.

 

*A version of this post appeared in CNN