Cybersecurity Musings

Recent Posts

Imagine the shock of receiving communication from a hacker saying that your child’s most sensitive information — from passports and birth certificates to profile pictures and classroom locations — will be exposed on the Internet unless their school administrators pay a ransom. 

This horrifying situation recently occurred in Nevada’s Clark County School District (CCSD), the nation’s fifth-largest school district, serving 300,000 students. It’s an ongoing nightmare for parents in the district, which suffered a breach two years ago: they have been better informed by the hackers than by school officials, who have been far less transparent about what’s happening.

While CCSD’s plight may seem unique, the response to it is distressingly familiar. Across the country, schools have become prime targets. In 2022, 1,436 separate schools and colleges fell victim to cyberattacks, affecting over a million students. Education is among the most targeted sectors and has the highest rates of ransom payment.

Why Schools Are Frequent Cybercrime Victims

The reasons behind the education sector’s vulnerability to attacks are fourfold:

  1. Aging IT infrastructure and lower cybersecurity expertise among staff make schools attractive targets for exploitation by cybercriminals.

  2. Organizations subject to breach notification mandates often turn to law firms that prioritize limiting liability over open communication. The loss of transparency leads to cryptic notifications that provide little useful information and merely offer generic credit monitoring services as the salve for all breaches.

  3. Hackers’ tactics have evolved. The widespread availability of advanced AI programs has made creating deepfakes, executing social engineering attacks, and impersonating individuals alarmingly easy. Credit monitoring, which focuses on financial data, is ill-equipped to protect against these emerging threats.

  4. Children are more immersed in technology than ever before, yet have limited engagement with cybersecurity. These digital natives encounter technology at a young age, often without a full understanding of the risks they face. They often seek help from peers who share their limited knowledge, inadvertently compounding the cybersecurity risk for their entire cohort. 

How to Fix the Issues

There is an urgent need to fix these problems because the security of all our children is at stake. Surely the most technologically advanced nation in the world can do better to protect its most precious asset: the personal information of future generations.

Fix Teacher Shortages

Today’s teachers face a stark reality: Even after years on the job, they earn half the salary of cybersecurity professionals, while working equally demanding hours, often with additional administrative responsibilities. COVID-related workforce shortages have forced many teachers to shoulder heavier workloads, diminishing the profession’s prestige and deterring new graduates from entering the field. To revitalize the field, teacher salaries should be pegged to real-world standards, with commensurate scope for career advancement and mobility. Ensuring the security of our children’s data and fortifying their digital future requires a comprehensive approach: technological enhancements, competitive compensation, and strategic recruitment of teachers.

Reform Credit Monitoring

Second, we must reform the credit-monitoring system. Presently, it operates as a fee-based service, with individuals receiving it for however long breached companies agree to pay for it. Cost-free, universal credit monitoring and ID protection should be available across the lifespan, regardless of whether someone was breached. This would greatly enhance the quality of overall credit-monitoring data (before it is corrupted by cybercrime), and parents wouldn’t need to lock their children’s credit or worry about losing protection when a paid service expires. This approach safeguards Americans’ creditworthiness, preserves the financial system’s integrity, and promotes economic growth.

It’s time to transcend the limited, fee-based models and establish a universally accessible system that shields every American’s credit from the ever-evolving threats of cyberattacks and identity theft. This investment in our collective financial well-being will yield benefits for generations to come.

Train Students

Lastly, we must prioritize cyber-hygiene training for K-12 students. The White House’s National Cyber Workforce and Education strategy underscores the urgency of cyber education in children’s formative years. While states like New York have taken steps by introducing computer science and data fluency standards, these initiatives fall short. The current goal of digital proficiency is akin to teaching children not to start fires. We need to go further and equip them with the skills to put fires out. This demands comprehensive cyber-hygiene training: educating children on protecting their data in transit, safeguarding their online identities, and effectively responding to and mitigating attacks.

It’s not sufficient for children to comprehend how data gets co-opted; they must grasp how co-opted data can be exploited. A comprehensive K-12 cyber-hygiene program imparts the knowledge, tech-savvy media habits, and deception-recognition skills required to prepare future generations.

Failing to Act Is Failing Our Children

Amid the never-ending news about wars and natural calamities, cyberattacks rarely make headlines today. We may avoid discussing them and allow lawyers to craft inscrutable messages that shield organizations from breach liability. But what we are really doing is selling out our children’s future. Our actions today shape it. The time to act is now.

*A version of this post appeared in DarkReading. 

Degrees of separation can reveal how likely you are to be hacked. Take this free 8-question quiz and find out how many degrees separate you from a hacker: https://0oxloyflc3p.typeform.com/to/mTc2sV8Q. The answer will instantly reveal your likelihood of being hacked.

Credit: Scott Garfield

Degrees imply steps: the number of people you know, who connect you to others they know, who ultimately connect you to a given person. The fewer the degrees, the closer you are.

And because the world of people is finite and people are interconnected, most of us are connected — even to total strangers — by six or fewer degrees. For that reason, it’s called six degrees of separation.

The basis of this is research from the 1960s: Stanley Milgram’s work on small worlds. There is even an online game where you can look up the degrees separating any performer from the actor Kevin Bacon.

All this was before social media demystified the lives of the rich and famous. Now many of us would rather not be close to someone in Hollywood.

But there is another group we’d rather not be close to: Hackers.

You see, the same degrees that separate us from each other also separate us from hackers. Only here, the degrees translate to how much of our personal information is out there, available for them to exploit.

Sadly, because of the many data breaches over the decades, there is already enough information about all of us out there. Hackers can use this data to find just about anyone, anywhere.

They can use the data, even train generative AI programs on it, to craft highly persuasive social engineering attacks.

This means the fewer the degrees separating you from a hacker, the greater the risk that you, your devices, and your organization will be hacked.

So, while you may not care about being connected to Tom Cruise, you should care about how many degrees separate you from a hacker.

Knowing this can reveal who is likely to be in a hacker’s crosshairs and who needs urgent protection. It can point to the weakest link in your personal network or in your organization.

But how do you figure out the degrees separating you from a hacker?

There is a simple way: an 8-question quiz, developed by a Harvard-trained technologist, that can give you the answer.

Here is the quiz: https://0oxloyflc3p.typeform.com/to/mTc2sV8Q

The quiz is free, and it doesn’t collect any of the data you enter. The results are instantaneous, and you won’t need to enter a credit card number or email address to get them. The answer will reveal the degrees separating you from a hacker, and with them, your cyber risk.

The answer is important to you. Use it. Protect yourself and those around you. Share this with people and organizations you care about.


*A version of this post appeared here: https://medium.com/@avishy001/it-may-not-matter-how-close-you-are-to-tom-cruise-but-it-matters-how-close-you-are-to-a-hacker-f4f35f333080

Photo by Giorgio Trovato on Unsplash

Security is more like a seat belt than a technical challenge. It’s time for developers to shift away from a product-first mentality and craft defenses that are built around user behaviors.

Technology designers begin by building a product and testing it on users. The product comes first; user input is used to confirm its viability and improve upon it. The approach makes sense. McDonald’s and Starbucks do the same. People can’t imagine new products, just like they can’t imagine recipes, without experiencing them.

But the paradigm has also been extended to the design of security technologies, where we build programs for user protection and then ask users to apply them. And this doesn’t make sense.

Security isn’t a conceptual idea. People already use email, browse the Web, use social media, and share files and images. Security is an improvement layered over things users already do when sending emails, browsing, and sharing online. It’s similar to asking people to wear a seat belt.

Time to Look at Security Differently

Our approach to security, though, is like teaching driver safety while ignoring how people drive. Doing this all but ensures that users either blindly adopt something, believing it’s better, or on the flip side, when forced, merely comply with it. Either way, the outcomes are suboptimal.

Take the case of VPN software. VPNs are heavily promoted to users as must-have security and data-protection tools, but most offer limited to no real protection. Worse, they put users at greater risk: believing in such protections, users take more risks. Also, consider the security awareness training now mandated by many organizations. Those who find the training irrelevant to their specific use cases find workarounds, often creating innumerable security risks.

There’s a reason for all this. Most security processes are designed by engineers with a background in developing technology products. They approach security as a technical challenge. Users are just another input into the system, no different from software and hardware that can be programmed to perform predictable functions. The goal is to constrain actions based on a predefined template of what inputs are suitable, so that the outcomes become predictable. None of this is premised on what the user needs; it reflects a programming agenda set out in advance.

Examples of this can be found in the security functions programmed into much of today’s software. Take email apps, some of which allow users to check an incoming email’s source header, an important layer of information that can reveal a sender’s identity, while others don’t. Or take mobile browsers, where, again, some allow users to check the SSL certificate quality while others don’t, even though users have the same needs across browsers. It’s not like someone needs to verify SSL or the source header only when they’re on a specific app. What these differences reflect is each programming group’s distinct view of how their product should be used by the user — a product-first mentality.
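To make this concrete, here is a minimal sketch, using only Python’s standard library, of the kind of source-header check that some email apps surface and others hide. The message, addresses, and domains are invented for illustration:

```python
# A minimal sketch (not any vendor's API) of inspecting an email's
# source headers, the sender-identity signals some apps expose.
from email import message_from_string
from email.utils import parseaddr

raw_email = """\
Return-Path: <billing@examp1e-invoices.com>
Received: from mail.examp1e-invoices.com (203.0.113.7) by mx.recipient.example
From: "Accounts Payable" <billing@example.com>
Subject: Overdue invoice

Please see the attached invoice.
"""

msg = message_from_string(raw_email)

_, from_addr = parseaddr(msg["From"])                    # address the user sees
return_path = parseaddr(msg.get("Return-Path", ""))[1]   # where it really entered

print("From:       ", from_addr)
print("Return-Path:", return_path)
for hop in msg.get_all("Received", []):
    print("Received:   ", hop)

# A mismatch between the visible From domain and the Return-Path /
# Received domains is a classic spoofing red flag.
if from_addr.split("@")[-1] != return_path.split("@")[-1]:
    print("Warning: sender domains do not match")
```

The same check is impossible for a user to perform when an app never exposes the raw headers, which is exactly the design inconsistency described above.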

Users purchase, install, or comply with security requirements believing that the developers of different security technologies deliver what they promise — which is why some users are even more cavalier in their online actions while using such technologies.

Time for a User-First Security Approach

It’s imperative that we invert the security paradigm — put users first, and then build defense around them. This is not only because we must protect people but also because, by fostering a false sense of protection, we’re fomenting risk and making them more vulnerable. Organizations also need this to control costs. Even as the economies of the world have teetered from pandemics and wars, organizational security spending in the past decade has increased geometrically.

User-first security must begin with an understanding of how people use computing technology. We have to ask: What is it that makes users vulnerable to hacking via email, messaging, social media, browsing, and file sharing?

We have to disentangle the basis for risk and locate its behavioral, cerebral, and technical roots. This is the information that developers have long ignored as they built their security products, which is why even the most security-minded companies still get breached.

Pay Attention to Online Behavior

Many of these questions have already been answered. The science of security has explained what makes users vulnerable to social engineering. Because social engineering targets a variety of online actions, the knowledge can be applied to explain a wide swath of behaviors.

Among the factors identified are cyber-risk beliefs (ideas users hold in their minds about the risk of online actions) and cognitive processing strategies (how users cognitively address information), which dictate the amount of focused attention users pay to information when online. Another set of factors are media habits and rituals, partly influenced by the types of devices used and partly by organizational norms. Together, beliefs, processing styles, and habits influence whether a piece of online communication — email, message, webpage, text — triggers suspicion.

Photo by Dex Ezekiel on Unsplash

Train, Measure, and Track User Suspicions

Suspicion is that unease when encountering something, the sense that something is off. It almost always leads to information seeking and, if a person is armed with the right types of knowledge or experience, leads to deception-detection and correction. By measuring suspicion along with the cognitive and behavioral factors leading to phishing vulnerability, organizations can diagnose what made users vulnerable. This information can be quantified and converted into a risk index which they can use to identify those most at risk — the weakest links — and protect them better.
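To illustrate, here is a minimal sketch of how measured suspicion and the cognitive and behavioral factors above could be rolled into such a risk index. The factor names, scales, and weights are illustrative assumptions, not a published model:

```python
# A minimal sketch; a real index would be fit against observed
# phishing outcomes rather than hand-tuned weights.
from dataclasses import dataclass

@dataclass
class UserAssessment:
    suspicion: float        # 0-1: measured propensity to pause and question
    belief_accuracy: float  # 0-1: how accurate the user's cyber-risk beliefs are
    habit_strength: float   # 0-1: how mindless/automatic their media habits are

def risk_index(u: UserAssessment) -> float:
    """Higher = more vulnerable, on a 0-100 scale."""
    protective = 0.5 * u.suspicion + 0.3 * u.belief_accuracy
    exposure = 0.2 * u.habit_strength
    raw = 1.0 - protective + exposure
    return round(100 * max(0.0, min(1.0, raw)), 1)

users = {
    "alice": UserAssessment(suspicion=0.9, belief_accuracy=0.8, habit_strength=0.2),
    "bob":   UserAssessment(suspicion=0.3, belief_accuracy=0.4, habit_strength=0.9),
}

# Rank users so the weakest links surface first.
for name, u in sorted(users.items(), key=lambda kv: -risk_index(kv[1])):
    print(f"{name}: risk index {risk_index(u)}")
```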

By capturing these factors, we can track how users get co-opted through various attacks, understand why they get deceived, and develop solutions to mitigate it. We can craft solutions around the problem as experienced by end users. We can do away with security mandates, and replace them with solutions that are relevant to users.

After billions spent putting security technology in front of users, we remain just as vulnerable to the kinds of cyberattacks that emerged on the AOL network in the 1990s. It’s time we changed this — and built security around users.

A version of this article can be found here: https://www.darkreading.com/risk/build-security-around-users-a-human-first-approach-to-cyber-resilience

Photo by Philipp Katzenberger on Unsplash

Defend against phishing attacks with more than user training. Measure users’ suspicion levels along with cognitive and behavioral factors, then build a risk index and use the information to better protect those who are most vulnerable.


As Russian tanks creaked into Ukraine, CEOs and IT managers throughout the United States and much of the free world started sending out emails warning their employees about impending spear-phishing attacks.

It made sense: Spear-phishing was what the Russians had used on Ukrainians many times over the past half-decade, such as when they shut down the country’s electrical grid on one of its coldest winter nights. It was also what the Russians had used against the Democratic National Committee and targets across the US.

On one hand, the email missives from CEOs were refreshing. People were taking the threat of phishing seriously, which wasn’t the case in 2014 when I started warning about its dangers on CNN.

On the other hand, it was sobering. There wasn’t much else organizations had figured out to do.

Sending messages to warn people was what AOL’s CEO resorted to back in 1997, when spear-phishing first emerged and got its name. Budding hackers of the time were impersonating AOL administrators and fishing for subscribers’ personal information. That was almost three decades ago, many lifetimes in Internet years.

In the interim, organizations have spent billions on security technologies and countless hours in security training. For context, a decade ago, Bank of America (BoA) was spending $400 million on cybersecurity. It now spends $1 billion per year on it. Yet thousands of its customer accounts in California were hacked last year.

And BoA isn’t alone. This year, Microsoft, Nvidia, Samsung, LG, and T-Mobile — which recently paid out a $350 million settlement to customers because of a breach in 2021 — were hacked. All fell victim to spear-phishing attacks. No question that the employees in these companies are experienced and well-trained in detecting such attacks.

Photo by King’s Church International on Unsplash

Flawed Approach

Clearly, something is fundamentally flawed in our approach when you consider that, after all this, email-based compromises increased by 35% in 2021 and cost American businesses over $2.4 billion.

A big part of the problem is the current paradigm of user training. It primarily revolves around some form of cyber-safety instruction, usually following a mock phishing email test. The tests are sent periodically, and user failures are tracked — serving as an indicator of user vulnerability and forming the backbone of cyber-risk computations used by insurers and policymakers.

There is limited scientific support for this form of training. Most studies point to short-term value, with the effects wearing off within hours, according to a 2013 study. This has been ignored since the very inception of awareness as a solution.

There is another problem. Security awareness isn’t a solution; it’s a product with an ecosystem of deep-pocketed vendors pushing for it. There is legislation and federal policy mandating it, some stemming from lobbying by training organizations, making it necessary for every organization to implement it and users to endure it.

Finally, there is no valid measurement of security awareness. Who needs it? What type? And how much is enough? There are no answers to these questions.

Instead, the focus is on whether users fail a phishing test without a diagnosis of the why — the reason behind the failures. Because of this, phishing attacks continue, and organizations have no idea why. Which is why our best defense has been to send out email warnings to users.

Defend With Fundamentals

The only way to defend against phishing is to start at the fundamentals. Begin with the key question: What makes users vulnerable to phishing?

The science of security already provides the answers. It has identified specific mind-level or cognitive factors and behavioral habits that cause user vulnerability. Cognitive factors include cyber-risk beliefs — ideas we hold in our minds about online risk, such as how safe it might be to open a PDF document versus a Word document, or how a certain mobile OS might offer better protection for opening emails. Many such beliefs, some flawed and others accurate, govern how much mental attention we pay to details online.

Many of us also acquire media habits, from opening every incoming message to rituals such as checking emails and feeds the moment we awake. Some of these are conditioned by apps; others by organizational IT policy. They lead to mindless reactions to emails that increase phishing vulnerability.

There is another, largely ignored, factor: suspicion. It is that unease when encountering something; that sense that something is off. It almost always leads to information seeking and, armed with the right types of knowledge or experience, leads to deception-detection and correction.

It did for Robert Mueller, the former head of the FBI. After entering his banking information in response to an email request, he stopped before hitting Send. Something didn’t seem right. In the momentary return to reason caused by suspicion, he realized he was being phished, and he changed his banking passwords.

By measuring suspicion along with the cognitive and behavioral factors leading to phishing vulnerability, organizations can diagnose what makes users vulnerable. This information can be quantified and converted into a risk index, with which they can identify those most at risk, the weakest links, and protect them better.

Doing this will help us defend users based on a diagnosis of what they need, rather than a training approach that’s being sold as a solution — a paradigm that we know doesn’t work.

After billions spent, our best approach remains sending out email warnings about incoming attacks. Surely, we can do better. By applying the science of security, we can. And we must — because spear-phishing presents a clear and present danger to the Internet.

*A version of this article appeared here: https://www.darkreading.com/vulnerabilities-threats/time-to-change-our-flawed-approach-to-security-awareness

Photo by Markus Spiske on Unsplash

In 2016, Lazarus, a notorious hacking group, aimed to steal a billion dollars through the SWIFT interbank communication system. How did the group do it? Social engineering.

Using an innocuous email purporting to be from a job applicant, the hackers had gained entry into Bangladesh’s central bank system almost a year earlier. Once in, they learned how SWIFT (the Society for Worldwide Interbank Financial Telecommunication) worked and began to transfer a billion dollars out of the bank’s account at the Federal Reserve Bank of New York. The heist was accidentally discovered when a bank staffer rebooted a hacked printer, which spit out the New York Fed’s confirmation messages sitting in its queue. This stalled the hack, but not before $81 million was stolen.

Lazarus Group members were from North Korea. Its hackers, given their limited access to computing, aren’t the best. Russia’s are. They have developed some of the most potent malware we have seen yet. And if China were to team up with Russia, and there is evidence it is likely to, then we are in for some increasingly brazen attacks.

For context, every major hack in the past decade has origins in one of these nations. Russian hackers slipped malicious code into SolarWinds’ Orion program and got access to the Pentagon and the Cybersecurity and Infrastructure Security Agency (CISA), the DHS office responsible for protecting federal networks. Most ransomware also has roots in Russia. Estimates are that one in three organizations globally is a victim of these attacks, and they are enormously lucrative for hackers. Last year, the meat packer JBS paid $11 million in ransom; Colonial Pipeline paid $5 million. Some of it was recovered, but all of us paid through increased prices. And almost all of this involved social engineering.

Add to this the hacking prowess of China. Data stolen from sources as varied as the Office of Personnel Management (OPM) and every major retailer can be traced to China. According to reports, sophisticated data-mining operations there are helping Russians craft highly persuasive social engineering attacks.

Growing Russian Hacker Threat
Once isolated and cut off from banking systems such as SWIFT, Russia will inevitably turn more sharply toward hacking. And if the country’s currency implodes further and it no longer cares about the rules-based global economy, there will be no way to hold it to account, and disruptions will increase. We will end up paying through ransom payments, supply shortages, and higher prices. We have to stop this at its source by protecting users — all of us — the primary conduit through which malware gets into organizations.

While at long last two major cybersecurity bills mandating ransomware reporting are being considered by Congress, the defense of users is still being ignored. That’s because our cybersecurity defense relies on technology vendors, and the tech sector’s motivation is to develop more technology. We today have more proprietary technology, with more licenses being sold, than ever before. Bank of America, which a decade ago was spending $400 million on cybersecurity, is now spending a billion dollars. And after all that, thousands of the bank’s California customers were still hacked last year.

How Do We Prevent Cyberattacks?
We need to change this paradigm. We need to invest in open source tools that are developed through private-public partnerships and make licenses available free of charge for at least the first five years to all organizations. This way, they can be applied widely, openly tested, and their value in organizational security can be ascertained.

The same extends to user training — one of the most widely applied proactive cybersecurity defenses against spear-phishing. Almost all training today is left to vendors, which offer many fee-based training programs. But how good is any of this? There is little data from cybersecurity firms on their effectiveness. This withholding of data has covered up inefficiencies in training, which research studies repeatedly point out, and is extremely dangerous because the training programs give organizations a false sense of readiness.

Audits Are Needed
We need audits of organizational training, conducted by independent groups that aren’t motivated by the possibility of selling something more. CISA could set up such a team in the federal government that demonstrates how this can be accomplished. This can serve as a blueprint for IT managers in organizations, who are naturally risk-averse and less inclined to allow anyone to peer into their performance.

Finally, we need to get our netizens prepared for what’s coming. Like the civil defense drills we performed in the 1970s, we need to have cybersecurity drills that make everyone adept at dealing with social engineering. Everyone should have access to free security training and open source backup and threat-detection tools. Organizations should make multifactor authentication the default on all online services. The same goes for credit and identity protection. All of our credit should be locked by default, and credit monitoring, which is a fee-based service, should be free.

Stopping cyberattacks is no longer an option. It is an existential requirement. We may not be able to put our boots on the ground to fight the Russians, but we must ensure that neither our data nor our money help fund their war efforts.


*A version of this post was published in Dark Reading

Many are starting to say that the pandemic is near its end. That this is the last strain, the final gasp of the virus. But is it the end of the pandemic? Or is it, as Churchill once said, just the end of the beginning?

The virus, now in its third year, has infected people on every continent and killed over 5 million people. It has kept mutating, with each version leaving a fresh trail of infections, crippled healthcare systems, and destroyed families. The latest mutation appears less lethal, but even before this strain appeared, many of us began suffering COVID fatigue: we were looking ahead to the past coming back, to where things were before the pandemic, when meetings were mostly face-to-face, everyone commuted to work, and work was where most adults the world over spent their waking hours.

Photo by Duncan Kidd on Unsplash

I often imagine a farmer waking up somewhere in Poland in January 1940, looking at the smoldering remains of his farm after the advancing German forces destroyed everything in their wake. I can see him shaking his head in disbelief, hoping that the worst was behind him; that things would go back to where they were; that somehow the war’s madness would soon be over; and that his life would likely return to its old routine.

But it didn’t. Ever. What began in Poland would soon engulf the world, eventually costing millions of lives and causing untold misery. By war’s end in 1945, everything had changed, a change ushered in by the technologies used in the war effort. Soon after the war came highways, intercontinental flights, and suburban life. Women entered the workforce in large numbers, more people went to college than ever before, American corporations became multinationals, and a new world order, shaped by global trade and ballistic missiles, emerged. Gone was the idyll of the simple farm and its ability to sustain generations.

Are we in a similar space with COVID-19? 

Might we be like the Polish farmer, and like people during many calamities, suffering from a form of historical shortsightedness? A longing for the past, caused by a lack of agency, that makes us begrudgingly adjust while hoping that soon, today, tomorrow, or in just a few more weeks, it will all be back to living and working as we once did, oh not so long ago.

Or maybe there is something less historical, perhaps even natural. Evolutionary theory suggests that organisms reacting to massive changes in the environment seldom return to a former state, even if the conditions are reversed. It is a law of irreversibility that comes from developing adaptations. And many of us, as individuals and organizations, have adapted how we live, work, and learn during the pandemic.

These have ushered in changes that are likely to continue.

Photo by Annie Spratt on Unsplash

For one thing, the organizational landscape might forever be different. The culture of urban work-life that was shaped by the second world war, where most adults worked in tall office buildings and small cubicles, has adapted into one in which most adults partly or wholly work from home. The challenge for organizations would be to accept this new reality, which many are unwilling to do. Organizations will need to find novel ways to develop a shared culture and keep people vested in the organization’s mission in the absence of the interpersonal interactions through which camaraderie and a shared vision organically develop. This would likely increase demand for off-site meeting spaces and shared workspaces and give rise to a whole new world of work-at-home computing and personal services.

Photo by Mohammad Shahhosseini

Another disruption has been to the system of education, especially higher education in the US, which for decades has focused on expanding the campus model even as its taxpayer support has shrunk. Universities across the nation have made up their budgetary deficits by increasing tuition, making students shoulder the financial burden. This has squeezed family budgets and saddled students with crippling debt, which has risen to historic levels in the past decade. Already, campuses nationwide are reporting the lowest-ever undergraduate enrollment rates, likely because of uncertainty about the future. But rather than embrace this new reality, throughout the pandemic, colleges and universities have been trying to get students back onto campus, to get back to things as they once were. Some have even expanded their campuses.

The need of the day is for more virtual programs, maybe even hybrid offerings, with perhaps just a part of the degree requiring a stay on campus. This would reduce the cost of education, making higher education more affordable and accessible. Like it or not, higher education has changed. In the coming years, universities the world over will expand their online offerings, providing students competitive alternatives, and it’s time the American public university system shaped up.

Finally, the pandemic has changed how people live. With everyone spending more time at home, many are moving out of cramped city apartments to neighboring boroughs and cities with larger, more spacious housing stocks. Consumption is shifting in ways not seen since the end of the second world war. People are buying electric cars and using touchless payment systems, home delivery services, and fitness apps. Office buildings in central business districts are becoming less attractive, as are long commutes in gas-guzzling automobiles. A new world order is emerging, much as one did after the second world war, with green energy, silicon chips, and cybersecurity becoming the new theaters of competition and conflict.

Photo by Bob Osias on Unsplash

It was technology that reshaped the world after the second world war. Thanks to it, the war’s destruction was followed by creative expansion and prosperity the likes of which we had never seen before. Now technology is doing the same. From virtual education to remote work, electric cars, and bitcoin, the disruptions in business, finance, and our way of life are just starting, and a metaverse of opportunities is coming online. The world before the pandemic is prologue. The world of tomorrow, filled with opportunity, is here, as long as we accept it.

*An earlier version of this piece was published in Medium and LinkedIn.

An earlier version of this post appeared on CNN

By now, we have all heard about last week’s Colonial Pipeline ransomware attack, which caused a shutdown of the 5,500-mile pipeline responsible for carrying fuel from refineries along the Gulf Coast to New Jersey. The disruption stranded gasoline supplies across half the East Coast, raised gas prices at the pump, and led some states to preemptively declare an emergency.

After six days, the company announced that the pipeline restarted operations Wednesday evening and that it’ll take several days for service to return to normal. But Colonial’s information technology (IT) department — and the cybersecurity community as a whole — could have ensured this never happened.

The attack was stoppable because ransomware isn’t new. By 2015, ransomware was already leaving a trail of corrupted data from victims all over. The infamous Sony Pictures hack in late 2014 was due to it, and there had already been attacks on a string of hospitals and law firms. In 2016, I wondered if that would be the year of online extortion.

I was wrong because it wasn’t just 2016 — it’s been every year since.

In 2020, nearly 2,400 local governments, health care facilities and schools were victims of ransomware. The average downtime because of it was 21 days, with an average payment of $312,493 — a 171% increase over 2019, according to an analysis by the Institute for Security and Technology.

We cannot afford this. Neither at the gas pump nor as a nation where most are already economically strained.

I also offered a series of suggestions: fixing the technical problems (by better securing networks and computing systems), improving national and international law enforcement efforts (by centralizing breach reporting, coordinating remediation, and strengthening legislation), and fixing the user problem (by applying social science to educate users and improve their cyberhygiene). My hope was to get policymakers and the cybersecurity community to focus on these issues — because doing so would have stopped this attack from ever happening.

Sadly, the cybersecurity community focused on what they like to focus on — technology.

Like the parable of the man searching for his keys under the streetlight rather than near the car where he’d lost them, the security community’s efforts focused on the hackers’ technical sophistication, the complexity of their malware, and the byzantine lines of code they had to rewrite. Their solutions were commensurately complex: more complex encryption algorithms, more granular network monitoring, and more layers of software.

On the policy front, late last month, a Ransomware Task Force made up of representatives of technology firms submitted an 81-page report to President Joe Biden. Priority recommendations included the need for aggressive enforcement, establishing cyberresponse and recovery funds, and regulating cryptocurrency. But other than creating a national awareness campaign and providing more security awareness training in organizations, little was proactively called for to protect the primary point of ingression — users.

All of these — be it the technical fixes or the policy recommendations — while pertinent and necessary to adopt, merely stop hackers after they are in the network or prosecute them after the fact.

Ransomware attacks occur because of how easy it is for the attacker to come into a computing network. They do so using spear phishing that deceives users into clicking on a malicious hyperlink or attachment. It’s how almost 50% of all ransomware gets a foothold into networks, according to Verizon’s 2020 Data Breach Investigations Report.

And according to the FBI’s Internet Crime Complaint Center (IC3), the number of phishing attacks doubled in 2020 as more of us work from home, away from organizational IT protections. Hackers stole people’s identity, corrupted data and extorted money — with estimated losses of $4.2 billion.

All this while we tried to fight technology fires after they had raged, or to strike back with even more technology.

The only way to stop spear phishing, and with it ransomware, is to deal with what we have ignored — or merely paid lip service to — the user. We need more than just media awareness campaigns. Because by now, every user is aware of phishing. Besides, much of our present training teaches users about attacks that have occurred, not the attacks that are yet to come, because no one, not even people in IT, know what they will be.

We need to invert the cybersecurity paradigm. Our policies cannot work from the technology organizations downwards, where standards and policies are created by a software manufacturer, a security company, or a federal organization. IT security is not just a technological problem that can be gunned down with bigger technological bullets. It’s a user problem — one that can only be resolved by understanding users: who is at risk, why they are at risk, and how to help them reduce that risk.

This requires us to put users first and work upwards towards a solution. We need to apply the social science of users — much of which already exists — to the problem. We already know the triggers in emails and messages that lead to deception. We know how users’ thinking, their cyberrisk beliefs, and their technology habits influence spear phishing detection. And we also know how to measure and assess their levels of cyberhygiene.

But what we haven’t done is apply this knowledge to protecting users. We can use it to build a user risk scoring system. This can work like a financial credit score, only for cyberrisk.

Such scores would quantify risk and help users understand their level of vulnerability. It would also help organizations understand what users lack so they can be better protected. For instance, if someone lacks awareness or knowledge in an area, they can be provided this. However, if someone suffers from poor email-use habits, this can be addressed by changing their work patterns and improving their email hygiene.

In this way, policies, protections, even data access can be premised on user risk scores. And because these scores are based on the users’ mental and behavioral patterns, the scores are naturally impervious to changes in technology, making them future-proofed.
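Here is a minimal sketch of that idea: quantify a few behavioral components, map the weighted total onto a familiar credit-score-style band, and surface the intervention that targets the weakest component. The component names, weights, and band are all hypothetical:

```python
# A minimal sketch of a credit-score-style cyber-risk score.
# Components, weights, and interventions are illustrative assumptions.
COMPONENTS = {
    # component: (weight, intervention when this component is weak)
    "awareness":      (0.4, "provide targeted knowledge and awareness material"),
    "email_habits":   (0.4, "change work patterns to improve email hygiene"),
    "device_hygiene": (0.2, "enforce updates, backups, and MFA"),
}

def cyber_risk_score(ratings: dict[str, float]) -> tuple[int, str]:
    """ratings maps component -> 0.0 (poor) .. 1.0 (excellent).
    Returns a 300-850 style score plus the top remediation suggestion."""
    total = sum(weight * ratings[c] for c, (weight, _) in COMPONENTS.items())
    score = int(300 + 550 * total)  # mapped onto a familiar credit-score band
    weakest = min(COMPONENTS, key=lambda c: ratings[c])
    return score, COMPONENTS[weakest][1]

score, advice = cyber_risk_score(
    {"awareness": 0.8, "email_habits": 0.3, "device_hygiene": 0.6}
)
print(f"Cyber-risk score: {score} -> next step: {advice}")
```

Because the inputs are behavioral rather than tied to any particular product, a score computed this way stays meaningful as the underlying technology changes.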

While the approach for doing this has been documented, it hasn’t been widely implemented. The reason is that the security community, made up mostly of engineers, doesn’t focus on users. To the engineer’s hammer, everything technical is a nail. Spear phishing is considered a user problem — an external factor to the security model. And we have suffered the ramifications of this. It is why the Sony Pictures hack happened in 2014. It is why the Colonial Pipeline hack occurred. And it is why such attacks will continue, until we change the security paradigm.

One of the many lessons of the pandemic is that simple solutions based on sound science work. Even as scientists applied cutting-edge pharmaceutical science to develop vaccines, simple social-behavioral solutions — wearing masks, washing hands, maintaining safe social distances — have been key to stopping the spread of Covid-19.

If we are lucky, we might just pay a small price at the gas pump because of the Colonial Pipeline ransomware attack. But there’s surely more coming. The social science fix for it already exists. The cybersecurity community must implement it.

The Colonial Pipeline hack is now making the news and many cyber security experts are providing their take on how to recover from it.

Of course, while this attack is new, such attacks aren’t. The Sony Pictures hack was also ransomware. And in 2016, many such attacks were occurring. In response to them, I’d written a piece on CNN asking whether 2016 was the year of online extortion. This was after ransomware attacks on hospitals in California and Kentucky.

I had provided pointed solutions and called for a focus on users, rather than solely on technology. After all, they are the ingress points for ransomware, which almost always comes in via spear phishing.

Unfortunately, every year since 2016 has brought bigger and more successful ransomware heists. The Verizon DBIR 2020 shows exactly how these attacks come in: through spear phishing.

And all along, we have ignored, and continue to ignore, user weaknesses and focus on the technical issues, almost always after a crippling breach.

This time, we are all paying a direct price at the gas pumps. Who knows what’s coming next?
The solutions from then are just as pertinent today. Here’s my article in CNN from 2016. [Original can be found on the CNN website]


“This week, a hospital in western Kentucky was the latest organization to fall victim to a “ransomware” attack – a class of malware that encrypts all the files on a computer, only releasing them when a ransom is paid to the hacker holding the encryption key.

In this case, the hospital did not pay up. However, other hospitals, law firms, small businesses and everyday citizens have already paid anywhere from $200 to $10,000 in ransoms. Indeed, based on complaints received between April 2014 and June 2015, the FBI estimated that losses for victims from just one of these malware strains were close to $18 million.

Sadly, this year could well be worse.

Ransomware has existed for some time, the earliest dating back to the late 1980s. Back then, most was developed by enthusiasts – individuals testing out their skills. In contrast, today’s ransomware is often developed by global software teams that are constantly updating their codes to evade anti-virus software and selling them as off-the-shelf products.

Already, newer strains appear capable of infecting mobile devices, of encrypting files stored on cloud servers through mapped, virtual drives on computers, and of transitioning to the “Internet of Things” – infecting gadgets like watches and smart TVs that are going online. In the near future, an attack that locks us out of our car, or worse yet in it while we drive, and demands an immediate ransom is increasingly possible.

Thanks to the Internet, this malware-for-hire is available to virtually anyone, anywhere with criminal intent. Making things easier for hackers is the availability of Bitcoins, the online currency that makes monetary transactions untraceable. And making things even easier for them is our inability to stop spear phishing – those innocuous looking emails whose attachments and hyperlinks conceal the malware.

All this makes anyone with minimal programming skills and a free email account capable of inflicting significant damage, and with everyone from presidents to pensioners using emails today, the virtual pool of potential victims is limitless. No surprise then that cybersecurity experts believe that 2016 could well be the “Year of Online Extortion.”

But we can stop these insidious attacks, if everyone – individuals, organizations and policy makers – works towards a solution.

First, everyone must be taught to spot, sequester, and deal with spear phishing emails. This requires cybersecurity education that is free and widely available, which is presently not the case. While different training programs exist, most cater to large organizations, and are outside the reach of households, senior citizens and small businesses, who remain vulnerable.

What we also need is training that helps people develop better “cyber hygiene.” This includes teaching people to frequently update anti-virus software, appropriately program firewalls, and routinely back up their computers on discs that are then disconnected from the network. In addition, people should be taught how to deal with a ransomware attack and stop its spread by quickly removing connected drives and disconnecting from the Internet.

Second, organizations must do more to protect computer networks and employees. Many organizations continue to run legacy software, often on unsupported operating systems that are less secure and far easier for hackers to infiltrate. Nowhere is this problem more pressing than in small businesses, health care facilities, and state and federal government institutions, which is why they are the sought-after targets of ransomware.

Besides updating systems, organizations need to overhaul the system of awarding network privileges to employees. The present system is mostly binary, giving access to employees based on their function or status in the organization. Instead, what we need is a dynamic network-access system that takes into account the employees’ cyberrisk behaviors, meaning only employees who demonstrate good cyber hygiene are rewarded with access to various servers, networks, and programs through their devices.

Finally, policy makers must work to create a cyber crime reporting and remediation system. Most local law enforcement today is ill-equipped to handle ransomware requests, and harried victims usually have limited time to comply with a hacker’s demand. Many, therefore, turn to their family and friends, who themselves have limited expertise. Worse yet, some have no choice but to turn to the hacker, who in many cases provides a chat window to guide the victim through the “remediation” process.

What we urgently need is a reporting portal that is locally available and staffed by cybersecurity professionals, so people can quickly report a breach and get immediate support. A model for such a system already exists: the 311 system for reporting nonemergency municipal service issues. It has already been adopted by many cities in the nation and allows for reporting via email, telephone, and smartphone apps. Strengthening this system by providing it the necessary resources to hire and train cybersecurity professionals could go a long way towards stopping ransomware attacks that are now making their way past Main Street to everyone’s homes.

Perhaps the best way to look at the problem is this: How safe would we feel in a city where people are routinely being held hostage? Well, cyberspace is our space. And we have to make it safe.”

Photo by Marten Bjork

Verizon, AT&T, T-Mobile–I hope you are reading this. Mobile telephony, your primary business model of enabling phone calls and text messaging, is dying.

Your internal data likely says otherwise. Growth just appears to be everywhere: 5G’s enhanced mobile broadband speeds are coming alive, more people are subscribing with more gadgets, and some 60% of Americans are in mobile-only households–phenomena that were inconceivable two decades ago. Not to mention, the surge in network use due to the pandemic.

With this kind of growth, why would I say mobile telephony is dying? There are a few good reasons.

Text Messaging and Messaging Apps Reign Supreme

For one, people have stopped calling each other on their phones and are instead messaging. Note that I said messaging, which uses the Internet, not texting, which needs your network.

Messaging is increasingly popular, even preferred. You can be just as professional on it as you can be informal, and you can express your personality more richly using emoticons, emojis, memojis, tapbacks, and more. And unlike phone calls, you don’t need to ask about the weather; nor do you need the salutations, signatures, and obligatory valedictions we use in email.

Messaging can be short, unintrusive, and direct. So it works just as well for messaging colleagues down the hallway, family members in the other room, and friends in faraway places. For the security-minded, leading services are end-to-end encrypted, something that neither traditional texting nor its newer RCS incarnation in Google Messages supports.

Photo by G-R Mottez on Unsplash

Because of this, mobile messaging has been growing exponentially and, after email, accounts for roughly half of all mobile Internet usage. More importantly, some 80 percent of millennials, the generation that came of age with social media and iDevices, use messaging apps like Facebook Messenger on mobile devices.

People Prefer Video Calls Instead of Phone Calls

Secondly, when people do call, they increasingly use video rather than voice, especially for group calls. Video calling showed an estimated 175 percent increase in usage over the last three years, with one in four millennials using it daily. And this was before the pandemic made it a necessity, ushered in newer, arguably easier-to-use apps such as Zoom, and made group calls for work, school, even television interviews mainstream. While group video calls can be made on mobile devices, they are better on larger-screened laptops and tablets, which is bad news if you are a cellular provider, because none of these, again, require your service.

Not so long ago, the cellular providers dictated what people could use on their network. Today, the power has shifted to the gadget makers, who provide the cameras, the noise-canceling earphones, and the ability to seamlessly switch between devices when making video calls, and who thereby shape the experience. Because of this, the mobile phone number is becoming less important, while the device, and how well it can sync up with the user’s other devices, is central to the quality of experience.

Finally, making things worse is that cellular networks have not been able to stem abuse on their networks. In 2020 alone, 58 billion robocalls were made to American residents’ mobile phones, an average of 80.6 calls per person — a 22 percent increase over the previous year. Many are phishing calls and texts that appear to come from local area codes and attempt to deceive users into paying fraudulent IRS dues, threaten various dire legal actions, or lure users into opening malicious hyperlinks in text messages.

Phishing and Robocalls Erode Trust in Cellular-Based Calling

Phishing is enabled by Internet-based telephony, which allows attacks to be fomented from anywhere in the world while evading prosecution. Also enabling them is our caller-ID system, which was originally developed for the home phone network at a time when the few providers could all be trusted. Caller ID thus assumed all callers were honest and displayed whatever number they programmed in. Today, this makes it possible for anyone using computerized phone-dialers to obfuscate the true source of phone calls and fake the phone numbers that show up on our caller IDs.

The phone carriers, however, don’t treat these calls as the nuisance they are. Even though they have developed apps to block such calls, they charge an additional fee for them. But consumers, long sold on different cellular networks’ delivery quality through “Can you hear me now?” promises, are unwilling to pay for something they believe the carriers should handle. So rather than pay for the app, users keep their mobile devices on silent mode and ignore incoming calls and texts. For many millennials, this likely furthers the shift to messaging and video calling.

The risk of being silenced, especially by this important consumer psychographic, could have a profound impact on the future of the cellular network. In the past, consumers in similar age cohorts have been quick to move away from services that didn’t put their interests ahead of the organization’s bottom line.

Much like cellular networks today, the home phone networks reigned supreme back in the 1990s. Their primary business was long distance, for which they kept charging exorbitantly. In 1997, long-distance rates stood at 12–25 cents per minute, up 25 percent since 1992. The future looked so bright that the former head of AT&T’s long-distance business, Joseph P. Nacchio, remarked: “Long distance is still the most profitable business in America, next to importing illegal cocaine.”

At the time, there were just 50 million mobile subscribers, all of whom also had a home phone. Within a few years, that generation of 22-to-40-year-olds quickly adopted Internet and mobile telephony, which all but killed the traditional phone business.

Today’s millennials are not only in the same age cohort; they are also now the largest generation of American residents. They have already dropped their home phones for mobile, and their cable subscriptions for streaming video. Their cellular phone plans may well be next.


*A version of this post appeared here: https://blog.ipswitch.com/mobile-telephony-is-dying-heres-why

**Follow this link for source of photographs

Vulnerabilities in cloud-sharing services stem from the use of multiple cloud services, which forces users to keep adapting and adjusting their expectations.

In part 1, I discussed some major vulnerabilities caused by using cloud-sharing services. These included routine cloud usage leading users to open emails from unknown addresses; to comply with form emails that have no personalized messages or subject lines; to click on unverifiable hyperlinks in emails; and to violate cyber safety training that cautions against all of the aforementioned actions. The security flaws in cloud sharing lie not in the user but in the developmental focus of the various cloud services.

Different Authentication Considerations for Cloud Services

Some cloud services like Google Drive are focused on integrating their cloud offerings with their alphabet-soup of services. Others, such as Dropbox, are focused on creating a stand-alone cross-platform portal. Still others, such as Apple, are focused on increasing subscription revenues from their device user population.

In consequence, not only does each provider prioritize a different aspect of the sharing process, but this larger goal also comes before the user, who is treated as little more than a potential revenue target.

Changing this means more than implementing more robust authentication protocols and encryption standards. These are necessary, but they do little to reduce vulnerabilities rooted in the user adaptation process. If anything, they force users to adapt to even more varying implementations. Improving resilience in cloud platforms cannot be done piecemeal; it requires a unified effort by cloud service providers.

Integrating Different Cloud Services

Here, industry groups such as the Cloud Security Alliance can help by bringing together the various cloud providers and taking a holistic look at how users adapt to different cloud-use environments, and at the risks this creates.

A big part of this endeavor will involve getting to the root of users’ Cyber Risk Beliefs (CRB): their mental assumptions about the risks of their online actions. We know from my research that many of these risk beliefs are inaccurate. For instance, many users mistakenly construe the HTTPS symbol to mean a website is authentic, or believe a PDF document is more secure than a Word document because they cannot edit it.

We need to understand how CRBs manifest themselves in cloud environments. This involves answering questions such as: Do users believe that certain cloud services are more secure than others? Do they think such services make sharing documents, or sharing certain types of documents, safer?

Answers to such questions will reveal how users mentally orient to different cloud services, what they store on them, and how they react to files shared through them. For instance, if users believe a specific portal makes documents safe, they might be more willing to open files that purportedly come from that portal in a spear-phishing email. Beliefs such as these might also influence which apps users enable on cloud portals, what they store online, and how careful they are about their stored data. Because CRBs influence the adaptive use of different cloud services, understanding them can help design a safer cloud user experience.

Improving user security on cloud platforms also requires the development of novel technical constraints. Since many social engineering attacks conceal malware in hyperlinks, cloud portals need to collaborate and develop a virtualized space in which all shared links are generated and deployed. This way, spoofing of hyperlinks or leading users to watering hole sites is far more difficult because the domains from which the links are generated would be more uniform and recognizable to users.

User Interfaces of Cloud Services

Yet another focus needs to be on improving user interface (UI) design. For now, the UI of file-sharing programs prioritizes convenience rather than safety. This is a bias that permeates the technology design community, and its most marked manifestation is in mobile apps where it is hard for users to assess the veracity of cloud-sharing emails and embedded hyperlinks.

To change this, UI should foster more personalization of shared files. Users shouldn’t be permitted to share links without messages or subject lines, and they should be prompted to include a signature in the message. The design must also deemphasize the actions users have to take and emphasize review, especially on mobile devices. This can be achieved by highlighting the user’s personalized message, by displaying the complete URL rather than shortening it, and by requiring passwords to open shared documents.
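As one concrete illustration of the “display the complete URL” principle, here is a minimal sketch that resolves where a shortened or wrapped link actually lands before presenting it to the user. It assumes the third-party requests library, and the short link is hypothetical:

```python
# A minimal sketch (not any vendor's UI code): follow a link's
# redirects without downloading the page body, and surface the
# final destination instead of the shortened form.
import requests

def expand_url(short_url: str, timeout: float = 5.0) -> str:
    """Return the final destination URL after following redirects."""
    resp = requests.head(short_url, allow_redirects=True, timeout=timeout)
    return resp.url

final = expand_url("https://example.com/s/abc123")
print("This link actually points to:", final)
# A safety-first UI would display `final`, with its domain made
# prominent, rather than the shortened form the sender supplied.
```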

UI design could also focus on integrating file-sharing portals with email services, so that links aren’t being generated from the portal directly, but are created from within email accounts that people are familiar with. This way, emails aren’t being sent from unknown virtual in-boxes, and personalization becomes easier.

The Cloud Is A Victim Of Its Own Success

Finally, our extant user training on email security is at odds with end-user cloud-sharing behavior. Using the cloud today entails violating training-based knowledge, which over time changes user perceptions of the validity of training. We must update training to emphasize safety in sharing and receiving cloud files. This means fostering newer norms and best practices, such as using passwords and personalized messages while sharing. It also means teaching users how to gauge whether a shared hyperlink is a spoof, and how to deploy such links in virtualized environments to contain any potential damage.
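One such teachable check, sketched below with Python’s standard library, is whether the URL a hyperlink displays matches the domain it actually points to. The email snippet and domains are invented for illustration:

```python
# A minimal sketch of a display-text vs. href consistency check,
# one signal a user or mail client could use to spot spoofed links.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collect (visible text, actual href) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

email_body = (
    '<p>Your file is ready: '
    '<a href="http://files.evil.example">https://drive.google.com/...</a></p>'
)

auditor = LinkAuditor()
auditor.feed(email_body)
for text, href in auditor.links:
    # If the visible text looks like a URL, its domain should match
    # the domain the link actually points to.
    if text.startswith("http") and urlparse(text).netloc != urlparse(href).netloc:
        print(f"Possible spoof: displays {text!r} but goes to {href!r}")
```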

The cloud is becoming a victim of its own success. With many more players entering the market, the user experience is getting more fragmented, enhancing vulnerabilities because of the different ways in which each platform is implemented. Today, there are hundreds of providers offering different cloud services, with many more coming online.

The industry is slated to grow even more, because we have barely tapped the overall potential market, with anywhere from 30 to 90 percent of all organizations in the US, Europe, and Asia yet to adopt the cloud. Thus, user issues are only likely to increase as more providers and users enter the space.

Correcting this is now more important than ever. Because a single major breach can erode user trust in the entire cloud experience–forever changing the cloud usage landscape.

*A version of this post appeared here: https://blog.ipswitch.com/data-security-in-the-cloud-part-2

**Photo source