
Stopping Russian Cyberattacks at Their Source [Published in Dark Reading]

Photo by Markus Spiske on Unsplash

In 2016, Lazarus, a notorious hacking group, aimed to steal a billion dollars through the SWIFT interbank communication system. How did the group do it? Social engineering.

Using an innocuous email purporting to be from a job applicant, the hackers had gained entry into Bangladesh’s central bank system almost a year earlier. Once in, they learned how SWIFT (the Society for Worldwide Interbank Financial Telecommunication) worked and began transferring a billion dollars from the Federal Reserve Bank of New York. The heist was accidentally discovered when a staffer at the bank rebooted a hacked printer, which spit out the New York Fed’s confirmation messages in its queue. This stalled the hack, but not before $81 million was stolen.

The Lazarus Group’s members were from North Korea, whose hackers, given their limited access to computing, aren’t the best. Russia’s are. They have developed some of the most potent malware we have seen yet. And if China were to team up with Russia, and there is evidence it is likely to, then we are in for increasingly brazen attacks.

For context, every major hack in the past decade has its origins in one of these nations. Russian hackers slipped malicious code into SolarWinds’ Orion program and got access to the Pentagon and the Cybersecurity and Infrastructure Security Agency (CISA), the DHS office responsible for protecting federal networks. Most ransomware also has roots in Russia. Estimates are that one in three organizations globally is a victim of these attacks, and they are enormously lucrative for hackers. Last year, the meat packer JBS paid $11 million in ransom; Colonial Pipeline paid $5 million. Some of it was recovered, but all of us paid through increased prices. And almost all of it involved social engineering.

Add to this the hacking prowess of China. Data stolen from sources as varied as the Office of Personnel Management (OPM) and every major retailer can be traced to China. According to reports, sophisticated data-mining operations there are helping Russians craft highly persuasive social engineering attacks.

Growing Russian Hacker Threat
Now that Russia is isolated and cut off from banking systems such as SWIFT, it is only a question of time until it turns more sharply toward hacking. And if the country’s currency implodes further and it no longer cares about the rules-based global economy, there will be no way to hold it to account, and disruptions will increase. We will end up paying through ransom payments, supply shortages, and higher prices. We have to stop this at its source by protecting users — all of us — the primary conduit through which malware gets into organizations.

While, at long last, two major cybersecurity bills mandating ransomware reporting are being considered by Congress, the defense of users is still being ignored. That’s because our cybersecurity defense relies on technology vendors, and the tech sector’s motivation is to develop more technology. We today have more proprietary technology, with more licenses being sold, than ever before. Bank of America, which a decade ago was spending $400 million on cybersecurity, is now spending a billion dollars. And after all that, thousands of the bank’s California customers were still hacked last year.

How Do We Prevent Cyberattacks?
We need to change this paradigm. We need to invest in open source tools that are developed through public-private partnerships, with licenses made available free of charge to all organizations for at least the first five years. This way, the tools can be applied widely and openly tested, and their value to organizational security can be ascertained.

The same extends to user training — one of the most widely applied proactive cybersecurity solutions against spear-phishing. Almost all training today is left to vendors, which offer many fee-based training programs. But how good is any of this? There is little data from cybersecurity firms on their effectiveness. This withholding of data has covered up inefficiencies in training, which research studies repeatedly point out, and is extremely dangerous because the training programs give organizations a false sense of readiness.

Audits Are Needed
We need audits of organizational training, conducted by independent groups that aren’t motivated by the possibility of selling something more. CISA could set up such a team within the federal government to demonstrate how this can be accomplished. This can serve as a blueprint for IT managers in organizations, who are naturally risk-averse and less inclined to allow anyone to peer into their performance.

Finally, we need to get our netizens prepared for what’s coming. Like the civil defense drills we performed in the 1970s, we need cybersecurity drills that make everyone adept at dealing with social engineering. Everyone should have access to free security training and open source backup and threat-detection tools. Organizations should make multifactor authentication the default on all online services. The same goes for credit and identity protection: all of our credit should be locked by default, and credit monitoring, which today is a fee-based service, should be free.

Stopping cyberattacks is no longer an option. It is an existential requirement. We may not be able to put our boots on the ground to fight the Russians, but we must ensure that neither our data nor our money help fund their war efforts.


*A version of this post was published in Dark Reading

The end of the beginning of COVID-19 [Published in Medium]

Many are starting to say that the pandemic is near its end. That this is the last strain, the final gasp of the virus. But is it the end of the pandemic? Or is it, as Churchill once said, just the end of the beginning?

The virus, now in its third year, has infected people on all continents and killed over 5 million people. It has kept mutating, with each version leaving a fresh trail of infections, crippled healthcare systems, and destroyed families. The latest mutation appears less lethal, but even before this strain appeared, many of us began suffering COVID fatigue: we were looking ahead to the past coming back–to where things were before the pandemic, where meetings were mostly face-to-face, where everyone commuted to work, and where the office was where most adults the world over spent their waking hours.

Photo by Duncan Kidd on Unsplash

I often imagine a farmer waking up somewhere in Poland in January 1940, looking at the smoldering remains of his farm after the advancing German forces destroyed everything in their wake. I can see him shaking his head in disbelief, hoping that the worst was behind him. That things would go back to the way they were; that somehow the war’s madness would soon be over; and that his life would likely return to its old routine.

But it didn’t. Ever. What began in Poland would soon engulf the world, eventually costing millions of lives and causing untold misery. By war’s end in 1945, everything had changed–a change that was ushered in by the technologies used in the war effort. Soon after the war came highways, intercontinental flights, and suburban life. Women entered the workforce in large numbers, more people went to college than ever before, American corporations became multinationals, and a new world order, shaped by global trade and ballistic missiles, emerged. Gone was the idyll of the simple farm and its ability to sustain generations.

Are we in a similar space with COVID-19? 

Might we be like the Polish farmer, and like people during many calamities, suffering from some form of historical shortsightedness? A longing for the past, caused by a lack of agency, that makes us begrudgingly adjust with the hope that soon, today, tomorrow, or in just a few more weeks, it will all be back to living and working as we once did, oh not so long ago.

Or maybe there is something less historical, perhaps even natural. Evolutionary theory suggests that organisms reacting to massive changes in the environment seldom return to a former state, even if the conditions are reversed. It is a law of irreversibility that comes from developing adaptations. And many of us, as individuals and organizations, have adapted how we live, work, and learn during the pandemic.

These have ushered in changes that are likely to continue.

Photo by Annie Spratt on Unsplash

For one thing, the organizational landscape might forever be different. The culture of urban work-life that was shaped by the second world war, where most adults worked in tall office buildings and small cubicles, has adapted into one in which most adults partly or wholly work from home. The challenge for organizations is to accept this new reality, which many are unwilling to do. Organizations will need to find novel ways to develop a shared culture and keep people vested in the organization’s mission in the absence of the interpersonal interactions through which camaraderie and a shared vision organically develop. This would likely increase demand for off-site meeting spaces and shared workspaces and give rise to a whole new world of work-at-home computing and personal services.

Photo by Mohammad Shahhosseini

Another disruption has been to the system of education, especially higher education in the US, which for decades has focused on expanding the campus model even as its taxpayer support has shrunk. Universities across the nation have made up their budget deficits by increasing tuition, making students shoulder the financial burden. This has squeezed family budgets and saddled students with crippling debt, which has risen to historic levels in the past decade. Already, campuses nationwide are reporting the lowest-ever undergraduate enrollment rates, likely because of uncertainty about the future. But rather than embrace this new reality, throughout the pandemic, colleges and universities have been trying to get students back onto campus, to get back to things as they once were. Some have even expanded their campuses.

The need of the day is for more virtual programs, maybe even hybrid offerings, with perhaps just a part of the degree requiring a stay on campus. This would reduce the cost of education, making higher education more affordable and accessible. Like it or not, higher education has changed. In the coming years, universities the world over will expand their online offerings, providing students competitive alternatives, and it’s time the American public university system shaped up.

Finally, the pandemic has changed how people live. With everyone spending more time at home, many are moving out of cramped city apartments to neighboring boroughs and cities with larger, more spacious housing stock. Consumption is shifting in ways not seen since the end of the second world war. People are buying electric cars and using touchless payment systems, home delivery services, and fitness apps. Office buildings in central business districts are becoming less attractive, as are long commutes in gas-guzzling automobiles. A new world order is emerging, much as it did after the second world war, with green energy, silicon chips, and cybersecurity becoming the new theaters of competition and conflict.

Photo by Bob Osias on Unsplash

It was technology that reshaped the world after the second world war. Thanks to it, the war’s destruction was followed by creative expansion and prosperity the likes of which we had never seen before. Now technology is doing the same. From virtual education to remote work, electric cars, and bitcoin, the disruptions in business, finance, and our way of life are just starting—and a metaverse of opportunities is coming online. The world before the pandemic is prologue. The world of tomorrow, filled with opportunity, is here, as long as we accept it.

*An earlier version of this piece was published in Medium and LinkedIn.

The failures that led to the Colonial Pipeline ransomware attack [Published in CNN]

An earlier version of this post appeared on CNN

By now, we have all heard about last week’s Colonial Pipeline ransomware attack, which caused a shutdown of the 5,500-mile pipeline responsible for carrying fuel from refineries along the Gulf Coast to New Jersey. The disruption stranded gasoline supplies across half the East Coast, raised gas prices at the pump, and led some states to preemptively declare an emergency.

After six days, the company announced that it had restarted pipeline operations Wednesday evening and that it’ll take several days for service to return to normal. But Colonial’s information technology (IT) department — and the cybersecurity community as a whole — could have ensured this never happened.

The attack was stoppable because ransomware isn’t new. By 2015, ransomware was already leaving a trail of corrupted data from victims all over the world. The infamous Sony Pictures hack in late 2014 was due to it, and there had already been attacks on a string of hospitals and law firms. In 2016, I wondered if that would be the year of online extortion.

I was wrong because it wasn’t just 2016 — it’s been every year since.

In 2020, nearly 2,400 local governments, health care facilities and schools were victims of ransomware. The average downtime because of it was 21 days, with an average payment of $312,493 — a 171% increase over 2019, according to an analysis by the Institute for Security and Technology.

We cannot afford this. Neither at the gas pump nor as a nation where most are already economically strained.

I also offered a series of suggestions: fixing the technical problems (by better securing networks and computing systems), improving national and international law enforcement efforts (by centralizing breach reporting, coordinating remediation, and strengthening legislation), and fixing the user problem (by applying social science to educate users and improve their cyberhygiene). My hope was to get policymakers and the cybersecurity community to focus on these issues — because it would have stopped this attack from ever happening.

Sadly, the cybersecurity community focused on what they like to focus on — technology.

Like the parable of the man searching for his keys under the streetlight rather than near his car where he’d lost them, the security community’s efforts focused on the hackers’ technical sophistication, the complexity of their malware and the byzantine lines of code they had to rewrite. Their solutions were commensurately complex: more complex encryption algorithms, more granular network monitoring and more layers of software.

On the policy front, late last month a Ransomware Task Force made up of representatives of technology firms submitted an 81-page report to President Joe Biden. Priority recommendations included the need for aggressive enforcement, establishing cyberresponse and recovery funds, and regulating cryptocurrency. But other than creating a national awareness campaign and providing more security awareness training in organizations, little was proactively called for to protect the primary point of ingress — users.

All of these — be they the technical fixes or the policy recommendations — while pertinent and necessary to adopt, merely stop hackers after they are in the network or prosecute them after the fact.

Ransomware attacks occur because of how easy it is for an attacker to get into a computing network. Attackers do so using spear phishing that deceives users into clicking on a malicious hyperlink or attachment. It’s how almost 50% of all ransomware gets a foothold into networks, according to Verizon’s 2020 Data Breach Investigations Report.

And according to the FBI’s Internet Crime Complaint Center (IC3), the number of phishing attacks doubled in 2020 as more of us worked from home, away from organizational IT protections. Hackers stole people’s identities, corrupted data and extorted money — with estimated losses of $4.2 billion.

All this while we tried to fight technology fires after they had raged or strike back with even more technology.

The only way to stop spear phishing, and with it ransomware, is to deal with what we have ignored — or merely paid lip service to — the user. We need more than just media awareness campaigns, because by now every user is aware of phishing. Besides, much of our present training teaches users about attacks that have occurred, not the attacks that are yet to come, because no one, not even people in IT, knows what those will be.

We need to invert the cybersecurity paradigm. Our policies cannot work from the technology organizations downward, where standards and policies are created by a software manufacturer, a security company or a federal organization. IT security is not just a technological problem that can be gunned down with bigger technological bullets. It’s a user problem — one that can only be resolved by understanding users: who is at risk, why they are at risk, and how to help them reduce it.

This requires us to put users first and work upward toward solutions. We need to apply the social science of users — much of which already exists — to the problem. We already know the triggers in emails and messages that lead to deception in users. We know how users’ thinking, their cyberrisk beliefs and their technology habits influence spear-phishing detection. And we also know how to measure and assess their levels of cyberhygiene.

But what we haven’t done is apply this toward protecting users. We can do this by using the accumulated knowledge to build a user risk scoring system. This can work like a financial credit score, only for cyberrisk.

Such scores would quantify risk and help users understand their level of vulnerability. They would also help organizations understand what users lack so they can be better protected. For instance, if someone lacks awareness or knowledge in an area, that can be provided. If someone suffers from poor email-use habits, that can be addressed by changing their work patterns and improving their email hygiene.

In this way, policies, protections, even data access can be premised on user risk scores. And because these scores are based on the users’ mental and behavioral patterns, they are naturally impervious to changes in technology, making them future-proof.
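To make the idea concrete, here is a minimal sketch of what such a scoring function might look like. The factors, weights, and scale are hypothetical illustrations of the approach, not a validated instrument:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Hypothetical factors, each normalized to 0.0 (worst) .. 1.0 (best)
    phishing_awareness: float   # e.g., performance on simulated phishing tests
    email_habits: float         # e.g., how often links are verified before clicking
    cyberrisk_beliefs: float    # accuracy of beliefs about online risk
    patch_hygiene: float        # update, backup, and device-care discipline

# Illustrative weights; a real instrument would calibrate these empirically.
WEIGHTS = {
    "phishing_awareness": 0.35,
    "email_habits": 0.30,
    "cyberrisk_beliefs": 0.20,
    "patch_hygiene": 0.15,
}

def cyber_risk_score(user: UserProfile) -> int:
    """Map behavioral factors onto a 300-850 scale, like a credit score."""
    weighted = sum(getattr(user, name) * w for name, w in WEIGHTS.items())
    return round(300 + weighted * 550)

# A user with strong awareness but poor email habits lands mid-scale,
# pointing the organization toward habit change rather than more training.
alice = UserProfile(phishing_awareness=0.9, email_habits=0.3,
                    cyberrisk_beliefs=0.6, patch_hygiene=0.7)
print(cyber_risk_score(alice))  # about 646
```

Because the inputs are behavioral rather than technological, the same scoring function survives changes in the underlying systems.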

While the approach for doing this has been documented, it hasn’t been widely implemented. The reason is that the security community, made up mostly of engineers, doesn’t focus on users. To the engineer’s hammer, everything is a nail. Spear phishing is considered a user problem — an external factor to the security model. And we have suffered the ramifications of this. It is why the Sony Pictures hack happened in 2014. It is why the Colonial Pipeline hack occurred. And it is why such attacks will continue until we change the security paradigm.

One of the many lessons of the pandemic is that simple solutions based on sound science work. Even as scientists applied cutting-edge pharmaceutical science to develop vaccines, simple social-behavioral solutions — wearing masks, washing hands, maintaining safe social distances — have been key to stopping the spread of Covid-19.

If we are lucky, we might just pay a small price at the gas pump because of the Colonial Pipeline ransomware attack. But there’s surely more coming. The social science fix for it already exists. The cybersecurity community must implement it.

The Colonial Pipeline Hack Was Avoidable

The Colonial Pipeline hack is now making the news and many cyber security experts are providing their take on how to recover from it.

Of course, while this attack is new, such attacks aren’t. The Sony Pictures hack was also ransomware. And in 2016, many such attacks were occurring. In response to them, I wrote a piece on CNN asking whether 2016 would be the year of online extortion. This was after ransomware attacks on hospitals in California and Kentucky.

I had provided pointed solutions and called for a focus on users, rather than solely on technology. After all, they are the ingress points for ransomware, which almost always comes in via spear phishing.

Unfortunately, every year since 2016 has led to bigger and more successful ransomware heists. The Verizon DBIR 2020 shows exactly how these attacks come in–and they come in through spear phishing.

And all along, we have ignored–and continue to ignore–user weaknesses and focused on the technical issues—almost always after a crippling breach.

This time, we are all paying a direct price at the gas pumps. Who knows what’s coming next?

The solutions from then are just as pertinent today. Here’s my article in CNN from 2016. [Original can be found on the CNN website]


“This week, a hospital in western Kentucky was the latest organization to fall victim to a “ransomware” attack – a class of malware that encrypts all the files on a computer, only releasing them when a ransom is paid to the hacker holding the encryption key.

In this case, the hospital did not pay up. However, other hospitals, law firms, small businesses and everyday citizens have already paid anywhere from $200 to $10,000 in ransoms. Indeed, based on complaints received between April 2014 and June 2015, the FBI estimated that losses for victims from just one of these malware strains were close to $18 million.

Sadly, this year could well be worse.

Ransomware has existed for some time, the earliest dating back to the late 1980s. Back then, most was developed by enthusiasts – individuals testing out their skills. In contrast, today’s ransomware is often developed by global software teams that are constantly updating their code to evade anti-virus software and selling it as off-the-shelf products.

Already, newer strains appear capable of infecting mobile devices, of encrypting files stored on cloud servers through mapped, virtual drives on computers, and of transitioning to the “Internet of Things” – infecting gadgets like watches and smart TVs that are going online. In the near future, an attack that locks us out of our car – or worse yet, in it while we drive – and demands an immediate ransom is increasingly possible.

Thanks to the Internet, this malware-for-hire is available to virtually anyone, anywhere with criminal intent. Making things easier for hackers is the availability of Bitcoins, the online currency that makes monetary transactions untraceable. And making things even easier for them is our inability to stop spear phishing – those innocuous looking emails whose attachments and hyperlinks conceal the malware.

All this makes anyone with minimal programming skills and a free email account capable of inflicting significant damage, and with everyone from presidents to pensioners using emails today, the virtual pool of potential victims is limitless. No surprise then that cybersecurity experts believe that 2016 could well be the “Year of Online Extortion.”

But we can stop these insidious attacks, if everyone – individuals, organizations and policy makers – works towards a solution.

First, everyone must be taught to spot, sequester, and deal with spear phishing emails. This requires cybersecurity education that is free and widely available, which is presently not the case. While different training programs exist, most cater to large organizations, and are outside the reach of households, senior citizens and small businesses, who remain vulnerable.

What we also need is training that helps people develop better “cyber hygiene.” This includes teaching people to frequently update anti-virus software, appropriately program firewalls, and routinely back up their computers on discs that are then disconnected from the network. In addition, people should be taught how to deal with a ransomware attack and stop its spread by quickly removing connected drives and disconnecting from the Internet.

Second, organizations must do more to protect computer networks and employees. Many organizations continue to run legacy software, often on unsupported operating systems that are less secure and far easier for hackers to infiltrate. Nowhere is this problem more pressing than in small businesses, health care facilities, and state and federal government institutions, which is why they are the sought-after targets of ransomware.

Besides updating systems, organizations need to overhaul the system of awarding network privileges to employees. The present system is mostly binary, giving access to employees based on their function or status in the organization. Instead, what we need is a dynamic network-access system that takes into account the employees’ cyberrisk behaviors, meaning only employees who demonstrate good cyber hygiene are rewarded with access to various servers, networks, and programs through their devices.

Finally, policy makers must work to create a cyber crime reporting and remediation system. Most local law enforcement today is ill-equipped to handle ransomware requests, and harried victims usually have limited time to comply with a hacker’s demand. Many, therefore, turn to their family and friends, who themselves have limited expertise. Worse yet, some have no choice but to turn to the hacker, who in many cases provides a chat window to guide the victim through the “remediation” process.

What we urgently need is a reporting portal that is locally available and staffed by cybersecurity professionals, so people can quickly report a breach and get immediate support. A model for such a system already exists: the 311 system for reporting nonemergency municipal service issues. It has already been adopted by many cities in the nation and allows for reporting via email, telephone, and smartphone apps. Strengthening this system by providing it the necessary resources to hire and train cybersecurity professionals could go a long way toward stopping ransomware attacks that are now making their way past Main Street to everyone’s homes.

Perhaps the best way to look at the problem is this: How safe would we feel in a city where people are routinely being held hostage? Well, cyberspace is our space. And we have to make it safe.”

Mobile telephony is dying [Published in Ipswitch]

Photo by Marten Bjork

Verizon, AT&T, T-Mobile–I hope you are reading this. Mobile telephony, your primary business model of enabling phone calls and text messaging, is dying.

Your internal data likely says otherwise. Growth just appears to be everywhere: 5G’s enhanced mobile broadband speeds are coming alive, more people are subscribing with more gadgets, and some 60% of Americans are in mobile-only households–phenomena that were inconceivable two decades ago. Not to mention the surge in network use due to the pandemic.

With this kind of growth, why would I say mobile telephony is dying? There are a few good reasons.

Text Messaging and Messaging Apps Reign Supreme

For one, people have stopped calling each other on their phones and are instead messaging. Note that I said messaging, which uses the Internet, not texting, which needs your network.

Messaging is increasingly popular, even preferred. You can be just as professional on it as you can be informal, and you can express your personality more richly using emoticons, emojis, memojis, tapbacks, and more. And unlike phone calls, you don’t need to ask about the weather; nor do you need salutations, signatures, or the obligatory valedictions of email.

Messaging can be short, unintrusive, and direct. So it works just as well for messaging colleagues down the hallway, family members in the other room, and friends in faraway places. For the security-minded, leading services are end-to-end encrypted, something that neither traditional texting nor its newer RCS incarnation in Google Messages supports.

Photo by G-R Mottez on Unsplash

Because of this, mobile messaging has been growing exponentially and, after email, accounts for roughly half of all mobile Internet usage. More importantly, some 80 percent of millennials–the generation that came of age with social media and iDevices–use messaging apps like Facebook Messenger on mobile devices.

People Prefer Video Calls Instead of Phone Calls

Secondly, when people do call, they increasingly use video rather than voice, especially for group calls. Video calling showed an estimated 175 percent increase in usage over the last three years, with one in four millennials using it on a daily basis. And this was before the pandemic made it a necessity, ushered in newer, arguably easier-to-use apps such as Zoom, and made group calls for work, school, even television interviews mainstream. While group video calls can be made on mobile devices, they are better on larger-screened laptops and tablets, which is bad news if you are a cellular provider–because none of these, again, requires your service.

Not so long ago, the cellular providers dictated what people could use on their network. Today, the power has shifted to the gadget makers, who provide the cameras, the noise-canceling earphones, and the ability to seamlessly switch between devices when making video calls–and thereby shape the experience. Because of this, the mobile phone number is becoming less important, while the device, and how well it can sync up with the user’s other devices, is central to the user’s quality of experience.

Making things worse is that cellular carriers have not been able to stem abuse on their networks. In 2020 alone, 58 billion robocalls were made to American residents’ mobile phones, an average of 80.6 calls per person — a 22 percent increase over the previous year. Many are phishing calls and texts that appear to come from local area codes and are attempts at deceiving users into paying fraudulent IRS dues, threatening various dire legal actions, or luring users into opening malicious hyperlinks in text messages.

Phishing And Robocalls Erode Trust In Cellular-Based Calling

Phishing over the phone network is enabled by Internet-based telephony, which lets attacks be mounted from anywhere in the world while evading prosecution. Also enabling it is our caller-ID system, which was originally developed for the home phone network when there were few providers, all of whom could be trusted. Caller ID thus assumed all callers were honest and displayed whatever number they programmed in. Today, this makes it possible for anyone using computerized phone-dialers to obfuscate the true source of phone calls and fake the phone numbers that show up on our caller IDs.
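To see why the displayed number can’t be trusted, consider Internet telephony’s signaling protocol, SIP, where the caller identity travels as plain text that the caller composes. The sketch below is a grossly simplified, non-functional illustration (a real INVITE needs Via, Call-ID, and other headers), and the domain names are made up:

```python
def spoofed_sip_invite(displayed_number: str, target_number: str) -> str:
    """Illustration only: the From header, which legacy caller-ID display
    trusts verbatim, is just text chosen by the caller."""
    return (
        f"INVITE sip:{target_number}@carrier.example SIP/2.0\r\n"
        f'From: "IRS" <sip:{displayed_number}@anywhere.example>;tag=1\r\n'
        f"To: <sip:{target_number}@carrier.example>\r\n"
        f"CSeq: 1 INVITE\r\n\r\n"
    )

# The recipient's phone may simply render the attacker-chosen From field.
print(spoofed_sip_invite("1-202-555-0100", "1-315-555-0199"))
```

Standards like STIR/SHAKEN aim to close this gap by cryptographically attesting to the calling number, but deployment remains uneven.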

The phone carriers, however, don’t recognize the nuisance these calls cause. So, even though they have developed apps to block such calls, they charge an additional fee for them. But consumers, long sold on different cellular networks’ delivery quality with “Can you hear me now?” promises, are unwilling to pay for something they believe should be dealt with by the carriers. Thus, rather than pay for the app, users keep their mobile devices on silent-mode, ignoring incoming calls and texts. For many millennials, this likely furthers their shift to messaging and video calling.

The risk of being silenced, especially by this important consumer psychographic, could have a profound impact on the future of the cellular network. In the past, consumers in similar age cohorts have proven relatively quick to move away from services that didn’t put their interests ahead of the organization’s bottom line.

Much like cellular networks today, the home phone networks reigned supreme back in the 1990s. Their primary business was long distance, for which they kept charging exorbitantly. In 1997, long-distance rates stood at 12–25 cents per minute, up 25 percent since 1992. The future looked so bright that the former head of AT&T’s long-distance business, Joseph P. Nacchio, remarked: “Long distance is still the most profitable business in America, next to importing illegal cocaine.”

At the time, there were just 50 million mobile subscribers, all of whom also had a home phone. Within a few years, that generation of 22-to-40-year-olds quickly adopted Internet and mobile telephony, which all but killed the traditional phone business.

Today’s millennials are not only in the same age cohort but are also now the largest generation of American residents. They have already dropped their home phones for mobile, and their cable subscriptions for streaming video. Their cellular phone plans may well be next.


*A version of this post appeared here: https://blog.ipswitch.com/mobile-telephony-is-dying-heres-why


Data Security In The Cloud: Part 2 [Published in Ipswitch]

Vulnerabilities in cloud-sharing services stem from the use of multiple cloud services, which forces users to keep adapting and adjusting their expectations.

In part 1, I discussed some major vulnerabilities that using cloud-sharing services causes. These included routine cloud usage leading users to open emails from unknown addresses; to comply with form emails that have no personalized messages or subject lines; to click on unverifiable hyperlinks in emails; and to violate cyber safety training that cautions against all of the aforementioned actions. The security flaws in cloud-sharing services lie not in the user but in the developmental focus of the various cloud services.

Different Authentication Considerations for Cloud Services

Some cloud services like Google Drive are focused on integrating their cloud offerings with their alphabet-soup of services. Others, such as Dropbox, are focused on creating a stand-alone cross-platform portal. Still others, such as Apple, are focused on increasing subscription revenues from their device user population.

In consequence, not only does each provider prioritize a different aspect of the sharing process, but this larger goal also comes before the user–who is nothing more than a potential revenue target.

Changing this means more than implementing more robust authentication protocols and encryption standards. These are necessary, but they do little to reduce vulnerabilities that are rooted in the user adaptation process. If anything, they force users to adapt to even more varying implementations. Improving resilience in cloud platforms cannot be done on a piecemeal basis; it requires a unified effort by cloud service providers.

Integrating Different Cloud Services

Here, industry groups such as the Cloud Security Alliance can help by bringing together the various cloud providers, taking a holistic look at how users adapt to different cloud-use environments, and estimating the risks this creates.

A big part of this endeavor will involve getting to the root of users’ Cyber Risk Beliefs (CRB): their mental assumptions about the risks of their online actions. We know from my research that many of these risk beliefs are inaccurate. For instance, many users mistakenly construe the HTTPS symbol to mean a website is authentic, or believe a PDF document is more secure than a Word document because they cannot edit it.

We need to understand how CRBs manifest themselves in cloud environments. This involves answering questions such as: Do users believe that certain cloud services are more secure than others? Do they think that such services make the sharing of documents, or of certain types of documents, safer?

Answers to such questions will reveal how users mentally orient to different cloud services, what they store on them, and how they react to files shared through them. For instance, if users believe a specific portal makes documents safe, they might be more willing to open files that purportedly come from such portals in a spear-phishing email. Beliefs such as these might also influence how users enable various apps on cloud portals, what they store online, and how careful they are about their stored data. Because CRBs influence the adaptive use of different cloud services, understanding them can help design a safer cloud user experience.

Improving user security on cloud platforms also requires the development of novel technical constraints. Since many social engineering attacks conceal malware in hyperlinks, cloud portals need to collaborate and develop a virtualized space in which all shared links are generated and deployed. This way, spoofing hyperlinks or leading users to watering hole sites would be far more difficult, because the domains from which the links are generated would be more uniform and recognizable to users.

User Interfaces of Cloud Services

Yet another focus needs to be on improving user interface (UI) design. For now, the UI of file-sharing programs prioritizes convenience rather than safety. This is a bias that permeates the technology design community, and its most marked manifestation is in mobile apps where it is hard for users to assess the veracity of cloud-sharing emails and embedded hyperlinks.

To change this, UI should foster more personalization of the shared files. Users shouldn’t be permitted to share links without messages or subject lines, and they should be prompted to include a signature in the message. The design must also deemphasize the actions users have to take and emphasize review, especially on mobile devices. This can be achieved by highlighting the user’s personalized message, by displaying the complete URL rather than shortening it, and by making passwords necessary for opening shared documents.
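As a sketch of what enforcing these defaults could look like on the sending side, consider the following. The field names, thresholds, and rules are illustrative assumptions, not any vendor’s actual API:

```python
from typing import Optional
from urllib.parse import urlparse

class ShareValidationError(ValueError):
    """Raised when a share request fails the personalization checks."""

def validate_share_request(subject: str, message: str, signature: str,
                           link: str, password: Optional[str]) -> str:
    """Enforce personalization and review before a share email is sent.

    Returns the link text to embed in the notification: the complete URL,
    never a shortened or masked form.
    """
    if not subject.strip():
        raise ShareValidationError("A subject line is required.")
    if len(message.strip()) < 20:
        raise ShareValidationError("Add a short personal message (20+ characters).")
    if not signature.strip():
        raise ShareValidationError("A sender signature is required.")
    if password is None or len(password) < 8:
        raise ShareValidationError("Protect the shared document with a password.")

    parsed = urlparse(link)
    if parsed.scheme != "https":
        raise ShareValidationError("Share links must use HTTPS.")
    # Display the complete URL so the recipient can inspect the domain.
    return f"{parsed.geturl()} (host: {parsed.hostname})"
```

The point is not these particular rules but that the portal, not the user, carries the burden of making the safe path the default.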

UI design could also focus on integrating file-sharing portals with email services, so that links aren’t being generated from the portal directly, but are created from within email accounts that people are familiar with. This way, emails aren’t being sent from unknown virtual in-boxes, and personalization becomes easier.

The Cloud Is A Victim Of Its Own Success

Finally, our extant user training on email security is at odds with end-user cloud-sharing behavior. Using the cloud today entails violating training-based knowledge, which over time changes user perceptions of the validity of training. We must update training to emphasize safety in the sharing and receiving of cloud files. This means fostering newer norms and best practices, such as using passwords and personalized messages while sharing. It also includes teaching users how to gauge whether a shared hyperlink is a spoof and how to deploy such links in virtualized environments to contain any potential damage.

The cloud is becoming a victim of its own success. With many more players entering the market, the user experience is getting more fragmented, amplifying vulnerabilities because of the different ways in which each platform is implemented. Today, there are hundreds of providers offering different cloud services, with many more coming online.

The industry is slated to grow even more, because we have barely tapped the overall potential market–with anywhere from 30 to 90 percent of all organizations in the US, Europe, and Asia yet to adopt it. Thus, the user issues are only likely to increase as more providers and users enter the space.

Correcting this is now more important than ever. Because a single major breach can erode user trust in the entire cloud experience–forever changing the cloud usage landscape.

*A version of this post appeared here: https://blog.ipswitch.com/data-security-in-the-cloud-part-2


Data Security in the Cloud: Part 1 [Published in Ipswitch]

The adoption of public cloud computing makes user data less secure. And it’s not for the reasons most in IT realize.

In the first part of this series, I explain why; solutions follow in part 2.

Most users experience the cloud as online software and operating environments (e.g., Google’s App Engine, Chrome OS, Documents); and as online backup, storage, and file sharing systems (e.g., Dropbox, iCloud).

Adopting such services makes sense. The providers have deeper resources, better technical talent, and more capability for predicting and reacting to adverse events. This lowers the probability of data loss and outages, whether from accidental or malicious causes. Using cloud-based services also reduces in-house processing power requirements and meets the varying data-access needs users have today, cutting the costs of maintaining hardware, software, and support staff.

Most Companies Have Adopted the Public Cloud

Recognizing these advantages, some 91 percent of organizations worldwide have already adopted public cloud computing solutions and around 80 percent of enterprise workloads are expected to shift to cloud platforms by year’s end.

But cloud computing solutions also bring new technical challenges that can expose the enterprise to cyber-attacks. Many of these are well known in cyber security circles and have proven fixes. This includes mechanisms for auditing security vulnerabilities both at the provider end and on client machines, for assuring the availability and integrity of hosted services through encryption, and for granting and revoking access.

Outside of these, however, there are several vulnerabilities that arise from using cloud services. These are user- and usage-driven issues that are ignored by most in IT, who prefer to write them off with the “people will always be a problem” adage rather than tackle them. In consequence, most of these threats are seldom researched, but they make the data hosted on the cloud even more susceptible to being hacked.

For one, using cloud-based file sharing routinizes the receipt of hyperlinks in emails. Keep in mind, hundreds of providers make up this market space. Most organizations use at least five different cloud services, and most users subscribe to an ecosystem of their own liking. This translates to numerous cloud-service-generated hyperlinks that users frequently send and receive via emails and apps on different devices.

But once users get accustomed to complying with such emails, opening hyperlinks becomes routine, making users much more likely to click on malicious hyperlinks in spear-phishing emails that mimic them.

Convenience Is Not Always Secure

Making things worse is the design of cloud-sharing services. In their bid to make sharing convenient, services such as Google Drive, Google Photos, and Dropbox send out pre-crafted email notices of shared files.

The email notice usually contains only a few pieces of variable information: the name of the sender, the hyperlink, and some information about the file being shared through the link. The rest of the space is occupied by branding information (such as the name of the cloud provider and their logo). Thus, users have just a few pieces of information for judging the authenticity of what’s being shared.

But in many cloud services, while the email appears to come from the sender and has their name, it doesn’t come from their inbox. Instead, it comes from a different inbox, one that changes with the provider. For instance, Google Drive notifications come from a “drive-share-noreply@google.com” inbox, Dropbox’s come from “no-reply@dropbox.com,” while Google Photos’ come from a “noreply-010203c023b2d094394a@google.com,” where the alphanumeric characters (randomly chosen for this example) change each time. No user can remember these inboxes, so there is no way for users to know if these emails are indeed authentic. Furthermore, cybersecurity awareness training cautions users against opening emails from strange and unknown inboxes. Thus, every time users open a cloud-shared hyperlink, they have violated safety principles they were taught–which erodes their belief in the validity of the other aspects of their security training, opening them up to even more online attacks.
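A short sketch makes the problem concrete: even a mail client trying to allow-list legitimate notification senders has to maintain per-provider patterns, including randomized ones. The patterns below are illustrative assumptions modeled on the examples above, not the providers’ documented address formats:

```python
import re

# Illustrative allow-list modeled on the examples above; real notification
# addresses vary by provider and can change without notice -- which is
# exactly the problem.
KNOWN_SENDER_PATTERNS = [
    r"^drive-share-noreply@google\.com$",   # Google Drive-style
    r"^no-reply@dropbox\.com$",             # Dropbox-style
    r"^noreply-[0-9a-f]+@google\.com$",     # Google Photos-style random inbox
]

def sender_looks_recognized(sender: str) -> bool:
    """Return True if a cloud-share notice comes from a known inbox pattern.

    No human can keep these patterns in their head, let alone the
    randomized ones -- so this check belongs in software, not in
    security-awareness training.
    """
    sender = sender.strip().lower()
    return any(re.match(pattern, sender) for pattern in KNOWN_SENDER_PATTERNS)

print(sender_looks_recognized("no-reply@dropbox.com"))        # True
print(sender_looks_recognized("no-reply@dr0pbox-files.com"))  # False: likely spoof
```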

Hyperlinks Shared Through Cloud Services

A similar issue plagues the hyperlinks shared through cloud services. Most contain special symbols and characters, and there is no simple way for users to assess their veracity. Given how these links are generated and shared, users cannot plug one into a search engine or a browser without deploying it. Nor can users forward privately shared links to a sandboxed device or to another person with expertise. All users can do is rely on the information in the email, and acting on it requires deploying the hyperlink.

Outside of the sending inbox and hyperlinks, the only other varying indicator in a cloud-sharing email is the extension of the shared document (such as whether it is a .DOCX or a .MOV file), which is usually accompanied by an icon showing the type of file attached (e.g., a PDF icon). These were never designed to serve as yardsticks for gauging the veracity of shared files.

As my research on user cognition shows, people form several false assumptions about online risk. For instance, many people believe that PDF documents are secure because they cannot edit them, which, of course, has nothing to do with the security of the file type. These mistaken assumptions, what I call Cyber Risk Beliefs, are not only triggered by icons and file extensions, but they also dictate how users react to them. So, seeing a PDF extension or icon–which can easily be spoofed–and believing it is secure further increases the likelihood that users will open cloud-sharing hyperlinks that may actually be spear phishing.
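How easily extensions and icons are spoofed is worth dwelling on: they are attacker-controlled labels, while a file’s leading bytes are at least somewhat harder to fake. Below is a minimal sketch of the kind of check a client could run on a downloaded file; the signature table is a small illustrative sample:

```python
# A few well-known file signatures ("magic bytes"); illustrative sample only.
MAGIC_SIGNATURES = {
    ".pdf":  b"%PDF",
    ".png":  b"\x89PNG",
    ".zip":  b"PK\x03\x04",
    ".docx": b"PK\x03\x04",  # modern Office files are zip containers
}

def extension_matches_content(path: str) -> bool:
    """Check whether a file's claimed extension agrees with its magic bytes."""
    ext = "." + path.rsplit(".", 1)[-1].lower()
    expected = MAGIC_SIGNATURES.get(ext)
    if expected is None:
        return False  # unknown type: don't vouch for it
    with open(path, "rb") as handle:
        return handle.read(len(expected)) == expected

# A "report.pdf" that is actually an executable fails this check, even
# though its icon and extension look perfectly safe to the user.
```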

Finally, the display of all these pieces of information is further circumscribed on smartphones and tablets. Depending on the app and device, brand logos and other graphical information are sometimes not displayed, sender information is auto-populated from the device’s contact book, and the UI action buttons, as in “Open,” “Download,” and “View,” are made prominent. These are deliberately designed to move the user along to a decision–which is almost always to comply with the request rather than to pause or exercise conscious care.

Such design issues plague many communication apps accessed on mobile devices–something I highlighted in my 2019 Verizon DBIR write-up. But they are even more problematic in cloud-based file sharing because, unlike email, which receivers by default expect to be personalized (as in having a subject line, some salutation, and always a message), the established norms for cloud sharing of files are exactly the opposite: users seldom expect personalization, almost never include a message, and don’t even know how to inject a subject line. This not only makes it easier to create spoofed cloud-sharing emails but also makes them particularly hard for users to discern on mobile devices.

Wrapping Up Cloud Security

All these issues are usage driven and stem from the success of the cloud. This means they are unlikely to go away, and the widespread adoption of the cloud–a market Gartner expects will exceed $220 billion by 2020–will only increase their scale. Given the volume of data that is increasingly stored on the cloud, the availability of so many user-level vulnerabilities is fodder for social engineers looking for easy ways to hack the data.

And this is already afoot: Dropbox, Google Drive, and Adobe accounts are now among the most common lures used in spear-phishing emails. In 2019, one in four breaches in organizations involved cloud-based assets, and a whopping 77% of these breaches happened because of a phishing email or web application service–that is, the attacks spoofed cloud-service emails and contained hyperlinks that led users to watering holes.

Keep in mind that these vulnerabilities exist in almost all cloud services, which means breaches because of them can occur in any of them. But, because of how users form beliefs about online risk, a breach in one would likely undermine their trust in all cloud platforms. So, resolving these issues is necessary not just for better protecting data but also for ensuring the continued adoption of the cloud.

How we do this, I discuss in part 2.

*A version of this post appeared here: https://blog.ipswitch.com/data-security-in-the-cloud-part-1

Why do we still teach our children ABC? [Published in Medium]

“Why do you teach me ABC?” My precocious preschooler pointed to the virtual QWERTY keyboard on the tablet: “Why not ASD?”

As someone who studies the diffusion of innovations — how people learn and adopt new ideas and techniques — I wondered why indeed?

And not just the ABC sequence. Many preschoolers already know words like Xbox, Yahoo and Zoom better than the xylophone, yacht, and zebra we have them learn by rote. Wouldn’t teaching children the words that hold more meaning to them help keep pace with their experiences?

Of course, the QWERTY sequence is itself a product of modern technology. The layout was engineered by placing commonly typed characters farther apart to reduce the chance of the type-keys in early manual typewriters jamming when struck in quick succession. Although completely unnecessary on today’s electronic keyboards, it has resisted all attempts over the past 50 years at improving its design. Teaching the sequence would, therefore, also be practical because it is the accepted norm, appearing in every input device from ATMs to airplane flight controllers.

Many people, however, believe that the ABC sequence has remained fixed, while in actuality it has changed over time. Our 26 letters began sometime around the 15th century BCE in the Sinai as 22 characters, evolved with the Greeks into 25, and on through the Romans into Latin and the present set of 26. Z, which used to appear after F in Old Latin, was replaced with G and transposed to its present placement. Here, too, technology and human development played a role. With migration and the expansion of people’s vocabulary, new inflections in speech arose, necessitating newer letters such as W. With the invention of writing tools and printing technologies came cursive scripts, lowercase letters, and the development of standardized font families. Thus, the ABC sequence is nothing more than a norm that people have over time agreed upon — no different from QWERTY.

But there is an even stronger argument for teaching the newer sequence. Keyboards are tools for expression, no different from what pens are to writing or language is to literacy. And the sooner you are proficient with such tools, the better you can get at using them. Just as cultures with written languages evolved to overtake cultures with only spoken language because of their ability to transmit knowledge with far greater accuracy, becoming adept at using the tools of expression sooner could lead to a higher quality of knowledge transmission. Thus, adapting to the QWERTY sequence sooner would confer an evolutionary advantage on our children and likely even on all of us.

But that’s not all. Today, computing technology has also altered the way we write. Not only do we not use quills and fountain-pens, we rarely write by hand. And this has happened rather fast, even faster than the centuries it took for the evolution of alphabets and font families. Raised in the 1970s, I was taught to write in cursive, a skill which is seldom taught in US schools anymore. Instead, children in 3rd and 4th grade today “write” on computers where not just the writing style but also the process of writing is different.

Because you can only rewrite a document so many times, writing by hand, even on manual typewriters, required thinking before committing words to paper. Modern computers make writing innumerable drafts possible, which makes it possible to think as we write, without paying attention to style, spelling, or grammar in the initial drafts. This has led to a change in how we write. As the renowned social psychologist Daryl Bem advocates in his oft-cited guide: “…write the first draft as quickly as possible without agonizing over stylistic niceties.”

Newer word-processing apps have altered this process even further. While the ever-popular Microsoft Word allows for a sequential documentation of thoughts, newer apps like Textilus and Scrivener encourage non-sequential writing, allowing authors to tackle different sections, simultaneously, in draft form. Adding to this are advances in voice-to-text programs and machine-learning tools that can capture spoken words and suggest intelligent responses. Many of these, accessible at literally the flick of a wrist on many smartwatches and phones, have changed not just how we write but also our role as writers.

Photo by Austin Distel on Unsplash

Finally, our idea of literacy itself is expanding. It’s about more than just knowing how to write; it’s about being able to express information creatively. Children need to be adept not only at computing but also at finding information online, crafting persuasive content, and, while doing all of this, protecting their information trails. This requires two additional skills: digital literacy and cyber hygiene. The former equips them with information assessment skills, so they can find the right information and protect against disinformation. The latter instills digital safety skills, so they can’t be manipulated online and their information isn’t compromised. Both are essential for thriving in the virtual world where most of them spend their waking hours, even more so now since the pandemic.

Children are already familiar with an alphabet soup of online services before they step into a classroom. These skills are thus best introduced in their formative learning, not in middle school and college where they are presently taught. This will ensure that the next generation is equipped to transmit information with even greater accuracy and creativity all the sooner — an advantage that will accrue to them and to our society as a whole. The first step toward this involves mastering the QWERTY keyboard.

*A version of this post appeared here: https://medium.com/@avishy001/why-do-we-still-teach-our-children-abc-7f8cde35ec39

COVID-19’s Lessons About Social Engineering [Published in Dark Reading]

Photo by Brian McGowan

Unless we do something proactively, social engineering’s impact is expected to keep getting worse as people’s reliance on technology increases and as more of us are forced to work from home.

Contact tracing, superspreaders, flattening the curve — concepts that were once the domain of public health experts are now familiar to people the world over. These terms also help us understand another virus, one that is endemic to the virtual world: social engineering, which comes in the form of spear-phishing, pretexting, and fake-news campaigns.

As quickly as the coronavirus began its spread, news reports cautioned users about social engineering attacks touting fake cures and contact-tracing apps. This was no accident. In fact, there are a number of parallels between the human transmission of COVID-19 and social engineering outbreaks:

  1. Just as the coronavirus transmits from person to person through respiratory droplets, social engineering also passes from user to user through infected computing devices. Because of this transmission similarity, just as some infected people, by virtue of their physical proximity to many others, act as superspreaders for COVID-19, some technology users act likewise. These tend to be people with many virtual friends, or people subscribing to so many online services that they have a hard time discerning a real notification from one of these personas or services from a fake one. Such users are prime targets for social engineers looking for a victim who can provide a foothold into an organization’s computing networks.
  2. The vast majority of people infected with this coronavirus have mild to moderate symptoms. The same is the case with most victims of social engineering, because hackers usually lurk imperceptibly as they make their way through corporate networks. They often go undetected for months — on average, at least 101 days — showing no signs or symptoms.
  3. Just as no one has immunity from COVID-19, no one is immune to social engineering. By now everyone, all over the world, has been targeted by social engineers, and many — trained users, IT professionals, cybersecurity experts, and CEOs — have fallen victim to a spear-phishing attack.
  4. COVID-19’s outcomes are worse for people who have prior health conditions and for people who are older. Similarly, the outcomes of social engineering are worse for users with poor computing habits and poor technical capabilities. Many of these tend to be senior citizens and retired individuals who lack updated operating systems, patches that protect them from infiltration, and access to managed security services.
  5. Finally, personal hygiene — hand washing, use of masks, social isolation — is the primary protection against coronavirus infection. Likewise, for protecting against social engineering, digital hygiene — protecting devices, keeping updated virus protections and patches, and being careful when online — is the only protection that everyone from the FBI to INTERPOL has in their arsenal.

But beyond these similarities, social engineering outbreaks are actually harder to control than coronavirus infections:

1. Social engineering infections pass between devices wirelessly, making it hard to contact-trace infection sources, isolate machines, and contain them.

2. There are well-established scientific processes that the medical community has developed to identify knowledge gaps about the coronavirus. This helps researchers focus. In contrast, even the fundamentals of social engineering — such as when it's correct to call an attack a breach or a hack — lack clarity. It's hard to do research in an area when there is no consensus on what the problem should be called or where it begins and ends.

3. While human hygiene is well researched, digital hygiene practices aren't. For instance, in 2003, NIST developed password hygiene guidelines asking that all passwords contain letters and special characters and be changed every 90 days. The guidelines were developed by studying how computers guess passwords, not how humans remember them. Consequently, users the world over reused passwords, wrote them down on paper to aid their memory, or blindly entered them in response to phishing emails that mimicked legitimate password-reset requests — until 2017, when these problems were recognized and the policy was reversed. (The sketch after this list makes the contrast between the two policies concrete.)

4. Evidence suggests that those who have recovered from the coronavirus have at least short-term immunity to it. In contrast, organizations that have suffered at least one significant social engineering attack tend to be attacked again within the year. Because hackers learn from every attack, the odds of being breached by social engineering actually increase with each subsequent attack.

5. Our response to COVID-19 is informed by reporting throughout the healthcare system. Unfortunately, there is no similar reporting mechanism for social engineering. For this reason, a hacker can conduct an attack in one city and replicate it in an adjoining city, all using the same malware that could easily have been defended against had someone notified others. We saw this trend play out in Louisiana, where ransomware crippled computing systems in Vernon Parish in November 2019, quickly spread to six other parishes, and continued through the rest of the state into February 2020.
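
To make the contrast in item 3 concrete, here is a minimal sketch, in Python, of the two password policies described there. The function names, the composition check, and the tiny breached-password set are illustrative assumptions, not NIST-published code.

```python
import re

def meets_2003_style_policy(password: str) -> bool:
    """Old composition-rule check: letters plus special characters
    (with 90-day rotation enforced separately)."""
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_special = re.search(r"[^A-Za-z0-9]", password) is not None
    return has_letter and has_special

def meets_2017_style_policy(password: str, breached: set) -> bool:
    """Post-2017 approach: favor length and screen candidates against
    known-compromised passwords instead of forcing composition rules."""
    return len(password) >= 8 and password.lower() not in breached

# Illustrative use: this tiny set stands in for a real compromised-credential feed.
breached = {"p@ssw0rd!", "winter2019!"}
print(meets_2003_style_policy("P@ssw0rd!"))            # True, yet weak
print(meets_2017_style_policy("P@ssw0rd!", breached))  # False: already breached
print(meets_2017_style_policy("correct horse battery staple", breached))  # True
```

The point of the contrast: the old rules bless a memorably weak password while burdening users with rotation, whereas the newer guidance screens out exactly the passwords attackers already hold.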

Because of these factors, the economic impact of social engineering continues to grow. Security breaches have increased 67% in the past five years, and last year companies were expected to spend $110 billion globally protecting against them. This makes social engineering one of the biggest threats to the worldwide economy outside of natural disasters and pandemics.

Just as we are fighting the pandemic, we must coordinate our efforts to combat social engineering; without coordination, there will be no vaccine or cure. To this end, we must develop interorganizational reporting portals and early-warning systems that alert other organizations to breaches. We also need federal funding for basic research on the science of cybersecurity, along with evidence-based digital hygiene initiatives that provide best practices tailored to users and their use cases. Finally, we must enlist social media platforms to trace the superspreaders among their users, and develop open source awareness and training initiatives to protect them and the cyber-vulnerable from future attacks.

 

Unless we do something proactively, social engineering’s impact is expected to keep getting worse as people’s reliance on technology increases and as more of us are forced to work from home, away from the protected IT enclaves of organizations. We may in the end win the fight against the coronavirus, but the war against social engineering has yet to begin.

 

*A version of this piece appeared in Dark Reading: https://www.darkreading.com/endpoint/what-covid-19-teaches-us-about-social-engineering/a/d-id/1337979

Improving Everyone’s Ability to Work from Home After the Pandemic [Published in IPSwitch]

Photo by Corinne Kutz

Two out of three Americans with jobs are already working from home because of the pandemic. Many will have to continue doing so if the pandemic recurs. But millions cannot work remotely, and are now without jobs, because of significant barriers imposed by technology, regulation, and organizational preparedness.

One technological barrier is the lack of universal high-speed Internet connectivity. People at home today run multiple devices for everything from making video calls to streaming entertainment, participating in meetings, and doing classwork. This requires fiber-based Internet access that allows gigabit speeds, rather than older cable- and telephone-based connectivity.

But outside of major American cities, most of us are served by poor-quality, lower-speed Internet service built on outdated infrastructure. The reason is that in most market areas, legislative barriers have limited competition, keeping the cable and telephone companies as virtual monopolies that can charge higher prices while continuing to invest little in improving product quality. Because of this, many in rural areas, the urban poor, and consumers in many smaller urban areas either don't have good access, cannot afford it, or have limited choice.

Another technological barrier to remote work is the outdated software and operating systems that many companies use, which are incompatible with what people have at home. For instance, close to 82% of medical imaging devices in US hospitals still run Windows 7 or XP-based systems. About 200 million computers worldwide still run such outdated systems, including 30,000 machines in Germany's local government offices and 50,000 in Ireland's healthcare system. The reason is that many organizations continue to support legacy programs that run only on older operating systems. Because of such systems, people whose work relies on these older programs cannot use them remotely from their updated computing devices at home.

Yet another barrier comes from data protection laws. From HIPAA, which governs access to electronic patient health information (ePHI), to the European Union's data portability rules, various regulations protect user data from cybercriminals by restricting access outside of secure work computers and servers. But these laws were formulated in the pre-pandemic era, when employees had the luxury of working from offices. Layered on top of such laws are organizational IT policies, which often impose their own restrictions on how employees can access data.

Photo by Glen Carrie

But it is because of such restrictions that Facebook's content moderators around the globe cannot presently work from home, which has also reduced their ability to keep misinformation and online scams from going viral. Similarly, concerns about cyber breaches have led organizations to require that employees use virtual private network (VPN) services when connecting from home. Using a VPN is hard enough for users with poor technology skills, but even for the technologically adept, it lowers Internet speeds, especially when there is a significant increase in the load on VPN servers, as is now the case. Thus, regulatory concerns cause restrictions and delays that make for a frustrating remote work experience.

The final factor limiting remote work is cyber risk from the user. While many users can be trusted with remote data access, many others cannot. This is not just because some people have lower technical skills but also because many users' digital hygiene levels are unknown. This is a pivotal issue because regulations such as HIPAA require organizations to conduct risk assessments to address vulnerabilities from remote data access. But this is easier said than done. In an era when opening a single phishing email can launch ransomware that jumps from home to work networks and cripples an entire organization's systems, the risk to the enterprise comes not just from the employee working at home but from their entire family. Hence, organizations would rather limit who can work remotely than risk a devastating enterprise-wide lockout.

Photo by Corinne Kutz

Making it possible for more of us to work remotely from home will require a concerted effort from government, educational institutions, and organizations.

The starting point is improving residential Internet access. The digital divide is no longer about just having Internet access, but about having universal access to fiber at an affordable price. With 5G years away from being universal, we have to reimagine competition among Internet providers. This means removing the legislative restrictions that prohibit competition among providers and, in some cases, prevent municipalities from developing fiber networks of their own. A good example is Chattanooga, Tennessee, where the local government developed its own fiber network, which not only made gigabit-speed service locally available at a competitive price but also recovered its setup costs and led to a technology start-up boom in short order.

Next, organizations must plan to develop an agile workforce. Most organizations today support only a fraction of their workforce's remote work needs. For instance, the US Air Force's VPN system is built to support only a quarter of its 275,000 civilian workers and contractors. Organizations can change this by investing in virtualization to run legacy software, allowing more employees to bring their own devices (BYOD), and moving toward cloud-based infrastructure. This will create the ability to run legacy software on remote machines while also quickly upgrading the technology used within organizations.

The final issue is reducing cyber risk from users. Models for this already exist in the systems used for evaluating financial credit and issuing driver's licenses, which were developed for similar reasons: to estimate risk and ensure that people meet minimal standards of performance and safety. Just as we do with driver's licenses, we need to establish federal standards for user risk assessment that mandate cyber safety training and award each user a personal cyber risk score. Cyber safety training must begin in K-12, when most children already use computing systems, and become part of standard university curricula. The risk scores should be portable between jobs and accessible to employers, and users should be able to improve them through additional certifications provided by for-profit training companies. With everyone trained, the overall cyber risk to organizations from users will fall, as will organizations' concerns about remote access.
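
As a purely illustrative thought experiment, here is a minimal sketch of how such a personal cyber risk score might be computed. The factors, weights, scale, and function name are hypothetical assumptions; no such federal standard exists yet.

```python
def cyber_risk_score(trainings_completed: int,
                     certifications: int,
                     phishing_tests_failed: int,
                     os_patched: bool) -> int:
    """Hypothetical personal cyber risk score on a 300-850 scale,
    loosely modeled on credit scores; higher means lower risk."""
    score = 500
    score += min(trainings_completed, 5) * 30  # completed trainings raise the score
    score += min(certifications, 3) * 50       # certifications raise it further
    score -= phishing_tests_failed * 40        # failed phishing simulations lower it
    score += 60 if os_patched else -60         # basic device hygiene signal
    return max(300, min(850, score))

# Example: two trainings, one certification, one failed phishing
# simulation, and an up-to-date operating system.
print(cyber_risk_score(2, 1, 1, True))  # 630
```

Like a credit score, such a number would be portable between employers and improvable over time through the additional certifications described above.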

Providing better Internet access, creating an agile workforce, and mandating cybersecurity training will help us combat not just this pandemic's recurrence but also any future natural or man-made catastrophe. We have been saved from a complete economic meltdown by a technology, the Internet, that was built in anticipation of a nuclear catastrophe that thankfully never happened. Thanks to such forward thinking, we today have the capacity to keep working, teaching, and even performing medical diagnoses online. Building capacity must likewise be done years, if not decades, in advance, and we must prepare for a future where more people can continue working from home.


* A version of this post appeared here: https://blog.ipswitch.com/4-barriers-impeding-on-everyones-ability-to-work-from-home