
Spearphishing has become even more dangerous [Published in CNN]

The continued prosecution of “All the President’s Men” does little to stop the Russians from attempting to influence America’s upcoming midterm elections. And reports from Missouri to California suggest they are already looking for our cyber weaknesses to exploit.

Chief among these is spear phishing—emails containing hyperlinks to fake websites—which the Russians used to hack into the DNC’s emails and set in motion their 2016 influence campaign.

After two years of congressional hearings, indictments, and investigations, spear phishing not only continues to be the most common attack used by hackers, but the Russians are still trying to use it against us.

This is because in the ensuing time, spear phishing has become even more virulent, thanks to the availability of sophisticated malware, some stolen from intelligence agencies; troves of people’s personal information from previous breaches; and ongoing developments in machine learning that can deep-dive into this data and craft highly effective attacks.

Just last week, Microsoft blocked six fake websites, registered by the same Russian intelligence unit responsible for the 2016 DNC hack, that were likely to be used for spear phishing the US Senate.

But the Internet is vast, and many more fundamental weaknesses remain available to exploit.

Take the URLs with which we identify websites. Thanks to Internationalized Domain Names (IDNs) that allow websites to be registered in languages other than English, many fake websites used for spear phishing are registered using homoglyphs—characters from other languages that look like English characters. For instance, a fake domain for Amazon.com could be registered by replacing the English “a” or “o” with their Cyrillic equivalents. Such URLs are hard for people to discern visually, and even email scanning programs, which are trained to flag words like “password” common in phishing emails (such as the one the Russians used in 2016 to hack into John Podesta’s emails), can be tricked. And while many browsers prevent URLs with homoglyphs from being displayed, some, like Firefox, still expect users to alter their browser settings for protection.
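
To make the trick concrete, here is a minimal Python sketch, using only the standard library, that flags domain labels mixing Unicode scripts and prints the punycode form in which the substitution becomes visible. Real browsers use far more elaborate confusable tables; the domain below is a hypothetical example.

```python
import unicodedata

def scripts_in(label):
    """Approximate the set of Unicode scripts in a domain label by
    taking the first word of each character's Unicode name, e.g.
    "CYRILLIC SMALL LETTER A" -> "CYRILLIC"."""
    return {unicodedata.name(ch).split()[0] for ch in label if ch.isalpha()}

def audit_domain(domain):
    """Flag labels that mix scripts (the IDN homoglyph trick) and
    print the punycode (xn--) form, which makes substitutions visible."""
    for label in domain.split("."):
        scripts = scripts_in(label)
        if len(scripts) > 1:
            print(f"suspicious label {label!r}: mixes {sorted(scripts)}")
    print("punycode form:", domain.encode("idna").decode("ascii"))

# The first "а" below is CYRILLIC SMALL LETTER A, not the Latin letter.
audit_domain("аmazon.com")
```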

Making things worse is the proliferation of Certificate Authorities (CAs), the organizations issuing the digital certificates that make the lock icon and HTTPS appear next to a website’s name in browsers. While users are taught to trust these symbols, an estimated one in four phishing websites actually has an HTTPS certificate. This is because some CAs have been hacked, meaning there are many rogue certificates out there, while others have doled out free certificates to just about anyone. For instance, one CA last year issued certificates to 15,000 websites with names containing some combination of the word PayPal—all for spear phishing.
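
The lock icon, in other words, only tells you the connection is encrypted, not who you are talking to. For readers who want to look behind the icon, here is a short Python sketch, using only the standard library, that pulls a site’s certificate and prints who it names and which CA issued it:

```python
import socket
import ssl

def certificate_summary(hostname, port=443):
    """Fetch a site's TLS certificate and show the subject and the
    issuing CA -- the details hidden behind the browser lock icon."""
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    subject = dict(field[0] for field in cert["subject"])
    issuer = dict(field[0] for field in cert["issuer"])
    print("subject:", subject.get("commonName"))
    print("issuer :", issuer.get("organizationName"))
    print("expires:", cert["notAfter"])

certificate_summary("www.paypal.com")
```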

Besides these, the problem of phony social media profiles, which the Russians used in 2016 for phishing, trolling, and spreading fake news, remains intractable. Just last week, the Israel Defense Forces (IDF) reported a social media phishing campaign by Hamas that lured its troops into downloading malware using fake profiles on Facebook, Instagram, and WhatsApp. Also last week, Facebook, followed by Twitter, blocked profiles linked to Iranian and Russian operatives that were being used to spread misinformation.

These attacks, however, reveal a critical weakness of influence campaigns: by design, they utilize overlapping profiles across multiple platforms. Yet today, social media organizations police their networks internally and keep information in their own “walled gardens.”

A better solution, therefore, would be to host data on suspect profiles and pages in a unified, open-source repository, one that accepts inputs from media organizations, security organizations, and even users who find things awry. Such an approach would help detect and track coordinated social media influence campaigns—which would be of enormous value to law enforcement and to media organizations big and small, many of which get targeted using the same profiles.

A platform for this could be the Certificate Transparency framework, where digital certificates are openly logged and verified, and which has been adopted by many popular browsers and operating systems. For now, this framework only audits digital certificates, but it could be expanded to encompass domain name auditing and social media pages.
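
The framework’s openness is what makes this plausible. As an illustration, here is a sketch that queries crt.sh, one public search front end over Certificate Transparency logs, for certificates whose names contain a brand term. The field names follow crt.sh’s JSON output, and the service can be slow, so treat this as illustrative:

```python
import json
import urllib.parse
import urllib.request

def ct_search(term, limit=10):
    """Ask a public Certificate Transparency search service which
    certificates have been logged with `term` in their names --
    the open auditing the CT framework makes possible."""
    url = "https://crt.sh/?q=" + urllib.parse.quote(f"%{term}%") + "&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        entries = json.load(resp)
    for entry in entries[:limit]:
        print(entry["not_before"], entry["name_value"])

ct_search("paypal")
```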

Finally, we must improve user education. Most users know little about homoglyphs and even less about how to change their browser settings to guard against them. Furthermore, many users, after being repeatedly trained to look for HTTPS icons on websites, have come to implicitly trust them. Many even mistake such symbols to mean that a website is legitimate. Because even an encrypted site can be fraudulent, users have to be taught to be cautious, and to assess website factors ranging from the spelling used in the domain name, to the quality of information on the website, to its digital certificate and the CA that issued it. Such initiatives must be complemented with better, more uniform browser design, so users do not have to tinker with settings to avoid being phished.

Achieving all this requires leadership, but the White House, which ordinarily would be best positioned to address these issues, recently fired its cybersecurity czar and eliminated the role. And when, according to the GAO, federal agencies have yet to address over a third of its 3,000 cybersecurity recommendations, the President instead talks about developing a Space Force. Last we knew, the Martians hadn’t landed, but the Russians sure are probing our computer systems.

 

*A version of this post was published in CNN: https://www.cnn.com/2018/09/01/opinions/spear-phishing-has-become-even-more-dangerous-opinion-vishwanath/index.html

To reward, or not to reward [Published in InfoSecurity Magazine]

In late 2014, in the aftermath of the Sony Pictures Entertainment breach, I advocated the development of a cyber breach reporting portal where individuals could report suspected cyber incidents. Such a system, I argued, would work as an early warning system, so IT could be made aware of an attack before it became widespread; it would also work as a centralized system for remediation, so affected victims could seek help.

Since then, many organizations all over the world have developed such portals for their employees to report suspected breaches. These range from web reporting forms and email inboxes to 24-hour help desks where employees can find remedial support.

While there is little direct research on how well these portals work, extant reports point to a rather low utilization rate. For instance, Verizon’s 2018 Data Breach Investigations Report (DBIR) found that among 10,000 employees across different organizations who were subjected to various test phishing campaigns, fewer than 17% reported them to IT. My own experience advising firms on their user vulnerability reduction initiatives has found similarly low reporting rates.

To counter this, many CSOs have resorted to incentives and punishments to enhance employee reporting of suspect emails and cyber activities. But the question—one that I am often asked when advising organizations on IT security—is which of these really works?

First, let’s begin with punishments. We know from a century of research on human motivation that punishments tend to be salient but not necessarily effective in motivating people the right way. That is, people remember threats, but remembering doesn’t help, especially when the task at hand requires mental effort.

For instance, when Admiral Michael Rogers, the former head of the NSA, famously remarked that individuals who fall for a phishing test should be court-martialed, it sure got noticed and was widely reported. But such actions breed fear, anxiety, and worry, not more thoughtful action. This is precisely why phishing emails have warnings and threats in them: when people focus on the threats, they end up ignoring the rest of the information in the email that could reveal the deception.

In surveys I have conducted in organizations that use punishments to foster reporting, the vast majority of users reported changing how they use email: many avoided opening email at certain times of the day, waited for people to resend their original email requests, or, in some cases, forwarded work emails to their non-IT-authorized mobile devices and email accounts.

These may be effective ways of avoiding getting caught in a phishing test, but they are not necessarily good for organizational productivity and cybersecurity.

On the flip side are rewards for reporting phishing emails. Some organizations have used monetary compensation, others have experimented with token rewards, and still others with mere recognition of the employee who reported. Which of these works best? The surprising answer: recognition.

The reasons are as follows. First, monetary compensation puts a dollar amount on cybercrime reporting—a value that is difficult to determine. Do we estimate the value of a report based on when the employee sent it in (immediately after the attack versus much later), the accuracy of the report, or the size of the breach it prevented? Each estimation process has its own pitfalls, and they all focus on the report rather than on the employee doing the reporting or what it means for them to actually perform the reporting function.

Monetary incentives have another problem: they turn reporting into a game. This changes employees’ motivation: rather than becoming more careful about scrutinizing incoming emails, which is the indirect purpose of such reporting, they learn that more reporting increases their chances of winning a prize.

Consequently, many employees report anything they find troubling, sometimes even emails they know are simply spam. On the one hand, this significantly increases the load on IT help desks and decreases their chances of catching a real phishing email. On the other hand, too many unnecessary reports decrease the odds of winning a reward, which over time reduces employees’ motivation for reporting.

Compared to this, social rewards work better than all other approaches: public praise, recognition, and appreciation through announcements acknowledging the users who have reported suspicious emails, along with communication that makes clear the value of their reporting.

This is because monetary incentives appeal to employees’ base needs, which are already met through their jobs, while social recognition appeals to higher-order needs—what the famous motivational psychologist Abraham Maslow termed “esteem needs”: the human need for achievement, respect, prestige, and a sense of accomplishment.

Being publicly recognized for reporting suspect emails makes employees feel valued for their effort, which on the face of it is an act of altruism with little direct relationship to their workflow or productivity. Effectively communicating the value of their reporting thus focuses attention on the employees doing the reporting.

This has enduring effects, both rewarding the employee being feted and motivating others to follow their lead, which together builds a culture of cyber safety within the organization.

As email-based attacks targeting organizations become more sophisticated, employees are the first, and at times the only, line of defense against them. Effectively harnessing the power of employees through the use of appropriate strategies for incentivizing reporting is the difference between organizations that are reacting to cyber-attacks and those that are proactively stopping them.

 

* A version of this post appeared in InfoSecurity Magazine

The impact AI will have on democracy [Published in CNN]

In the not-so-distant future, we will be presented with the version of the news we wish to read — not the news that some reporter, columnist or editorial board decides we need to read. And it will be entirely written by artificial intelligence (AI).

Think this is science fiction? Think again. Many of us probably don’t realize that AI programs were authoring many parts of the summer Olympics coverage, and also providing readers with up-to-date reports, personalized based on the reader’s location, on nearly 500 House, Senate, and gubernatorial races during the last election cycle.

And those news feeds on Facebook and Google News that the majority of people trust more than the original news sources, well those, too, employ machine-learning algorithms to match us with news and ads. And we saw how easily those were co-opted by the Russians to influence our last presidential election.

Follow the natural progression of these developments, and it leads to an ominous future in which AI entirely writes and presents the news exactly the way each of us would like to read it — forever altering democracy as we know it.

In this future, journalists might still report on events, but it will be AI that will take these inputs, inject data from its vast historical repositories and formulate a multitude of different themes, each making different arguments and coming to different conclusions. Then, using data about readers’ interests learned from their social media, online shopping and browsing history, AI will present them with the version of the news they would like to read.

For example, for a reader with strong views on the environment, news of heavy flooding in some place of interest might be presented from a global warming standpoint, with conclusions about how human activity has hurt the environment. For another reader skeptical of climate change, the same story might be presented with data and conclusions questioning the validity of climate science.

Stories might be presented in brief, for readers who like to skim the news, or in depth, for those who like to delve into details. A story may even have actionable links to online stores selling essential supplies for those in the flood zone, or social media links connecting readers with others who share their interests. In essence, it will be the perfect AI-created echo chamber — where each person will be an audience of one, connected to others who are always agreeable.

This hyper-personalized, AI-driven reality is closer than people realize — and it goes beyond the Olympic or election coverage I mentioned. After his purchase of the Washington Post, Jeff Bezos introduced Heliograf, an AI-based writing tool which, given predefined themes and phrases, can write complete articles. This software, while still far from autonomous, has already authored about 850 articles that have cumulatively garnered half a million page views.

Others like The New York Times, the Associated Press and many financial organizations are also testing and utilizing similar software for everything from local news reporting to financial report writing. Just consider this AP story on a Maryland-based company’s third-quarter results, written by AI.

Furthermore, thanks to Google, Facebook, Amazon and other online services tracking virtually every aspect of people’s online and even offline behaviors, we already have deep data on almost every American’s personal opinions and preferences — which these companies already use to target and position advertisements. All that’s missing is for one media organization to combine these processes.

And there is nothing to stop a company, especially one such as Amazon or even Apple, from doing it. After all, it would create the perfectly “sticky website,” where people, content and products are precisely matched — an advertiser’s dream come true.

Besides, there is no policy or law that prohibits any of this — none whatsoever prescribing that the news must be authored by people. And news consumers would love such personalized news. After all, close to a majority of news consumers, both right- and left-leaning, not only prefer to hear political views in line with their own thinking on social media, but they also tend to block or defriend people who disagree with their avowed political views.

The majority of news consumers also “happen upon the news” online rather passively, often while doing something else. They usually follow the same few news sources rather than looking for another source to reconfirm what they are presented, let alone get a different perspective.

So the audience preference for an AI-driven, single news website that targets them with hyper-personalized content is already here, policies prohibiting it are absent and the technology for it is almost ready. In other words, this media future is primed for disruption.

A win-win for marketers, advertisers and readers — but a giant loss for democracy as we know it, because it will take away the core of what makes democracies successful: well-informed citizens, who form opinions not by simply reading articles they agree with, but by examining that which they don’t agree with — and then finding common ground.

However, we can save this critical part of our democracy through forward-thinking policy, media self-policing and a bit of introspection.

More specifically, first, when it comes to communication technology, policy making tends to be highly reactive. Right from the days of the Radio Act of 1912, which was a reaction to the sinking of the Titanic and eventually led to the creation of the Federal Communications Commission, to the many congressional hearings after the Russian interference in our elections, we have dealt with the media reactively. What we need instead is to proactively address what we know is more than likely.

The problem with AI is not only that it will do things faster or better than human journalists, but also that we will trust it implicitly. We already see this trend in court systems across the nation using AI-based programs to decide what punishment is meted out to people convicted of crimes, without fully examining the underlying computational algorithms governing the programs.

Likewise, the AI-generated news of the future will likely be considered more trustworthy, unless policies are enacted that limit the extent to which algorithms can access audience profile data — thereby reducing the ability for the media to target each reader with their own version of “alternative facts.”

Second, the news media needs to act responsibly and self-police. With many articles already being generated and matched to readers by AI, news sites need to start providing indicators of how such content matching was done, what parts of the content were authored by AI and, in the future, how many different versions of the story were created. This would help readers make up their own minds about the credibility of what they read.

Finally, the reading public has the largest responsibility. What our recent presidential election has taught us is that it’s not simply the availability of the media, the presence of competing content or even its accessibility. It is human agency. In other words, we the people have to actively seek information — some that is agreeable, a lot that is not; some that’s online, and some that comes from discussions with people who disagree with us — and form our informed views. And that’s something tomorrow’s AI could well take away from us.

 

 

*A version of this post appeared in CNN under the title: When AI writes your news, what happens to democracy?

AI will replace truckers, retail workers, journalists–and you and me [Published in CNN]

Amazon Go, the online retailer’s first completely automated store, debuted in Seattle last week. Using a bevy of smart cameras, deep machine learning and artificial intelligence (AI) algorithms, the store makes it possible for shoppers to simply pick up the products they like and go, with their accounts being automatically charged for the products — completely eliminating the need for cashiers and checkout lines. Though staff members still stock the shelves, they too will likely soon be replaced by robots.

This is revolutionary and will likely be how all stores will operate in the near future. Stores won’t have to invest in employees — salaries, training, overtime, health care. Customers will like it, too. No more standing in boring check-out lines, interacting with indifferent staff.

What we are witnessing is surely the future of the retail industry, but there is also a downside that needs our attention. Cashiers and retail workers are two of the most common occupations in the US, employing roughly 8 million people, many of whom tend to be younger white women making modest yearly incomes in the $20,000-$25,000 range.

Most of these jobs require little formal education for entry, and so the sector supports many individuals with relatively low skills and education who are likely to find it particularly hard to quickly retool and fit a different employment sector. Most of them will likely find themselves jobless.

Of course, this isn’t the only sector that AI will decimate. Driverless trucks are already being tested on major highways. They, too, have many advantages over today’s long haulers: they can run 24/7 and never get fatigued; no need for mandatory breaks; no more wasted fuel idling overnight.

Truck drivers account for a third of the cost of this $700 billion industry, and there are over 1 million mostly middle-aged, white male truckers in the US. Their jobs will be rendered obsolete. And these numbers will likely be even higher once driverless cars replace all taxi and local delivery drivers.

Such fears of computing-led obsolescence aren’t new. In 1964, a few years after IBM launched its first solid-state mainframe computer, “The Twilight Zone” ran an episode titled “The Brain Center at Whipple’s,” in which Mr. Whipple, the owner of a vast manufacturing corporation, replaced all his factory workers with a room-sized computing machine.

Mr. Whipple’s economic justification for his “X109B14 modified transistorized totally automated machine” could just as well be applied to AI: “It costs 2 cents an hour to run … it lasts indefinitely … it gets no wrinkles, no arthritis, no blocked arteries … two of them replace 114 men who take no coffee breaks, no sick leaves, no vacations with pay.” In the show, Whipple’s machine quickly replaced everyone from the plant’s workers to its foremen to all the secretaries.

The story was prescient, and many of its fictionalized fears in time came true: Most of the large manufacturing plants were indeed shut down; secretaries and typists mostly became obsolete; and the jobs that created the American middle class were all eventually outsourced. Much of this computer-driven automation replaced low-skilled, easily routinizable functions.

But AI is different. It utilizes deep-learning algorithms and acquires skills, so it can routinize many complex functions.

Take journalism — a task that has always been performed by humans. After Jeff Bezos’ purchase of The Washington Post, the paper tested Heliograf, a new AI-based writing program that automates report-writing using predefined narrative templates and phrases. From the Olympics to the elections, the software has already auto-published close to 1,000 articles.

And given its ability to churn through virtually any amount of data and spit out endless reports instantaneously, AI newsbots are way better than humans. It’s no surprise then that USA Today, Reuters, BuzzFeed and growing numbers of financial organizations are already employing AI for tasks ranging from reporting to data authentication.

In the near future, AI will replace many other such so-called highly skilled professions, from chefs to pilots and surgeons. Going back to school, learning new skills and retooling might not be an option, because it would be impossible to learn as quickly as AI, provide the kind of nuance that comes from distilling terabytes of information, or outpace it. And besides, in the time it takes a human to acquire a new skill, AI might have learned to replace it.

If these trends materialize — and some might not — we are looking at a seismic shift in the American economy. If the last election was a push back against globalization, imagine what a rage against AI will look like.

The solution, of course, is not to stop the march of progress but to prepare for it with forward-thinking investments in education, human capital and public policy. While Washington is busy cleaning up yesterday’s self-inflicted mess, this is tomorrow’s crisis that requires attention today.

At the end of the episode, Mr. Whipple, too, was rendered obsolete — by a robot. Rod Serling’s ominous closing message: “Man becomes clever instead of becoming wise; he becomes inventive and not thoughtful; and sometimes, as in the case of Mr. Whipple, he can create himself right out of existence.” One hopes that this isn’t what AI does to us.

 

*A version of this post appeared in CNN with the title “With AI we may have created ourselves out of existence”

It’s not just fake news, Facebook, or Twitter! It’s the Internet’s Dark Triad we should be worried about. [Published in CSO Online]

Thanks to the ongoing Senate hearings on election hacking, we are learning how the Russians interfered with our presidential elections by sponsoring numerous fake social media accounts and even placing advertisements on Facebook, YouTube and Google that targeted people with an interest in divisive issues.

But while policy makers are rightfully angered by these platforms’ inability to curb these attacks proactively, it is important to recognize that Facebook, Google, and even some web hosting services were mere vehicles providing a convenient platform for what was a much larger propaganda process made possible by the Internet’s Dark Triad: spearphishing, trolling, and fake news.

It is this trifecta that Vladimir Putin used to interfere with our elections, as well as elections in Germany and other parts of Europe. And it is this triad that we need to understand and stop.

At the tip of this triad is spearphishing—malware-laden email attachments and hyperlinks that, when clicked, provide the hacker backdoor access into an individual’s computer and networks. Every major attack, from the Chinese military-led theft of our F-35 blueprints, to the infamous North Korea-led hack into Sony Pictures, to the Russian hacks into the DNC computers during our elections, employed spearphishing. In fact, spearphishing attacks are so easy to craft that the Russians used the help of a 15-year-old Canadian-Kazakh citizen to conduct the attacks.

Anchoring the other end of the triad are organized trolling campaigns. What started with PR firms attempting to “manage” consumer reviews was co-opted by nation-states to hijack online conversations by flooding message boards with vitriolic comments and counter-narratives. Confessions from “professional” trolls in Russia and investigative reports by the NYT’s Adrian Chen show how Russia’s state-sponsored Internet Research Agency orchestrates campaigns using phony social media profiles, interconnected networks of fake friends, even faked LiveJournal blogs for the profiles.

The final dark anchor is “fake news”—the latest form of online propaganda aimed at distorting information and spreading contrarian, even speculative, views as real news. Enabling this phenomenon are some of the same phony social media profiles used for trolling, along with pseudo “news” websites with seemingly credible names like The Conservative Frontline or The American Patriots, which maintain a presence on multiple social media channels, many directly linked to Russian propaganda channels, providing the critical mass for a story to get noticed.

And as the stories are discussed by various groups, the lies get crowd-sourced—arguments are strengthened, connections created, facts added—and quickly the fake news morphs into another, more sensational story, spinning further news cycles. Some fake news and trolling campaigns link back to phishing websites, leading to still more breaches and even more fake news.

This was how the Russians influenced our elections. By hacking DNC emails, leaking them via WikiLeaks, and then seeding divisive political arguments, counter-narratives, and conspiracy theories through fake news websites and trolling campaigns—such as pointing to the 2016 murder of DNC staffer Seth Rich as evidence of his involvement in the hack—the Russians made many among us question our democratic processes, which ultimately influenced the elections.

Unfortunately, our collective focus today is on organizations like Facebook and Twitter, which have reacted by creating task forces that curate internal lists of fake profiles and identify fake news feeds. Others like Snopes.com, Factcheck.org, and the BBC have likewise developed internal task forces that curate lists of fake news and sites. But these initiatives only address small parts of the triad—its trees—and do nothing to stop the forest that is the triad from propagating on a different platform during the next election cycle.

What we need instead is a mechanism to stop the triad completely.

And this can be done, because the triad has an Achilles’ heel: it is highly coordinated. Attacks usually reuse the same finite set of social media profiles, web domains, fake news websites, email accounts, and even malware. In fact, the reuse of email profiles and malware signatures was the basis for identifying the source of the DNC hack as Russian intelligence.
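
A toy example shows how little machinery such tracking needs. The Python sketch below, with entirely hypothetical indicators, indexes reported campaign artifacts and surfaces any indicator reused across campaigns:

```python
from collections import defaultdict

# Hypothetical (campaign, indicator) reports: phishing domains,
# social media handles, malware hashes, and so on.
reports = [
    ("dnc_phish_2016", "accoounts-google.com"),
    ("dnc_phish_2016", "sha256:9f2b..."),
    ("senate_phish_2018", "accoounts-google.com"),
    ("senate_phish_2018", "@patriot_news_24"),
    ("troll_wave_2017", "@patriot_news_24"),
]

# Index campaigns by indicator; cross-campaign reuse is the
# coordination signature described above.
seen_in = defaultdict(set)
for campaign, indicator in reports:
    seen_in[indicator].add(campaign)

for indicator, campaigns in seen_in.items():
    if len(campaigns) > 1:
        print(f"{indicator} reused across {sorted(campaigns)}")
```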

We can thus stop the triad if we develop mechanisms to track such coordination. But this will require a unification of efforts on our end, not the diversified approaches currently in place.

This must begin with the development of a centralized breach reporting system where individuals and organizations can report suspected spearphishing attacks and get remedial help. Such a system could help track attacks and serve as an early warning system for other organizations, which can take effective countermeasures to stop further breaches.

A similar mechanism could help stop organized trolling and the propagation of fake news. Rather than the internal policing efforts now being done covertly within social media organizations, what we need is a centralized repository—a WikiFacts page of sorts—where fake profiles, news, and suspicious data from different media websites are continuously reported, flagged, and publicly displayed. This information could be populated by social media organizations and search engines, as well as by user reports. Such a system would directly benefit the general public, who can report and review suspicious information; it could also help smaller media organizations, which could directly use this intelligence to forestall any misuse of their platforms.

The Dark Triad is a dystopian version of the game of telephone, played online using hacked information and fake news. Ironically, the origins of that game can be traced to an old parlor game in which players wrote stories that got increasingly distorted as they were passed along—a game called Russian Scandal. Only this scandal is for real.

Is the new iPhone designed for cybersafety? [Published in The Conversation]

As eager customers meet the new iPhone, they’ll explore the latest installment in Apple’s decade-long drive to make sleeker and sexier phones. But to me as a scholar of cybersecurity, these revolutionary innovations have not come without compromises.

Early iPhones literally put the “smart” in the smartphone, combining texting, internet connectivity and telephone capabilities in one intuitive device. But many of Apple’s decisions about the iPhone were driven by design – including wanting to be different or to make things simpler – rather than by practical considerations.

Many of these innovations – some starting in the very first iPhone – became standards that other device makers eventually followed. And while Apple has steadily strengthened the encryption of the data on its phones, other developments have made people less safe and secure.

The lights went out

Among Apple’s earliest design decisions was to exclude an incoming email indicator light – the little blinking LED that was common in many smartphones in 2007. LEDs could be programmed to flash differently, even using different colors to indicate whom an incoming email was from. That made it possible for people to be alerted to new messages – and decide whether to ignore them or respond – from afar.

Its absence meant that the only way for users of the iPhone to know of unread messages was by interacting with the phone’s screen – which many people now do countless times each day, in hopes of seeing a new email or other notification message. In psychology, we call this a “variable reinforcement mechanism” – when rewards are received at unpredictable intervals – which is the basis for how slot machines in Las Vegas keep someone playing.

This new distraction has complicated social interactions and made people physically less safe, contributing to both distracted driving and inattentive walking.

Email loses its head, literally

iOS Mail has another major design flaw: it does not display full email headers – the part of each message that tells users where the email is coming from. Full headers can be viewed in all computer-based email programs – and shortened versions are available in Android email programs.

Cybersecurity awareness trainers regularly tell users to review header data to assess an email’s legitimacy. But this information is completely unavailable in Apple iOS Mail – meaning that even if you suspect a spear-phishing email, there is no way to verify it on the phone – which is one reason more people fall victim to spear-phishing attacks on their phones than on their computers.
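
For readers on a desktop who want to practice, here is a small Python sketch, using only the standard library and a hypothetical saved message file, that pulls out the telltale fields: the From, Reply-To and Return-Path addresses, which often disagree in a phish, and the Received chain showing which servers actually handled the mail.

```python
from email import policy
from email.parser import BytesParser

def header_report(raw_bytes):
    """Print the header fields that most often give a phish away:
    mismatched From/Reply-To/Return-Path and the Received chain."""
    msg = BytesParser(policy=policy.default).parsebytes(raw_bytes)
    for field in ("From", "Reply-To", "Return-Path", "Subject"):
        print(f"{field:12}: {msg.get(field)}")
    for hop in msg.get_all("Received", []):
        print("Received    :", hop.split(";")[0])

# Usage: save a suspicious message as a .eml file, then inspect it.
with open("suspect.eml", "rb") as f:
    header_report(f.read())
```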

Safari gets dangerous

The iOS web browser is another casualty of iOS’s minimalism, because Apple designers removed important security indicators. For instance, all encrypted websites – where the URL displays that little lock icon next to the website’s name – possess an encryption certificate. This certificate helps verify the true identity of a webpage and can be viewed on all desktop computer browsers by simply clicking on the lock icon. It can also be viewed on the Google Chrome browser for iOS by simply tapping on the lock icon.

But there is no way to view the certificate using the iPhone’s Safari – meaning if a webpage appears suspicious, there is no way to verify its authenticity.

Everyone knows where you stand

A major iPhone innovation – building in high-quality front and back cameras and photo-sharing capabilities – has completely changed how people capture and display their memories and helped drive the rise of social media. But the iPhone’s camera captures more than just selfies.

The iPhone defaults to including in each image file metadata with the date, time and location details – latitude and longitude – where the photo was taken. Most users remain unaware that most online services include this information in posted pictures – making it possible for anyone to know exactly where the photograph someone just shared was taken. A criminal could use that information to find out when a person is not at home and burglarize the place then, as the infamous Hollywood “Bling Ring” did with social media posts.
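
Seeing this for yourself is straightforward. Here is a minimal sketch, assuming the Pillow imaging library and a hypothetical photo file, that reads the GPS block an iPhone embeds by default and converts it to decimal latitude and longitude:

```python
from PIL import Image  # Pillow

GPS_IFD = 0x8825  # EXIF pointer to the GPS information block

def gps_coordinates(path):
    """Return (latitude, longitude) from a photo's EXIF data,
    or None if the image carries no location metadata."""
    gps = Image.open(path).getexif().get_ifd(GPS_IFD)
    if not gps:
        return None

    def to_degrees(dms, ref):
        degrees, minutes, seconds = (float(v) for v in dms)
        value = degrees + minutes / 60 + seconds / 3600
        return -value if ref in ("S", "W") else value

    # GPS tags 1-4 hold latitude ref/value and longitude ref/value.
    return to_degrees(gps[2], gps[1]), to_degrees(gps[4], gps[3])

print(gps_coordinates("IMG_0001.jpg"))  # hypothetical photo
```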

In the 10 years since the first iPhone arrived, cyberattacks have evolved and the cybersecurity stakes are higher for individuals. The main concern used to be viruses targeting corporate networks; now the biggest problem is attackers targeting users directly using spear-phishing emails and spoofed websites.

Today, unsafe decisions are far easier to make on your phone than on your computer. And more people now use their phones for doing more things than ever before. Making phones slimmer, shinier and sexier is great. But making sure every user can make cybersafe decisions is yet to be “Designed by Apple.” Here’s hoping the next iPhone does that.

You are the key to keeping your computer safe [Published in CNN]

Yet another major cyber extortion campaign is wrecking computer networks all over the world — and we need to start thinking about cyber safety more comprehensively and include users in solving the problem. This effort must begin with an assessment of user risk, not just technical risk — because all signs indicate that there is still worse to come.

This latest attack is a metastasized version of WannaCry, the ransomware attack that within a few days in May ripped through over 3 million computers in 150 countries, spreading faster than most highly contagious diseases ever have. Ransomware is a class of malware that encrypts all the files on a computer, releasing them only when a ransom is paid to the hacker who holds the decryption key. This new attack has already hurt major organizations in Russia, Europe, Asia and North America. Thankfully, its spread also appears to have stalled, partly because many users installed software patches after WannaCry.

Every major cyberattack in recent years has led to even bigger attacks, because hackers learn and evolve. The Sony Pictures email breach was followed by the Ashley Madison leak of user credentials, which eventually spurred the billion-plus username heist at Yahoo. Likewise, last October we saw the record-setting distributed denial-of-service attack that hijacked thousands of Internet-of-Things (IoT) devices — everything from Internet-enabled home cameras and DVRs to baby monitors — and targeted Dyn, a leading provider of the Internet’s directory services, making large parts of the Internet inaccessible all over the world. This is a likely reason we are seeing ransomware attacks, which for most of 2016 targeted individual organizations, now turning into distributed global attacks that spread from organization to organization.
In addition, hacking is becoming a more readily available and easily deployed weapon. Hackers are in high demand; some even work for nation-states with deep pockets. Thanks to this, the malware being developed and available for rent has become even more sophisticated. Exacerbating this problem is the fact that hackers are also co-opting tools with enormous capabilities developed by the NSA and CIA, which are available on the dark web.
Ultimately, we have yet to make a dent in tackling the single biggest problem in cybersecurity: users. From not installing software patches or conducting routine updates to clicking on malicious hyperlinks and attachments in spear-phishing emails, and using weak passwords on devices, regular people — all of us computer users — continue to be the conduits for most cyberattacks.
And so far, the only proactive approach against this continues to be security awareness training, which people usually only get when they’re affiliated with larger organizations. Not only does this leave everyone else, from small businesses to senior citizens, vulnerable, but all signs also point to a limited impact of such training even within big organizations.
As a case in point, a team at the technology website Gizmodo recently sent an obviously fake spear-phishing email to 15 people associated with the Trump administration. According to Gizmodo, more than half the targeted individuals clicked on the hyperlink in the email, with former FBI Director James Comey and Donald Trump adviser Newt Gingrich even responding to the email. Keep in mind that hackers need just one errant user to inadvertently click on a spear-phishing email or leave a computer unpatched to start a cyberpandemic.
In spite of this staggering vulnerability, almost all policymaking ignores end-users and abandons the opportunity to transform users from conduits into a defense mechanism against cyberattacks.
Take President Trump’s Executive Order on Cybersecurity, released last month. The order continues many Obama-era policies: it creates accountability by holding the heads of federal agencies responsible for breaches, enforces discipline by mandating adherence to the National Institute of Standards and Technology (NIST) cybersafety framework, and requires that all federal agencies provide a risk assessment report detailing their cybersecurity readiness.
But nowhere is there any mention of end-users, the single biggest risk to cybersecurity. Ignoring the end-user is akin to putting better locks on a safe, while forgetting all the many people who have its keys. In other words, it is a huge problem.
One potential solution is for the federal government to task NIST, which is currently focused on technical risk, with building a user cyber risk assessment framework that takes into account how people work, what devices they use, and their thoughts, habits and online behaviors. This would not only help us understand weaknesses among users but also help accurately assess risk and build safeguards.
We also need to start a nationwide campaign to inform and educate everyone, from homemakers to high school students, about cyber risks. With malware capable of being ported into work from shared computers and even from travelers using free Wi-Fi terminals, it is imperative that everyone is secure. Such a campaign requires federal and state funding and local support that is aimed at empowering users to develop good cyber hygiene. This includes teaching people how to be safe online, how to use online privacy protection tools, and how to monitor, detect and report cyberattacks.
Finally, we have to better engage users in reporting attacks. In 2014, I called for a centralized cyberbreach reporting system, much like the 911 emergency system we already have. Such a system is all the more important now with ransomware-type attacks, because it can serve as a point of contact for desperate victims who have merely hours to comply or face losing all their data. Additionally, with attacks spreading like contagions, having incoming reports helps law enforcement track them and could serve as an early warning system to caution others to take steps. But this again requires federal, state and local government to recognize the need for enlisting and protecting people.

WannaCry was only stopped because a vigilant researcher discovered a critical weakness in its code. The current attack, at least for now, has also stalled because the email account used by the hackers to manage their ransom demands was blocked by the email provider. But it’s almost certain that a hacker somewhere is already developing a workaround. The next attack will surely be bigger, bolder and more consequential. And the next time we might not be so lucky.

A version of this post appeared on CNN: https://www.cnn.com/2017/06/28/opinions/ransomware-attacks-will-get-worse-vishwanath-opinion/index.html

How to protect the Internet [Published in CNN]

FBI Director James Comey was on Capitol Hill with other intelligence leaders on Tuesday to testify over the various email breaches of Democratic National Committee computers during the election campaign.

It is unclear whether the testimony — or the analysis by 17 civilian and military intelligence agencies that all point to the role of Russia in the breaches — will persuade President-elect Donald Trump to accept the evidence of what happened. But whatever his response, we can only hope that he accepts another troubling reality: The Internet as we know it is under threat.
Last month, the Department of Homeland Security and the FBI released their joint analysis report with technical details on how the hackers carried out the email breaches. All the attacks used spear-phishing, in which the hackers sent a legitimate-looking email with hyperlinks or attachments that, when clicked, launched malware that opened a backdoor into the victim’s computer or directed the victim to a fake web page that solicited login and password credentials.
I began writing in the media about the dangers of spear-phishing two years ago, in the immediate aftermath of the infamous Sony Pictures breach. My goal was to shift attention away from the salacious inside-Hollywood gossip in the emails released by the hackers and encourage people to instead focus on how the hackers had accomplished this. I also hoped to draw more attention to what it meant for the future of the Internet, in the hope that policymakers and organizations would wake up to this threat.
Then came news of the massive OPM breach, the Excellus BlueCross BlueShield data breach, infrastructure hacks by Iranians, and a steady, continuing stream of ransomware attacks — all using spear-phishing. Each attack seemed to inspire more, upping the ante. And with the DNC hack, spear-phishing has now struck a blow at the very foundation of our democratic process: our system of fair elections.
But worse is likely to come.
The reality is that spear-phishing attacks are easy to craft and many users, even after being trained in spotting spear-phishing, continue to fall victim. A case in point was a simulated spear-phishing attack my research team recently conducted over three days in a large financial company whose chief technical officer opted to participate in our study. That simulated attack netted close to a 55% success rate (in which someone actually clicked on the “malicious” hyperlink in the email) within a few hours of the attack. Reminders sent on the next two days kept netting more victims, with the overall attack realizing close to an 80% victimization rate.
This was despite the fact that the employees targeted were trained in spotting such attacks and almost all reported high confidence in their ability to detect a phish. Such findings are common in cybersecurity research, and particularly sobering because, as we witnessed during the elections, a single victim can cause a massive compromise.
However, suggesting that people stop using email for anything important — as the President-elect did — is not a solution. Not when the very engine of all communication today is email. It is the reason the Internet became popular. Instead of discouraging the use of email, the Trump administration should instead work on helping to save the Internet by encouraging people to take steps to limit the threat of spear-phishing.
A good start would be encouraging more to be done to plug the single biggest weakness in email: its system of authentication using logins and passwords. Email and most online services use this simple mechanism to assess who should be given access to an account. But these credentials are also easily stolen and reused, which is why they are the primary target of most hackers.
A technical solution for this already exists in two-factor authentication, or 2FA, in which an additional numeric code is sent to a separate device possessed by the user and must be entered along with the login and password. Because login credentials by themselves are of little value without this additional PIN, 2FA makes it much harder for hackers to compromise an account. When properly enabled and used, 2FA works like an automobile seat belt; it cannot guarantee complete safety, but it sure can significantly minimize risk.
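
The mechanism behind those codes is simple and open. Here is a minimal sketch of the standard time-based one-time password algorithm (RFC 6238) that authenticator apps implement, using only Python’s standard library and a hypothetical shared secret:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    """Compute a time-based one-time code: an HMAC-SHA1 of the
    current 30-second counter, dynamically truncated to 6 digits.
    The server derives the same code, so a stolen password alone
    is not enough to log in."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret of the kind encoded in a setup QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```
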
The overall adoption of 2FA, however, remains low because many organizations still don’t provide support for it, partly because there is no requirement to do so. Among the organizations that do, it is often left to users — many of whom are oblivious to 2FA — to enable it.
Here is where and how President Trump can help. Just as federal law requires automobiles to be fitted with seat belts, Trump could push for legislation that makes it mandatory for all online services that acquire user credentials to support 2FA. Furthermore, legislation could be enacted so that organizations that enable it by default receive liability protections from any breaches that occur due to credential theft. This would incentivize organizations to adopt the technology and share the responsibility for its use with consumers.
Finally, user education is necessary. While many users are unaware of 2FA, others use it on a few services, often only when they initially log on to a system. 2FA has limited efficacy if a hacker accesses an authenticated computer with already open, active sessions. And it has even less value if only some users adopt it, while others don’t, providing an alternative conduit for the hacker. At the user level, the other biggest complaint remains the few seconds 2FA use adds to the start of an online session. But, as we have now realized, these few seconds could dictate the future of an organization — or influence the outcome of an election. All netizens must, therefore, be educated on the proper use of 2FA, and this requires federal grants for research and training.
Rather than building real walls to protect against imaginary threats, President Trump should work to build a virtual wall to protect our Internet. Support for 2FA is a necessary building block to make that a reality.

A version of this post appeared here: https://www.cnn.com/2017/01/10/opinions/how-to-protect-the-internet-vishwanath/index.html

“Spear-Phishing” Roiled the Presidential Campaign—Here’s How to Protect Yourself [Published in The Conversation]

Never in American political history have hacked and stolen emails played such a central role in a presidential campaign. But hackers are likely to target you as well—though perhaps with smaller repercussions for the world as a whole.

Every one of October’s surprises, from the leaks of Clinton campaign chairman John Podesta’s purported emails to those of the Democratic National Committee, was achieved using a surprisingly simple email deception technique called “spearphishing.” The same technique was used to attack Hillary Clinton’s private email server; two spearphishing messages were found on it.

Many people know that the term “spearphishing” typically describes emails trying to get someone to click on a link to, say, their online bank account—but actually sending them to a lookalike site where their login information can be stolen. Other attacks hide malicious software (or “malware”) within links or attachments that, when clicked, give the attacker control of the system or even an entire corporate network.
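
One classic giveaway is an anchor whose visible text shows one address while the underlying link points somewhere else. Here is a small Python sketch, using the standard library’s HTML parser on a hypothetical snippet, that flags exactly that mismatch:

```python
from html.parser import HTMLParser

class LinkAuditor(HTMLParser):
    """Flag anchors whose displayed text looks like a URL but
    differs from the actual href target."""
    def __init__(self):
        super().__init__()
        self.href, self.text = None, ""

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.href = dict(attrs).get("href", "")
            self.text = ""

    def handle_data(self, data):
        if self.href is not None:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.href is not None:
            shown = self.text.strip()
            if shown.startswith("http") and shown not in self.href:
                print(f"mismatch: shows {shown!r}, goes to {self.href!r}")
            self.href = None

# A hypothetical snippet of a phishing email body.
LinkAuditor().feed('<a href="http://evil.example/login">https://www.paypal.com</a>')
```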

But despite years of national efforts to promote cybersecurity, spearphishing remains fruitful: People are still the weakest links in cybersecurity defenses. There are, however, simple ways we can all step up to protect our own information—whether we’re central to presidential politics or regular people.

A MASSIVELY COMPLEX PROBLEM

In general, people are fairly aware of the potential for cyberattacks. Some are even good at spotting them. In fact, both Podesta and Clinton were suspicious of the phishing emails they received. Before clicking, Podesta even asked his tech-support staff if a link was legitimate. Those experts should have known how to spot a phishing attack, but failed: They told him to click on the malicious link.

The problem is not lack of awareness or even knowledge, though some of us need more of that too. It’s actually one of complexity.

Researchers think of computer users as working on an email while focused solely on a computer screen. But reality paints a different picture. Today, people use a variety of internet-connected gadgets and apps, with myriad prompts, feeds and notifications, all vying for their attention.

Estimates are that the average person checks his smartphone 80 to 100 times each day. This does not even include desktop and laptop computer screens, tablets or smartwatches. People routinely use all of those devices as well, checking, recording, reviewing and responding to requests in the office and on the go—walking, talking and even driving.

These interactions present a near-constant stream of information and requests. The user typically feels that he has just seconds to consider each—even though any one of them could define the fate of an entire organization or a political campaign.

A VERY SIMPLE SOLUTION

In the face of all this complexity, the best answer is a very simple one: a checklist.

Atul Gawande, in his book “The Checklist Manifesto: How to Get Things Right,” details the importance of checklists in highly specialized fields. These are work environments where success depends on coordination between a number of trained professionals—airline pilots, surgical teams, construction engineers. Often, trained people remember to do complex tasks, like medical professionals performing difficult surgical procedures, but forget to do simple things, like washing hands prior to surgery.

Much like in cybersecurity, the problem is one of complexity and human error, with potentially severe consequences. For instance, one in every 200 medical errors involves performing the wrong procedure, or even working on the wrong patient. That’s where a checklist comes in, reminding the medical staff to reconfirm the patient’s name and visibly mark the correct surgical site.

In much the same way, a checklist could help us routinize the minimum actions necessary for achieving cybersafety. With this goal in mind, here is a checklist of five best practices that could help protect us online.

FIVE STEPS TO MORE SECURE ONLINE OPERATIONS

  1. Enable two-factor authentication (2FA). Most major online services, from Amazon to Apple, today support 2FA. When it’s set up, the system asks for a login and password just like usual—but then sends a unique numeric code to another device, using text message, email or a specialized app. Without access to that other device, the login is refused. That makes it much harder to hack into someone’s account—but users have to enable it themselves.
  2. Encrypt your internet traffic. A virtual private network (VPN) service encrypts digital communications, making it hard for hackers to intercept them. Everyone should subscribe to a VPN service, some of which are free, and use it whenever connecting a device to a public or unknown Wi-Fi network.
  3. Tighten up your password security. This is easier than it sounds, and the danger is real: Hackers often steal a login and password from one site and try to use it on others. To make it simple to generate—and remember—long, strong and unique passwords, subscribe to a reputable password manager that suggests strong passwords and stores them in an encrypted file on your own computer. (For a sense of what such generated passwords look like, see the sketch after this list.)
  4. Monitor your devices’ behind-the-scenes activities. Many computer programs and mobile apps keep running even when they are not actively in use. Most computers, phones and tablets have a built-in activity monitor that lets users see the device’s memory use and network traffic in real time. You can see which apps are sending and receiving internet data, for example. If you see something happening that shouldn’t be, the activity monitor will also let you close the offending program completely.
  5. Never open hyperlinks or attachments in any emails that are suspicious. Even when they appear to come from a friend or coworker, use extreme caution—their email address might have been compromised by someone trying to attack you. When in doubt, call the person or company directly to check first—and do so using an official number, never the phone number listed in the email.
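
To illustrate item 3, here is a minimal sketch of the kind of password a manager generates for you, using Python’s cryptographically secure secrets module:

```python
import secrets
import string

def strong_password(length=20):
    """Generate a long, random, unique password of the kind a
    reputable password manager would suggest and store for you."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(strong_password())
```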

Even using this checklist can’t guarantee stopping every attack or preventing every breach. But following these steps will make it significantly harder for hackers to succeed. And it will help us all develop security consciousness and ultimately better cyberhygiene. Our leaders could certainly use the help.

A version of this post appeared here and in other leading media: https://www.scientificamerican.com/article/ldquo-spear-phishing-rdquo-roiled-the-presidential-campaign-mdash-here-rsquo-s-how-to-protect-yourself/#

Cyber security: It’s not just about Yahoo [Published in CNN]

It’s not surprising that some Yahoo users have decided to sue the company for negligence over a 2014 breach that was only recently discovered and announced. But before we blame Yahoo for this, we need to understand how hackers accomplish such breaches — and what all of us should be doing better to prevent them.

The reality is that all of us — individuals, businesses and policy makers — have a role to play in keeping us safe, whether it be engaging in better cyber safety, or passing regulations that ensure the public is notified of breaches so we can respond in a timely fashion.
Hackers wage a sort of asymmetric warfare. Instead of trying to circumvent sophisticated organizational firewalls, most go after soft targets — the employees and customers of the organization. Many use simple spear phishing attacks with hyperlinks that launch spoofed web pages soliciting user logins, or hide malware in email attachments to gain backdoor access into the organization’s networks. Such attacks are enormously successful, securing victimization rates of close to 30% in some cases — a sobering statistic when one considers that a hacker needs just one victim. Other attacks — such as the hack into the U.K. ISP TalkTalk — exploit weaknesses in web forms and access the databases that run behind web pages. Such access is even easier when the hacker has procured the website administrator’s login through spear phishing.
Making all this worse is that hackers using stolen credentials are hard to detect, because they appear similar to an employee making legitimate requests. Many lurk in computer networks for months, move laterally looking for weaknesses, and slowly exfiltrate data to avoid detection. This is likely why it took Yahoo almost two years to discover the breach. And Yahoo is not alone. Unfortunately, it takes on average 200 or more days to discover a breach. And although companies are spending more on technological firewalls and employee training, most breaches continue to be discovered only accidentally, when an employee chances on something amiss or, as in the Yahoo case, when the hacker puts the data up for sale.
This gap also makes remediation challenging because knowledge of the breach comes long after the information has been used to victimize users. Meanwhile, organizations are reluctant to admit to breaches because of the negative media attention they receive.
And here’s where Yahoo could have done more: there is speculation that it may have learned of the breach in early August. If we hope to stop this, we must begin by realizing that no single company or technological “silver bullet” can stop a breach. Instead, all of us must work together.
What does that mean in practice?
First, the organizations that are the targets of attacks must take the lead by adopting best practices that make it harder for a hacker to enter and move within networks. This need not mean complex, expensive fixes, but simple strategies like the ones outlined by the NSA in its recently published Methodology for Adversary Obstruction. These include policies such as ensuring that administrator accounts do not have Internet access, so that sensitive credentials cannot be stolen through spear phishing; using different passwords for users and administrators, so hackers cannot move across the network; enforcing multi-factor authentication, in which an additional PIN sent to another device must also be entered; and “salting” (adding random data) and hashing all stored credentials, so that passwords are extremely hard to crack even when stolen.
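
To make the last of those practices concrete, here is a minimal Python sketch, using only the standard library, of salted credential storage: each password gets a unique random salt and a deliberately slow hash, so a stolen database is expensive to crack.

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # deliberately slow, to frustrate offline cracking

def store_credential(password):
    """Salt and hash a password; store the (salt, digest) pair
    instead of the password itself."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_credential(password, salt, stored_digest):
    """Recompute the salted hash and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, stored_digest)

salt, digest = store_credential("correct horse battery staple")
assert verify_credential("correct horse battery staple", salt, digest)
```
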
But it is not just up to organizations — every one of us needs to do our bit. This must start with checking whether our credentials have been compromised on sites like “Have I Been Pwned,” which log stolen credentials, and changing those logins right away. Each of us must work on developing better cyber safety: learning to deal with spear phishing emails; enabling multi-factor authentication where available; using strong, unique passwords and password-storage vaults; and learning to actively monitor our own devices for suspicious activity so that compromises cannot make their way from our devices to our organization’s.
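
Checking for compromised passwords can even be automated safely. The companion Pwned Passwords service exposes a k-anonymity range API: only the first five characters of your password’s SHA-1 hash ever leave your machine. A short sketch:

```python
import hashlib
import urllib.request

def times_pwned(password):
    """Return how many times a password appears in known breaches,
    per the Pwned Passwords range API (k-anonymity: only a 5-char
    hash prefix is sent over the network)."""
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-sketch"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        for line in resp.read().decode().splitlines():
            candidate, count = line.split(":")
            if candidate == suffix:
                return int(count)
    return 0

print(times_pwned("password123"))
```
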
Finally, policy makers must focus on improving the breach remediation processes. While most states have passed breach notification laws, policies on breach remediation remain open-ended. Simply notifying people or asking victims to change their passwords, as Yahoo just did, or providing people credit protection as Target and others did, does little to contain the damage to one’s reputation stemming from an information leak. Imagine the stigma if the health records of the 80 million victims of the Anthem breach were ever released. Once released, this information becomes available on searchable databases, victimizing people forever. Here, the EU has been more proactive and ruled in favor of a right to be forgotten online, making it possible for EU citizens to prohibit their personal information from appearing on online searches. Perhaps it’s time we considered this, too.
At the end of the day, hackers are not after LinkedIn’s or Yahoo’s data — they are after ours. That means it is our collective responsibility to help protect that data.

A version of this post appeared on CNN: https://www.cnn.com/2016/09/30/opinions/yahoo-data-breach-vishwanath/