Month: September 2018

Why do so many people fall for fake profiles online? [Published in The Conversation]

The first step in conducting online propaganda efforts and misinformation campaigns is almost always a fake social media profile. Phony profiles for nonexistent people worm their way into the social networks of real people, where they can spread their falsehoods. But neither social media companies nor technological innovations offer reliable ways to identify and remove social media profiles that don’t represent actual, authentic people.

It might sound positive that over six months in late 2017 and early 2018, Facebook detected and suspended some 1.3 billion fake accounts. But an estimated 3 to 4 percent of the accounts that remain – approximately 66 million to 88 million profiles – are also fake but haven’t yet been detected. Likewise, an estimated 9 to 15 percent of Twitter’s 336 million accounts are fake.

Fake profiles aren’t just on Facebook and Twitter, and they’re not only targeting people in the U.S. In December 2017, German intelligence officials warned that Chinese agents were using fake LinkedIn profiles to target more than 10,000 German government employees. And in mid-August, the Israeli military reported that Hamas was using fake profiles on Facebook, Instagram and WhatsApp to entrap Israeli soldiers into downloading malicious software.

Although social media companies have begun hiring more people and using artificial intelligence to detect fake profiles, that won’t be enough to review every profile in time to stop their misuse. As my research explores, the problem isn’t actually that people – and algorithms – create fake profiles online. What’s really wrong is that other people fall for them.

My research into why so many users have trouble spotting fake profiles has identified some ways people could get better at identifying phony accounts – and highlights some places technology companies could help.

People fall for fake profiles

To understand social media users’ thought processes, I created fake profiles on Facebook and sent friend requests to 141 students at a large university. Each of the fake profiles varied in some way – such as having many or few fake friends, or whether there was a profile photo. The idea was to figure out whether one type of profile or another was most successful in getting accepted as a connection by real users – and then to survey the hoodwinked people to find out how it happened.

I found that only 30 percent of the targeted people rejected the request from a fake person. When surveyed two weeks later, 52 percent of users were still considering approving the request. Nearly one in five – 18 percent – had accepted the request right away. Of those who accepted it, 15 percent had responded to inquiries from the fake profile with personal information such as their home address, their student identification number, and their availability for a part-time internship. Another 40 percent of them were considering revealing private data.

But why?

When I interviewed the real people my fake profiles had targeted, the most important thing I found was that users fundamentally believe there is a person behind each profile. People told me they had thought the profile belonged to someone they knew, or possibly someone a friend knew. Not one person ever suspected the profile was a complete fabrication, expressly created to deceive them. Mistakenly thinking each friend request has come from a real person may cause people to accept friend requests simply to be polite and not hurt someone else’s feelings – even if they’re not sure they know the person.

In addition, almost all social media users decide whether to accept a connection based on a few key elements in the requester’s profile – chiefly how many friends the person has and how many mutual connections there are. I found that people who already have many connections are even less discerning, approving almost every request that comes in. So even a brand-new profile nets some victims. And with every new connection, the fake profile appears more realistic and has more mutual friends with others. This cascade of victims is how fake profiles acquire legitimacy and become widespread.

The spread can be fast because most social media sites are designed to keep users coming back, habitually checking notifications and responding immediately to connection requests. That tendency is even more pronounced on smartphones – which may explain why users accessing social media on smartphones are significantly more likely to accept fake profile requests than desktop or laptop computer users.

Illusions of safety

And users may think they’re safer than they actually are, wrongly assuming that a platform’s privacy settings will protect them from fake profiles. For instance, many users told me they believe that Facebook’s controls for granting differing access to friends versus others also protect them from fakers. Likewise, many LinkedIn users told me they believe that because they post only professional information there, the potential consequences of accepting rogue connections are limited.

But that’s a flawed assumption: Hackers can use any information gleaned from any platform. For instance, simply knowing from LinkedIn that someone works at a particular business helps them craft emails to that person or others at the company. Furthermore, users who carelessly accept requests assuming their privacy controls protect them imperil other connections who haven’t set their controls as high.

Seeking solutions

Using social media safely means learning how to spot fake profiles and use privacy settings properly. There are numerous online sources for advice – including platforms’ own help pages. But too often it’s left to users to inform themselves, usually after they’ve already become victims of a social media scam – which always begins with accepting a fake request.

Adults should learn – and teach children – how to examine connection requests carefully in order to protect their devices, profiles and posts from prying eyes, and themselves from being maliciously manipulated. That includes reviewing connection requests during distraction-free periods of the day and using a computer rather than a smartphone to check out potential connections. It also involves identifying which of their actual friends tend to accept almost every friend request from anyone, making them weak links in the social network.

These are areas where social media companies can help. They’re already creating mechanisms to track app usage and to pause notifications, helping people avoid being inundated or needing to constantly react. That’s a good start – but they could do more.

For instance, social media sites could show users indicators of how many of their connections are inactive for long periods, helping people purge their friend networks from time to time. They could also show which connections have suddenly acquired large numbers of friends, and which ones accept unusually high percentages of friend requests.
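
For the technically curious, here is a toy sketch of what such indicators might look like, computed over a hypothetical per-account record. Every field name and threshold below is invented for illustration; none of this reflects any platform’s actual data model.

```python
# A toy sketch of the suggested indicators: long-inactive accounts, accounts
# that accept nearly every friend request, and sudden spikes in new friends.
# All fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    days_since_last_activity: int
    requests_received: int
    requests_accepted: int
    friends_added_last_week: int

def risk_flags(a: Account) -> list:
    flags = []
    if a.days_since_last_activity > 180:
        flags.append("inactive for 6+ months")
    if a.requests_received >= 20 and a.requests_accepted / a.requests_received > 0.9:
        flags.append("accepts nearly every friend request")
    if a.friends_added_last_week > 100:
        flags.append("sudden spike in new connections")
    return flags

demo = Account("demo_user", days_since_last_activity=200,
               requests_received=50, requests_accepted=49,
               friends_added_last_week=150)
print(risk_flags(demo))
# ['inactive for 6+ months', 'accepts nearly every friend request',
#  'sudden spike in new connections']
```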

Social media companies need to do more to help users identify and report potentially fake profiles, augmenting their own staff and automated efforts. Social media sites also need to communicate with each other. Many fake profiles are reused across different social networks. But if Facebook blocks a faker, Twitter may not. When one site blocks a profile, it should send key information – such as the profile’s name and email address – to other platforms so they can investigate and potentially block the fraud there too.
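
As a sketch of what that cross-platform exchange might carry, consider a minimal shared record. Every field here is an assumption about what “key information” could include; no such standard currently exists.

```python
# A hypothetical record one platform might send to others after blocking a
# fake profile. All field names are invented for illustration.
import json
import time

blocked_profile_report = {
    "reported_by": "facebook.com",         # platform that blocked the profile
    "profile_name": "Jane Example",        # display name on the blocked account
    "email": "jane.example@invalid.test",  # address used at registration
    "reason": "fabricated identity",
    "reported_at": int(time.time()),       # Unix timestamp of the block
}

print(json.dumps(blocked_profile_report, indent=2))
```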

[A version of this article appeared on The Conversation http://theconversation.com/why-do-so-many-people-fall-for-fake-profiles-online-102754.]

Spearphishing has become even more dangerous [Published in CNN]

The continued prosecution of “All the President’s Men” does little to stop the Russians from attempting to influence America’s upcoming midterm elections. And reports from Missouri to California suggest they are already looking for our cyber weaknesses to exploit.

Chief among these: spear phishing—emails containing hyperlinks to fake websites—that the Russians used to hack into the DNC emails and set in motion their 2016 influence campaign.

After two years of congressional hearings, indictments and investigations, spear phishing remains the most common attack used by hackers – and the Russians are still trying to use it against us.

This is because, in the ensuing time, spear phishing has become even more virulent, thanks to the availability of sophisticated malware, some of it stolen from intelligence agencies; troves of people’s personal information from previous breaches; and ongoing developments in machine learning that can deep-dive into this data and craft highly effective attacks.

Just last week, Microsoft blocked six fake websites, set up by the same Russian intelligence unit responsible for the 2016 DNC hack, that were likely to be used for spear phishing the US Senate.

But the Internet is vast, and many more fundamental weaknesses remain available to exploit.

Take the URLs with which we identify websites. Thanks to Internationalized Domain Names (IDNs), which allow websites to be registered in languages other than English, many fake websites used for spear phishing are registered using homoglyphs – characters from other scripts that look like English characters. For instance, a fake domain for Amazon.com could be registered by replacing the English “a” or “o” with their Cyrillic equivalents. Such URLs are hard for people to discern visually, and even email scanning programs – trained to flag words like “password” that are common in phishing emails, such as the one the Russians used in 2016 to hack into John Podesta’s emails – can be tricked. And while many browsers prevent URLs with homoglyphs from being displayed, some, like Firefox, still expect users to alter their browser settings for protection.
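
A short Python sketch makes the trick concrete: the two domain strings below look identical on screen, yet one contains a Cyrillic letter and resolves to a completely different address. The Amazon example follows the text; the ASCII check at the end is a simplified detection heuristic, not production code.

```python
# Demonstrates a homoglyph domain: Cyrillic U+0430 looks like Latin "a".
import unicodedata

real = "amazon.com"
fake = "am\u0430zon.com"   # third character is CYRILLIC SMALL LETTER A

print(real == fake)          # False, despite the two looking identical
print(fake.encode("idna"))   # the "xn--..." punycode form a resolver sees

# Simplified heuristic: flag any non-ASCII character in a domain name.
for ch in fake:
    if ord(ch) > 127:
        print(f"suspicious character {ch!r}: {unicodedata.name(ch)}")
```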

Making things worse is the proliferation of Certification Authorities (CAs), the organizations issuing the digital certificates that make the lock icon and HTTPS appear next to a website’s name in browsers. While users are taught to trust these symbols, an estimated one in four phishing websites actually has an HTTPS certificate. This is because some CAs have been hacked, meaning there are many rogue certificates out there, while others have doled out free certificates to just about anyone. For instance, one CA last year issued certificates to 15,000 websites with names containing some combination of the word “PayPal” – all for spear phishing.
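
To see why the lock icon alone proves little, consider that anyone’s machine can inspect who issued a site’s certificate. Below is a minimal sketch using only the Python standard library; the hostname is a placeholder, and note that a phishing site holding a CA-issued certificate would pass this handshake just as cleanly.

```python
# Fetch a server's certificate and report who issued it. A valid handshake
# means only that some CA vouched for the name -- not that the site is honest.
import socket
import ssl

def certificate_issuer(hostname: str, port: int = 443) -> dict:
    context = ssl.create_default_context()  # verifies the chain against trusted CAs
    with socket.create_connection((hostname, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # "issuer" is a sequence of relative distinguished names; flatten to a dict.
    return dict(rdn[0] for rdn in cert["issuer"])

print(certificate_issuer("example.com"))
# e.g. {'countryName': 'US', 'organizationName': '...', 'commonName': '...'}
```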

Besides these, the problem of phony social media profiles, which the Russians used in 2016 for phishing, trolling and spreading fake news, remains intractable. Just last week, the Israel Defense Forces (IDF) reported a social media phishing campaign in which Hamas used fake profiles on Facebook, Instagram and WhatsApp to lure Israeli troops into downloading malware. Also last week, Facebook, followed by Twitter, blocked profiles linked to Iranian and Russian operatives that were being used to spread misinformation.

These attacks, however, reveal a critical weakness of influence campaigns: by design, they use overlapping profiles across multiple platforms. Yet today, social media organizations police their networks internally and keep information within their own “walled gardens.”

A better solution would therefore be to host data on suspect profiles and pages in a unified, open-source repository – one that accepts inputs from other media organizations, security organizations, even users who find things awry. Such an approach would help detect and track coordinated social media influence campaigns – which would be of enormous value to law enforcement and to media organizations big and small, many of which are targeted by the same profiles.

A platform for this could be the Certificate Transparency framework, in which digital certificates are openly logged and verified, and which has been adopted by many popular browsers and operating systems. For now, the framework only audits digital certificates, but it could be expanded to encompass domain name auditing and social media pages.
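
Certificate Transparency logs are already publicly searchable, which hints at how such an open repository could work in practice. The sketch below queries crt.sh, one public CT search service, for certificates issued to PayPal-lookalike names; the URL format and JSON field names reflect crt.sh’s interface at the time of writing and should be treated as assumptions to verify.

```python
# Search public Certificate Transparency logs, via crt.sh, for certificates
# whose names match a pattern -- e.g., lookalike "paypal" domains.
import json
import urllib.parse
import urllib.request

def ct_search(pattern: str) -> list:
    """Return CT log entries matching a SQL-style pattern (% is a wildcard)."""
    url = "https://crt.sh/?q=" + urllib.parse.quote(pattern) + "&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        return json.load(resp)

# List a few certificates issued for names containing "paypal" -- the kind
# of pattern behind the 15,000 lookalike certificates mentioned above.
for entry in ct_search("%paypal%")[:10]:
    print(entry["name_value"], "|", entry["issuer_name"])
```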

Finally, we must improve user education. Most users know little about homoglyphs, and even less about how to change their browser settings to guard against them. Furthermore, many users, after being repeatedly trained to look for HTTPS icons on websites, have come to trust them implicitly. Many even mistake these symbols to mean that a website is legitimate. Because even an encrypted site can be fraudulent, users have to be taught to be cautious and to assess factors ranging from the spelling of the domain name, to the quality of information on the website, to its digital certificate and the CA that issued it. Such initiatives must be complemented with better, more uniform browser design, so users do not have to tinker with settings to protect themselves from being phished.

Achieving all this requires leadership, but the White House, which ordinarily would be best positioned to address these issues, recently fired its cybersecurity czar and eliminated the role. And while, according to the GAO, federal agencies have yet to address over a third of its 3,000 cybersecurity recommendations, the President instead talks about developing a Space Force. Last we knew, the Martians hadn’t landed, but the Russians sure are probing our computer systems.

 

*A version of this post was published in CNN: https://www.cnn.com/2018/09/01/opinions/spear-phishing-has-become-even-more-dangerous-opinion-vishwanath/index.html