Are You Guilty Of The Privacy Paradox?
We’re all addicts. We rely so heavily on the apps that fill our home screens that we rarely question how much their developers actually know about us—and who they’re sharing that information with. We know our data is being harvested, but most of us tacitly accept that a world without privacy is more appealing than one without the pleasures of modern technology. This is where the privacy paradox rears its head.
The contradiction is a simple one—the commercial engine that drives Silicon Valley and its global siblings treats the erosion of our privacy as one of its central business models: the more data they have on us, the more advertising they can sell. This is something we intellectually decry: whenever researchers or journalists ask people whether they value their privacy, they almost invariably respond with a resounding “yes”. The idea of losing our privacy smacks of dehumanisation. The so-called privacy paradox emerges, however, when all of us continue to use the very tech services that have been shown to seriously undermine it.
Facebook is the ultimate example of this. When the Cambridge Analytica scandal broke last year, the social media giant was forced to admit it had failed to keep users’ data safe, allowing Cambridge Analytica to harvest the personal information of millions of people without their consent and use it for political advertising. The episode has been described as a watershed moment in the public understanding of personal data, and it precipitated a sharp fall in Facebook’s stock price and calls for tighter regulation of tech companies’ use of data.
And yet, by the start of 2019, Facebook’s daily active users were up globally; average revenue per user was up 19 percent year on year, and overall revenue for the last quarter of 2018 was 30 percent higher than in the same quarter of 2017. If all those members of the public who insist their privacy is an essential human right followed through on their claim, the company would be in freefall. But even in countries such as the UK, where the Cambridge Analytica scandal made front-page news, activity rates barely changed. A proven lack of data privacy—and one that has caused major political upheaval around the globe—apparently makes no difference to our behaviour.
We are all walking data points
Facebook, perhaps, is one thing, as we generally choose what content we put there. But not all of us realise that our data is being harvested throughout the day. “Today, a person has two or three dozen sensors on them. A modern-day car has 500 sensors, and there are 600 sensors in the modern home. All of these things are generating information,” says Jonathan Ramsay, a cyber security officer at Secureworks in the United States. According to Ramsay, data breaches have become both more common and more severe in the past three years. The World Economic Forum’s Global Risks Report 2018 concluded that cyber-attacks once considered large-scale are today seen as normal. Earlier this week, WhatsApp, the messaging app used by over a billion people, was also reported to have suffered a data breach.
So how can the general public be protected? Beyond idle dinner party conversations about uncannily perceptive online advertising, most of us remain largely apathetic about the reality of data profiling, which raises the question of who should be responsible for protecting our data. Do social media giants and other communication platforms now have a duty of care to treat their users’ personal data with respect, and caution?
“I think, philosophically, people should have as much control as possible over their data—this personal information should be in the hands of the people themselves,” says Gen.T honouree Stephanie Lee Sy, the founder and CEO of Thinking Machines, one of the top data science teams in the Philippines. “We are only now discovering the social cost of storing every data point that can be captured by a device, but we’re still not focusing enough on that. At the moment, these tech companies are only looking at storage costs, but they should consider other, moral factors too.”
Sy suggests a compromise, which she describes as data half-life. Tech companies currently store all the data they can on each individual, but we have yet to seriously ask whether they should. “In my opinion, data should have a half-life, and then companies should get rid of it,” Sy says. “Tech companies are trying to develop valuable data, but you don’t need months and months of unaggregated data to get a good profile on someone—nowhere near it, in fact. I would suggest getting deep data on someone for the first 24 hours, then aggregated per month, and using that going forward—it is a big enough picture for advertisers but is a huge step up in terms of data privacy. I think tech companies should shift away from capturing everything into capturing smart things.”
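Sy describes the idea rather than specifying it, but a retention policy along these lines is easy to picture: keep full-detail events only while they are fresh, and collapse anything older into a coarse per-user, per-month total. The sketch below is purely illustrative—the event fields, the 24-hour cutoff and the monthly granularity are assumptions drawn from her description, not any company's actual pipeline.

```python
from datetime import datetime, timedelta, timezone

RAW_RETENTION = timedelta(hours=24)  # keep full-detail events for 24 hours

def apply_half_life(events, now):
    """Split events into recent raw events and coarse monthly aggregates.

    `events` is a list of dicts like {"user": ..., "ts": datetime, "pages": int}.
    Events older than RAW_RETENTION are collapsed into per-user monthly
    totals; their fine-grained detail is discarded.
    """
    raw, aggregates = [], {}
    for e in events:
        if now - e["ts"] <= RAW_RETENTION:
            raw.append(e)  # recent: keep every detail
        else:
            # older: retain only a (user, month) -> total count
            key = (e["user"], e["ts"].strftime("%Y-%m"))
            aggregates[key] = aggregates.get(key, 0) + e["pages"]
    return raw, aggregates

now = datetime(2019, 5, 20, tzinfo=timezone.utc)
events = [
    {"user": "alice", "ts": now - timedelta(hours=2), "pages": 5},
    {"user": "alice", "ts": now - timedelta(days=10), "pages": 7},
    {"user": "alice", "ts": now - timedelta(days=12), "pages": 3},
]
raw, agg = apply_half_life(events, now)
# Only the 2-hour-old event survives in full; the two older
# events collapse into a single monthly total for alice.
```

The privacy gain comes from what is thrown away: once an event is aggregated, no one can recover which pages were visited or when, yet an advertiser still sees roughly how active the user was that month.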
Equally, governments are going to have to enforce far stricter data privacy rules than they currently do. The European Union is leading the user privacy discussion with its General Data Protection Regulation (GDPR), which has built a strong legal foundation for securing end-user data in Europe. In the US, data privacy laws vary depending on the sector, state or data type. Recently, California implemented a new law that governs data security at the state level. Technology leaders, meanwhile, are pushing for federal privacy laws, and are beginning to see privacy as a human right.
We need to get to a point where tech companies shift away from capturing everything towards capturing smart things
— Stephanie Lee Sy
“The EU is being very savvy,” says Sy. “In the big picture, this is a very sensible move. As far as policy goes, [GDPR] is restrictive, and has to go through a few more iterations until it works smoothly, as it’s largely been made by people who focus on policy rather than tech, but I’m glad it exists.”
“Asia, meanwhile, is all over the map,” she continues. “The Philippines’ data policy is close to GDPR, but in the rest of Asia it doesn’t exist. This means AI companies that have been launched in Asia have an edge over American or European competitors because loose regulations mean they can grow faster. This will hurt the consumer in the long run, but we don’t know exactly how yet.”
Chinese consumers have long been particularly vulnerable, but data privacy laws are now being rapidly updated by Beijing. Last year, a survey showed that 85 percent of people in China had suffered some sort of data leak, including stolen bank account details and addresses. Until now, China has taken little action, but as of this year, the Chinese government has been keen to publicise the issue.
In many ways, it plays into Beijing’s hands—for party officials, public anger is a useful tool to pressurise the country’s tech giants into giving up more control of data to the State. Beijing is now rapidly passing a series of regulations, which play to China’s very specific view of “data governance”—a concept that includes data protection, the need to make data-driven tech industries in China globally competitive, and the desire to control the ownership and cross-border transfer of data.
“This is a super exciting and really scary time,” says Sy. “Privacy is hugely important, but I worry that decisions of this sort remain in the hands of people who don’t know much about tech—as we know, governments are still led by people of a different generation who largely don’t understand the industry. More people urgently need to study computer science, even if they go into other fields later on. We have to face the fact that, from now on, tech will underlie everything, and if our policy makers aren’t tech savvy, the public will pay the price.”