Sara Snakenbroek

Surveillance capitalism and social media companies: has the time for regulation finally come?

Even as knowledge of surveillance capitalism spreads, notably through Shoshana Zuboff's book and documentaries, such disguised processes escape the awareness of most internet users, who made up 59% of the global population in 2020. [1] The number of active social media users grew to over 3.6 billion people in 2020 and is likely to reach 4.41 billion by 2025. [2] Yet although this transnational issue affects about half of the world's population, only a few are aware of the information big tech companies collect and the influence they consequently exert.


Surveillance capitalism appears ethically problematic because this business model deprives internet users of the choice of opting out of the collection and analysis of their personal data. Users who object risk being excluded from social media and becoming digitally isolated from what we may call the digital society. The extraction of people's personal and behavioural data allows social media platforms to customise their content and display only information that interests users. [3] Such practices, while capturing users' attention and thereby profiting these companies, may foster conspiracy theories by narrowing the diversity of information and opinions people are exposed to, and many are manipulated into believing them. Shoshana Zuboff calls this the 'disinformation age', [4] in which fake news spreads much faster than genuine information.


Content moderation aside, the social media companies' business model also profits from selling individuals' data to other businesses and advertising companies. Facebook uses the behavioural surplus collected through digital traces, such as what a person might like, read or post on social media, without their awareness. [5] Such data, which are considered different from personal data, pertain to how a person might react or engage with content, their social media behaviour, and their searches and online browsing. [6] These data are analysed to predict future behaviours or actions [7] and are sold to businesses for advertising purposes and microtargeting, better known as customised advertising. [8] Indeed, analysis shows that people bearing certain characteristics are more likely to take certain actions, such as buying particular products. [9] Advertising companies use this information to target social media users with advertisements designed specifically for them.


Data analysis can also reveal individuals' feelings, such as distress or anxiety, and tells companies what to publish on someone's feed, and when, to their commercial benefit. [10] This use of behavioural data amounts to manipulation and the abuse of individuals' vulnerability for business purposes, [11] enticing them to buy products, sometimes by taking advantage of their emotional hardship. Thus, social media users have become the "product" and are no longer the customer; the actual customers are businesses and advertising companies. [12]


However, as the issues surrounding personal data gain recognition, regulation has begun at the national and European levels to protect internet and social media users. Social media companies have had to adapt to such regulation and have developed uses of behavioural and biometric data that do not enable the identification of individuals. Such residual data can be found in pictures posted on Facebook, where facial features, facial muscle movements, blinks, eye movements, age, ethnicity and gender, as well as emotions or intentions, become the subject of interest. [13] These unique characteristics stored in biometric data are sold to businesses developing facial recognition systems to improve their efficacy; those systems can then be sold on, for example for military operations or to authoritarian regimes. [14] Such use of data poses serious ethical as well as legal problems, as it undermines human rights and democratic governance without the users' awareness or consent.


The Facebook/Cambridge Analytica scandal of 2018 is a significant case study illustrating surveillance capitalism and its impact on society. Christopher Wylie, a former employee of Cambridge Analytica, a political consulting firm, revealed how the personal and behavioural data of millions of people, collected by Facebook, had been used for political advertising, notably in the 2016 US presidential election and the Brexit referendum. [15]


The pro-Brexit campaign spent 40% of its budget, some £2.7 million, on advertising. [16] Such advertising, for political rather than commercial purposes, works in the same way as explained above. Advertisements particularly targeted individuals likely to change their opinions in the elections or the referendum. [17] Through 'carefully crafted messaging', [18] this aggressive, customised advertising, conveying fake news about the EU and immigration, modified individuals' behaviour and influenced these supposedly democratic political processes.


The Facebook/Cambridge Analytica manipulation of users' personal data demonstrates particularly well 'surveillance capitalism's tactical requirements… designed to produce ignorance through secrecy and the careful evasion of individual awareness'. [19] Similarly, the social experiment behind the 'I voted' button introduced on Facebook led to 340,000 additional votes in the 2010 US midterm election, a number large enough to affect an election's outcome. [20] Facebook's involvement and influence in politics, and in society more generally, through the collection, analysis and sale of its users' data not only endangers democracy but also challenges people's autonomy.


Beyond these ethical issues, the collection and processing of personal data when using social media platforms constitutes a breach of every individual's privacy. Personal, behavioural and biometric data expose individuals' identities as well as their intimate lives. As mentioned above, the data surplus such companies use comes from direct actions such as likes, posts or reads on social media, but also from the amount of attention users spend on a particular piece of information. [21] This data collection allows social media companies to know people's gender, age and location, [22] but also intimate details such as their sexual orientation and political opinions, which they might not wish to disclose. [23]


Social media companies may know more about a particular individual than their own family or partner, for example from the likes a person leaves on social media. [24] Regarding the use of data for targeted advertising, those who argue against regulation usually defend their position with the maxim 'nothing to hide, nothing to fear'. [25] Some individuals might even enjoy this type of advertisement, as the products or services suggested are likely to be of interest. However, individuals are most probably unaware of the manipulation behind a marketing technique based on using and analysing their personal and behavioural data.


As a recognised human right on both the international and national scene, [26] privacy must be protected. Not only must people's privacy be safeguarded, but people must also be protected from companies using their own data to manipulate them at moments of vulnerability. This is a strong ground for regulation, and indeed more and more legislation is being put in place to address the collection of personal data on the internet and social media.


Another reason for social media regulation lies in the influence such platforms have on politics, public governance and society, especially in countries where social media constitute citizens' main source of information. The Facebook/Cambridge Analytica scandal and the 'I voted' button campaign demonstrated how Facebook has become a political player, [27] conveying political content and advertisements, influencing user opinion and swaying elections in one direction or another. Clearly, the spread of fake and misleading news endangers democracy, and social media should be regulated to protect the equilibrium of fair democratic debate and political processes.


Social media have also been used to spread hate messages and propaganda, reinforcing the feeling of belonging to groups with certain ideologies or even inviting people to join terrorist networks. Such use can lead to terrible consequences and breaches of human rights. The facial recognition systems used in China to aggravate the oppression of the Uighurs are one example. Another is the misuse of Facebook to incite violence in Myanmar, which ultimately led to genocide, causing the murder and forced migration of the Muslim Rohingya ethnic minority. [28]


Aside from political motivations, such manipulation targets any social media user, who is left at the mercy of businesses seeking to influence behaviour for commercial purposes. [29] Social media platforms have thus become dangerous fora when misused for harmful political or commercial ends. Regulation must therefore be enforced to tackle both the extraction of users' personal, behavioural and biometric data and the content displayed on social media platforms.


As some argue, regulating the content of social media, particularly fake or deceptive news, risks limiting freedom of speech, a human right widely protected by both national and international legislation. In the UK, for example, the Human Rights Act 1998 protects the right to freedom of expression. [30] The risk raised by those advocating against governmental regulation of social media content pertains to this right to free speech, prized by Western democracies. In China, North Korea and other countries, by contrast, freedom of speech and expression is censored and restricted. [31]


Regulating social media content and restricting individuals' ability to express themselves when they publish false or misleading information would amount to a form of censorship and could undermine democracy as well as human rights. This argument supports the self-regulation of big tech companies as a way to protect the right to free speech from governmental restrictions.


The line between freedom of expression and the misuse of that right is thin. [32] The current use of social media platforms to spread fake news that manipulates people appears more dangerous than limiting the right to free speech in strictly defined instances of false information or hateful content. A form of regulation is certainly needed, but it can only be achieved while protecting the fundamental right to freedom of expression.




Endnotes


[1] 'Global Digital Population as of October 2020' (2020) accessed 3 January 2021.

[3] Shoshana Zuboff, The Age of Surveillance Capitalism, The Fight for a Human Future at the New Frontier of Power (Profile Books Ltd, 2019) iii (The Definition), 506

[4] ibid 507

[5] ibid 68

[6] 'Professor Shoshana Zuboff on Surveillance Capitalism Q&A' (Findingctrl.nesta.org.uk, 2021) accessed 6 January 2021.

[7] Zuboff (n3) 8

[8] ibid 64

[9] ibid

[10] Shoshana Zuboff on Surveillance Capitalism, VPRO Documentary (20/12/2019) (Youtube.com, 2019) accessed 3 January 2021.

[11] Zuboff (n3) 92

[12] Isabelle Landreau, Gérard Peliks, Nicolas Binctin and Virginie Pez-Pérard, ‘Rapport: Mes Data Sont à Moi, Pour Une Patrimonialité Des Données Personnelles’ (GenerationLibre, 2018) 118

[13] Zuboff (n3) 282

[14] Zuboff (n10)

[15] Rosalie Chan, 'The Cambridge Analytica Whistleblower Explains How the Firm Used Facebook Data to Sway Elections' (Business Insider, 2019) accessed 3 January 2021.

[16] Deutsche Welle, 'What Role Did Cambridge Analytica Play in the Brexit Vote?' (DW.com, 27 March 2018) accessed 3 January 2021.

[17] Zuboff (n3) 278

[18] ibid 277

[19] ibid 280

[20] Sarah Joseph, 'Why the Business Model of Social Media Giants Like Facebook Is Incompatible with Human Rights' (The Conversation, 2018) accessed 3 January 2021.

[21] Zuboff (n3) 68

[22] ibid 92

[23] Zuboff (n10)

[24] Joseph (n20)

[25] 'Privacy and Surveillance Capitalism: Debord’s Reality in 2020' (NICKELED AND DIMED, 2020) accessed 5 January 2021.

[26] Universal Declaration of Human Rights (1948) art. 12

[27] Joseph (n20)

[28] 'Facebook Admits It Was Used To Incite Violence In Myanmar (Published 2018)' (Nytimes.com, 2018) accessed 4 January 2021.

[29] (n6)

[30] Human Rights Act 1998, Sch 1, art. 10

[31] ibid

[32] ibid
