As with everything new, the early days of the internet saw no regulation of the digital sphere. Some twenty years later, a combination of self-regulation by internet and social media companies and some governmental and European regulation can be observed, yet domestic and supranational legislation still fails to address the issue. Big tech companies claim to self-regulate their fora effectively. However, such self-regulation does not touch the core source of their profit - their business model - which they wish to keep secret.
Indeed, what social media companies fear most is the introduction of regulations outlawing the exploitation of behavioural data and making ‘markets that trade in human futures’ illegal. Examples of current self-regulation measures include transparency reports on the removal of illegal content, or consent requests buried in lengthy terms and conditions designed to discourage users from reading them. For example, YouTube, considered one of ‘the most powerful radicalising instrument[s] of the 21st century’, employs 10,000 people to moderate illegal content. Considering the current scope of surveillance capitalism and its ‘profit-making business nature’, self-regulation is clearly insufficient, even when coupled with governmental and European legislation.
One avenue would be to use competition and antitrust laws to prevent big companies, such as Facebook or Google, from monopolising the internet by buying their competitors (YouTube, Instagram and WhatsApp have all already been acquired) and thereby vastly expanding their data sources. However, considering the number of users and the volume of data each company single-handedly possesses, such measures alone are unlikely to provide sufficient protection against the problematic business model of such companies.
As this issue largely transcends national borders, regulation must be adopted at the international or at least the regional (e.g., European) level, possibly with extra-territorial provisions following the example of the GDPR. As it might prove difficult for a critical mass of states to reach consensus on a minimum threshold or standard of regulation, domestic or regional regulation appears to be the more likely option.
However, it is precisely this divergence and incoherence between national laws that enables social media companies to flourish economically and avoid restrictions on their business models. This is why a strong international (or at least regional) regulatory system on the use of behavioural data is highly desirable to accompany existing legislation protecting personal data and internet content.
To achieve a healthier digital environment, the marketing techniques of microtargeting and customised advertising must be strictly regulated and monitored. Microtargeting raises several issues because:
- ‘it exploits personal data’ without consent,
- ‘it conceals its true nature’, making advertisements appear as regular content,
- it exploits its targets’ vulnerability, as it contains pieces of disinformation used to manipulate the reader, while the privately targeted nature of such content means it cannot easily be challenged in the public sphere.
Short of a strict ban, measures such as greater transparency from companies on the use of behavioural data, or requesting users’ consent for such use, are the bare minimum. However, these must be designed to be brief, clear and easy for the lay user to read and understand, and transparent enough to reveal the true nature of the data’s use. Nonetheless, before strict regulation or even a ban of such ‘profit-making’ business models can be achieved, governments and citizens must be made aware of these manipulative processes, for example through reliable information resources. Similarly, a public observatory could be established to raise awareness of, and transparency about, how social media function.
Surveillance-based business models have exploited behavioural data ‘as a free commodity’, with no ownership recognised, in contrast to personal data. Regulating such models would therefore allow social media users’ ownership rights to be redressed. To this end, the establishment of an ombudsman, independent and operating at the supranational level, could provide a fair and free complaint system for individuals against social media companies.
Combining such a body with an international and independent regulator would ensure that those companies truly respect the framework in place to control their business model, while at the same time providing a redress system accessible to individuals. Such a regulatory body would play a role similar to that of Ofcom in the UK, but at an international level. It could establish a code of practice regulating the use of behavioural data and a framework for assessing compliance, oversee their implementation, and take action in case of breach.
As fines do not appear effective against such hegemonic companies, other types of sanctions should be considered where social media companies breach their obligations under such a new regulatory system. The academic community could also be engaged, for example through research groups investigating the various profit-making methods of such companies, to prevent them from finding loopholes in the legislation.
All these ways of ensuring better accountability and regulation of internet companies’ business models, together with the introduction of strict laws on the use of behavioural data and microtargeting, could enable social media users to escape behavioural modification and automation, while also curbing such companies’ profits.
Joanna Kavenna, ‘Shoshana Zuboff: “Surveillance Capitalism Is an Assault on Human Autonomy”’ (The Guardian, 2019) accessed 8 January 2021.
‘Social Media: How Do Other Governments Regulate It?’ (BBC News, 2020) accessed 6 January 2021.
Sarah Joseph, ‘Why the Business Model of Social Media Giants Like Facebook Is Incompatible with Human Rights’ (The Conversation, 2018) accessed 3 January 2021.
Katie Jones, ‘The Big Five: Largest Acquisitions by Tech Company’ (Visual Capitalist, 2019) accessed 2 January 2021.
Joseph (n 3).
Laura Scaife, ‘The Effective Regulation of Social Media’ (PhD Thesis, University of Kent 2018) 14.
Jonathan Heawood, ‘Pseudo-Public Political Speech: Democratic Implications of the Cambridge Analytica Scandal’ (2018) 23(4) Information Polity 431.
John Samples, ‘Why the Government Should Not Regulate Content Moderation of Social Media’ (Cato Institute, 2019) accessed 2 January 2021.
Jane Andrew, ‘The General Data Protection Regulation in the Age of Surveillance Capitalism’ (2019) Journal of Business Ethics 10.
Peter Walker, ‘Peers Call for Tougher Regulation of Digital and Social Media in UK’ (The Guardian, 2020) accessed 8 January 2021.
House of Commons Library, Briefing Paper 8743 (26 February 2020) 13.
‘Why Should We Regulate Social Media Giants? Because It’s 2019, Prime Minister Trudeau’ (The Conversation, 2019) accessed 8 January 2021.