As we spend more time online, social media websites have become an integral part of our lives. Given social media's fast-paced nature, one will inevitably come to regret something posted online, be it a poorly chosen phrase or a typo. On Facebook, users can edit their posts freely, at any time. In essence, the edit feature on social media websites allows users to modify the contents of a post after it has been published, for instance to rectify a typo. Twitter users have requested a similar feature for years. In response, Twitter's new CEO Elon Musk rolled out an edit button for paid subscribers in the United States, Canada, Australia and New Zealand, with plans to eventually make the feature free for all users.
With the edit feature's introduction to Twitter, an influential platform that is at times prone to misinformation, it is about time that we evaluate the feature. In the online world, where users act under the cloak of anonymity, discourse can escalate easily and give rise to defamatory statements. Hence, the edit feature is worth evaluating in the context of defamation law. Two scenarios are particularly worth pondering: what happens when a defamatory social media post is edited to become non-defamatory, and, more importantly, what happens when a non-defamatory post is edited to become defamatory?
Defamation on Social Media: A Background
Before discussing the edit feature, it is worth briefly summarising the legal principles of defamation and how they apply in the context of social media posts. For defamation to be established, the claimant must show that:
1. the statement is defamatory, in that it 'lower[s] the claimant in the estimation of right-thinking people generally';
2. the statement refers to the claimant; and
3. the statement has been published to a third party.
These factors vary slightly in the social media context. For instance, emoticons are considered by the court when determining whether a statement is defamatory. When determining the meaning of a post, especially a 'tweet', the 'hypothetical reader' is 'taken to be a reasonable representative of users of Twitter who follow the defendant.' As tweets are limited to 280 characters (previously 140), it is also more fitting to adopt a hyper-impressionistic approach to determining a tweet's meaning. However, for other social media posts with no word or character limit, or posts that share websites or articles, a general impressionistic approach is likely sufficient: posts with no word-count constraints require less 'brevity and concision', so the fluid, hyper-impressionistic approach used for tweets is less suitable.
Under the repetition rule in defamation, a person who republishes a defamatory statement is treated as if they had made the allegation themselves, and is therefore also liable. A retweet has been deemed a publication under English case law, and libel actions against Twitter users who retweeted allegedly defamatory statements have been settled in the UK. In these circumstances, legislation provides that the right of action nonetheless accrues on the date of first publication (i.e., publication of the original tweet), even if the retweet came later. In the US, users and internet service providers (ISPs) who distribute defamatory statements are immunised under s 230(c)(1) of Title 47 of the US Code, although it is noted that, at the time of writing, the US Supreme Court is reviewing the s 230 immunity shield.
There are a variety of defences available to a defendant. Users or ISPs who disseminated the statement innocently have a defence if they can show that they had no knowledge of, or control over, the statement. Conversely, an ISP that received notice of the existence of a defamatory statement but did not remove it cannot rely on the innocent dissemination defence.
With this understanding of how social media defamation works, we now turn to evaluating the edit feature.
Scenario 1: Defamatory Statement Edited Out
If someone posts a defamatory statement on a social media platform like Twitter and then edits the tweet to retract the words, this could help minimise the harm done to the claimant, as the defamatory words are no longer published. Editing out a defamatory statement also indicates an apologetic attitude on the defendant's part. If the claimant still decides to launch legal action, the edit, retraction, or apology could be 'outlined in the offer as a step already taken on behalf of the defendant.'
Although the defamatory statement was published, it was published for a relatively short period, so the damage to the claimant's reputation is diminished compared with a post that remains 'permanently' published. This would not necessarily free the defendant from liability, but it at least limits the monetary damages they might have to pay. It is thus arguably a win-win situation for both claimant and defendant.
The edit feature could also be used to insert an apology to the claimant. Offering or making an apology is an effective way of displaying remorse, and it limits the damage done to the claimant's reputation. Additionally, an apology can be a mitigating factor for damages if the claimant decides to pursue a claim in court. Compared with adding tweets to the same thread or publishing a separate tweet, one could argue that inserting an apology into the original tweet (as well as editing out the statement) is more effective in minimising the harm done to the claimant.
This is because separate tweets are only visible to readers who scroll down or click through to the user's profile. One downside to posting an apology, however, is that Twitter's character limit may simply leave insufficient space for the defendant to flesh out their words in full. Even so, an online apology should arguably still be encouraged, to let the public know that the statements were defamatory in nature. If the apology is not published online, and the tweet is neither edited nor deleted, readers may still labour under the misapprehension that the statement is true.
Deleting the tweet is, of course, still the best way of ensuring that the claimant's reputation is not harmed, but the edit feature presents a plausible alternative. A difficulty arises because both Facebook and Twitter show the edit history of posts, so the defamatory statement remains visible to anyone who clicks into the edit history. The edit feature thus becomes a double-edged sword. On one hand, it keeps the defendant accountable: one may argue that a visible edit history provides a cautionary example to the defendant and other users to be careful about what they say online. On the other hand, it does not fully prevent the defamatory statement from spreading. Although the statement was formally retracted, deletion remains the only way to completely erase the tweet from the defendant's profile. Hence, deletion combined with an online apology seems the optimal way of minimising the harm done to the claimant.
Scenario 2: Added Defamatory Statement
When a user posts something 'normal' (i.e. without any defamatory statements) and then edits it to become defamatory, it is trite that the post becomes defamatory content and the user becomes liable. A more problematic situation arises, however, for those who have reposted the original post. Taking Twitter as an example: if someone retweeted the original 'normal' tweet, would they be liable for distributing the subsequently edited, defamatory tweet?
The person who retweeted would be liable for defamation, but they could arguably rely on the innocent dissemination defence. As stated, the defence is available to users or ISPs who had no knowledge of, or control over, the defamatory statement. The defendant could argue that, since they retweeted purely on the basis of the tweet's original contents, they had no idea they were distributing the inserted defamatory statement. If they had no knowing involvement in distributing the statement, the innocent dissemination defence could well apply.
However, this could depend on the defendant's knowledge. A user's retweets or reposts usually appear on their own feed only immediately after reposting. If the statement was edited around 15 to 20 minutes after the repost, then given the fast-paced nature of social media feeds, it is unlikely that the user would see the edited version of the tweet unless they clicked into their own profile, which lists their tweets and retweets. Not everyone scrolls through their own profile, so there remains uncertainty as to how, if at all, the defendant could have had knowledge of the edited tweet. This would rely heavily on the defendant's testimony and on inferences, as we do not know which specific button the defendant pressed or what appeared on their feed. Moreover, social media is designed for posts to be fleeting, so the defendant may find it difficult to recall whether or not they saw the edited version of the tweet.
It is of note that the liability of ISPs remains the same in both scenarios: they are liable for letting the edited defamatory post stay on their site if they had notice of its existence. With notice liability being such an integral consideration in this scenario, one could question the position of the laws of England and Wales on the distribution of defamatory statements as compared with the position in the US. Defamation law ultimately aims to strike a balance between protecting one's reputation from unwarranted harm and freedom of speech. Does notice liability for both users and ISPs, in the words of US law, 'chill the freedom of Internet speech'? Or is it a way to protect the reputations of claimants and to deter online users from abusing their freedom of speech?
While the edit feature provides an alternative way out for defendants in defamation claims, it also creates new problems, both for the original poster and for those who reposted the statement without knowledge. Social media users must be vigilant about what they post and repost online, and defendants should still be encouraged to delete the post and publish an online apology whenever legal action arises. Twitter only allows users to edit posts within the first 30 minutes of posting, which makes it easier for those who retweet to notice the edits; websites that allow unlimited time for edits (including Facebook) should reconsider their policies to limit further legal liability.
Footnotes

1. However, there appears to be a bug whereby the edit feature is unavailable for iOS users. Joseph Allen, 'Facebook's Edit Button Seems to Have Vanished, Leaving Users Frustrated' (Distractify, 23 August 2022) accessed 25 February 2023.
2. Karissa Bell, 'Elon Musk wants to make Twitter's edit button free for everyone, report says' (Engadget, 1 November 2022) accessed 25 February 2023.
3. Ibid.
4. Katie Langin, 'Fake news spreads faster than true news on Twitter—thanks to people, not bots' (Science, 8 March 2018) accessed 25 February 2023.
5. Sim v Stretch [1936] 2 All ER 1237, 1240.
6. Laura Scaife, Handbook of Social Media and the Law (1st edn, Informa Law from Routledge 2014) 147.
7. Lord McAlpine of West Green v Bercow [2013] EWHC 1342 (QB).
8. Ibid.
9. Riley & Another v Heybroek [2020] EWHC 1259 (QB); Monroe v Hopkins [2017] EWHC 433 (QB).
10. Heybroek (n 9).
11. Scaife (n 6) 62.
12. Heybroek (n 9).
13. See for instance Mark Sweney, 'Lord McAlpine settles libel action with Alan Davies over Twitter comment' The Guardian (London, 24 October 2013) accessed 19 February 2023.
14. Defamation Act 2013, s 8(3).
15. Lauren Feiner, 'Supreme Court justices in Google case express hesitation about upending Section 230' CNBC (21 February 2023) accessed 25 February 2023.
16. Scaife (n 6) 149.
17. Godfrey v Demon Internet [1999] 4 All ER 342.
18. Scaife (n 6) 108.
19. Ibid 100.
20. Ibid.
21. Ibid 108.
22. 'Edit your Page post or see its edit history' (Facebook) accessed 26 February 2023; 'This is a test of Twitter's new Edit Tweet feature. This is only a test' (Twitter, 1 September 2022) accessed 26 February 2023.
23. 'Why Twitter Shouldn't Get an Edit Button' (Knapton Wright) accessed 16 February 2023.
24. Bunt v Tilley [2006] EWHC 407 (QB).
25. Robert Ribeiro PJ, 'Defamation on the Internet' (Obligations VII Conference, Hong Kong, 15 July 2014) accessed 17 February 2023.
26. Barrett v Rosenthal 40 Cal 4th 33 (2006).
27. Twitter (n 22).