Marcus Rashford, a professional footballer and activist recently awarded an MBE for his work fighting food poverty, has become the latest high-profile victim of online racial abuse in UK sport. The messages, which have not been made public, were sent to Rashford on Instagram. The incident has reignited the ongoing conversation about social media regulation and what can be done to stamp out online abuse.
Rashford took to Twitter, explaining that such messages illustrate “humanity and social media at its worst.” Yet, in today’s society, this has unfortunately become commonplace. In the past few weeks alone, Reece James, Romaine Sawyers, Axel Tuanzebe and Anthony Martial have all been targeted on social media.
Facebook (which owns Instagram) issued a statement in response, and Greater Manchester Police posted a tweet condemning the actions.
Almost two years on, the UK is still considering a bill first proposed in April 2019. It is clear that the UK is moving more slowly than other countries around the world, which have increasingly imposed and updated strict legislation to regulate social networks.
Let’s take a look at how social media regulation has been tackled around the world.
Social media regulation in Germany
In Germany, the Network Enforcement Act (“the Act”) (Netzwerkdurchsetzungsgesetz) was introduced in January 2018. The Act, also known as the Facebook Act, is aimed at fighting hate crime, fake news and other unlawful content on social networks.
The Act binds operators of social networks to a number of obligations, including the implementation of an easily recognisable procedure for reporting criminally punishable content. Social networks must then take notice of the reported content immediately and must take down or block access to manifestly unlawful content within 24 hours of receiving a complaint. Other criminal content must be removed or blocked within seven days of receiving a complaint. Alternatively, social networks may refer content to a “recognised institution of regulated self-governance” on the understanding that they will accept the decision of that institution. Notably, users must be informed of all decisions taken in response to their complaints and provided with justification.
Social media networks that fail to follow these obligations commit a regulatory offence, punishable with a fine of up to €5m against the person responsible for the complaints management system. The fine against the company itself can be up to €50m. A fine may also be imposed if a social network does not comply with its reporting duties. In July 2019, Facebook was fined €2m for violating the provisions of the Act, after the Federal Office of Justice found that its reporting form was “too hidden” and that Facebook had therefore reported a much lower number of complaints than other social networks.
However, the Act has faced criticism for its scope. As Germany utilises a system of civil law, the scope of the Act is limited to 21 provisions of the German Penal Code, ranging from insult to malicious gossip and defamation. Critics argue that this approach imposes no new laws, but merely groups existing offences together within one new piece of legislation, and speculate that it does not go far enough.
Social media regulation in Australia
Australia has introduced a number of pieces of legislation governing social media. The Enhancing Online Safety Act (“EOSA”), introduced in 2015, has the objectives of promoting online safety, preventing online harm and protecting Australians online. Initially designed to protect the safety of children, the EOSA was amended in 2017 to protect all Australians.
Social networks are split into two categories within the EOSA: Tier 1, networks which have “opted in”; and Tier 2, networks declared by the Minister for Communications. Currently, Tier 1 networks include Snapchat and Twitter, whilst Tier 2 networks include YouTube, Facebook and Instagram.
The EOSA established an eSafety Commission as an independent statutory office to investigate and act on complaints. The civil penalties scheme within the EOSA provides the eSafety Commission with a range of powers, including:
- issuing a formal warning;
- giving a remedial direction;
- issuing an infringement notice;
- accepting an enforceable undertaking; and
- seeking an injunction or civil penalty order in Court.
Current legislation in the UK
Currently, the UK has a number of pieces of legislation which govern communications online, including, but not limited to:
- Protection from Harassment Act 1997, S.4 (Fear of violence);
- Offences Against the Person Act 1861, S.16 (Threats to kill);
- The Malicious Communications Act 1988, S.1 (Sending communications with intent to cause distress or anxiety); and
- The Communications Act 2003, S.127 (Improper use of public electronic communications network).
The Communications Act provides that “a person is guilty of an offence if he sends by means of a public electronic communications network a message or other matter that is grossly offensive or of an indecent, obscene or menacing character; or causes any such message or matter to be so sent.” Notably, in Chambers v DPP, the Divisional Court held that messages sent via Twitter are accessible to all who have internet access and are therefore sent via a public electronic communications network. In order to prove guilt, the defendant must be shown to have intended the message to be, or been aware that it was, grossly offensive, indecent or menacing.
Likewise, the Malicious Communications Act deals with the sending to another of an electronic communication which is indecent, grossly offensive, false or conveys a threat, provided there is an intention to cause distress or anxiety to the recipient. Notably, there is no legal requirement for the communication to reach the intended recipient.
Although UK law allows users to report illegal content to the police, there is currently no legislation specifically regulating social media platforms. However, one key piece of legislation has been in progress for quite some time, drawing heavy criticism for the long delays in its implementation.
The Online Harms Bill (“the Bill”), first proposed by Theresa May’s government in April 2019, sets out strict guidelines governing the removal of illegal content such as terrorist material and media that promotes suicide. Social networking sites would have to obey these rules or face being blocked in the UK.
Regarding the Bill, Matt Hancock stated, “[w]e want to make the UK the safest place in the world to be online and having listened to the views of parents, communities and industry, we are delivering on the ambitions set out in our Internet Safety Strategy.”
Ofcom has been announced as the regulator under the Bill and will have the power to levy fines of up to £18m or 10% of a company’s annual global turnover, whichever is higher. For example, Twitter could face fines of up to £345.9m for serious breaches of the Bill, once implemented. Furthermore, Ofcom will have the power to block non-compliant services from being accessed in the UK.
Although the Department for Digital, Culture, Media and Sport (“DCMS”) has stated the legislation will be ready for this parliamentary session, the DCMS Minister, Caroline Dinenage, said she could not commit to bringing it before Parliament next year. Lord Puttnam has subsequently suggested that the Bill may not come into effect until 2023 or 2024.
Time will tell when the Online Harms Bill may form part of UK legislation. What is clear is that the systems currently in place are not adequate and need to be reformed. As other countries around the world amend legislation that has been in place for a number of years, we have to ask why the UK has placed such a critical piece of legislation on the back burner.
For a look at how the GameStop saga raised similar questions about social media regulation, see our earlier article.
Footnotes:
- The Communications Act 2003, S.127
- Chambers v DPP [2012] EWHC 2157 (Admin)
- The Malicious Communications Act 1988, S.1
- As per Twitter’s Fiscal Year 2019 Annual Report.