Regulating the social media | Sunday Observer

26 May, 2019

In the wake of the recent calamity, the Sri Lankan Government was compelled to block a number of social media networks over the past few weeks. These were temporary measures that lasted only a few days. The Chairman of the Telecommunications Regulatory Commission (TRC) revealed that the Government intends to seek legal advice on introducing regulations to curb hate speech on social media.

These extraordinary steps reflect growing global concern, particularly among governments, about the capacity of social networks to fuel violence. According to statistics, there are six million active social media users in Sri Lanka, of whom Facebook users grew to five million in 2018. Incidentally, more women than men between the ages of 18 and 24 are on Facebook.

At present Facebook is at the centre of attention because it stands accused of not doing enough to weed out fake posts and hate speech generated by a small but active number of its users.

The minority

Social media is a recent invention. The two most popular websites, Facebook and YouTube, together with WhatsApp, were founded in 2004, 2005 and 2009 respectively. They may be new, but they are huge: statistics reveal that 2.2 billion people regularly use Facebook, 1.9 billion YouTube and 1.5 billion WhatsApp.

There are many other social media forums, based all over the globe, with different areas of focus, all serving the purpose of ‘social networking’: connecting people so they can express themselves and interact with one another.

The majority of people using social media are decent, intelligent and inspiring. The problem comes with a small minority who spoil it for everyone else.

How can this minority ‘spoil it’ for everyone else? In a number of ways: cyber bullying, revenge porn, trolling, virtual mobbing and posting hate messages. Trolling happens when someone creates conflict on a site by posting controversial or inflammatory messages. Virtual mobbing occurs when a group of individuals uses social media to attack another individual because they oppose that person’s opinions.

In recent years, we have seen conspiracy theories spread on social media platforms, fake Twitter and Facebook accounts stoke religious and social tensions, external actors interfere in elections, and criminals steal troves of personal data.

What are the experiences of other countries in regulating social media? Here are a few examples.

German experience

In Germany, a law came into force in January 2018 that provides for fines of up to €50 million for social media platforms that fail to remove hate speech within 24 hours. “Social networks are no charity organizations that guarantee freedom of speech in their terms of service,” said the Consumer Protection Ministry. Social networks, the ministry said, had to comply with German law and not only with their own rules. “We cannot simply accept the fact that illegal fake news impact our democratic elections or that hate messages poison our public discourse,” the ministry added.

The German law forces major social networks to withhold certain comments or posts from users in the country if they are deemed illegal or offensive. Networks such as Facebook, Twitter or Instagram now face fines if they systematically fail to comply with the 24-hour deadline.

The German law, known as the Network Enforcement Act, is an international test case. However, six months after it came into force, critics argued that, under the pretext of combating ‘fake news’ and ‘hate comments’, a legal framework for wholesale censorship of the Internet had been established. Reports suggest that Google and Facebook have seized on the law to mount a broad assault on freedom of speech and erect a comprehensive regime of censorship.

UK experience

The UK Government has proposed measures to regulate social media companies over harmful content, including ‘substantial’ fines and the power to block services that do not follow the rules. It will run a consultation until July 1 on plans to create a legal ‘duty of care towards users’, overseen by an independent regulator.

At present, when it comes to graphic content, social media largely relies on self-governance. Sites such as YouTube and Facebook have their own rules about what is unacceptable and how users are expected to behave towards one another.

European Union

The EU is considering a clampdown, specifically on terror videos. Social media platforms would face fines if they do not delete extremist content within an hour. The EU also introduced the General Data Protection Regulation (GDPR) which set rules on how companies, including social media platforms, store and use people’s data.


Australia

Australia passed the Sharing of Abhorrent Violent Material Act on April 5, introducing criminal penalties for social media companies, possible jail sentences of up to three years for tech executives, and financial penalties worth up to 10% of a company’s global turnover. It followed the live-streaming of the New Zealand shootings on Facebook.

In 2015, the Enhancing Online Safety Act created an eSafety Commissioner with the power to demand that social media companies take down harassing or abusive posts. Last year, the powers were expanded to include revenge porn.

The eSafety Commissioner’s office can issue companies with 48-hour “take down notices” and fines of up to 525,000 Australian dollars (£285,000). It can also fine individuals up to A$105,000 for posting such content.

Two sides

We have the Government on one side and the social media platforms on the other, both claiming to be stewards of the public interest. If we proceed within this paradigm, three mutually agreeable steps are needed to solve the problem.

First, both parties should agree on how content standards are to be interpreted and operationalised on social media platforms. This has to be done through an inclusive mechanism. Second, both parties should establish a system of public accountability.

Third, both parties should make commitments, and be held jointly accountable, to public goals. The Government should engage the public on what constitutes ‘hate speech’ and ‘fake news’ so that user-flagging is more effective.