FB to hire local fact-checkers to stem fake news | Sunday Observer


20 May, 2018

Facebook, the popular social media platform, plans to work with independent third-party fact-checkers in Sri Lanka to stem the distribution of false news, while improving its product through technologies such as artificial intelligence, computer vision and machine learning.

In an exclusive interview with the Sunday Observer last week, Facebook’s Director of Public Policy for India, South and Central Asia, Ankhi Das said the firm is identifying partners in Sri Lanka who can join as fact-checkers after obtaining the necessary certification from the non-partisan International Fact-Checking Network, a unit of the US-based Poynter Institute.

“We will work with these groups to start addressing these at product level so that you can report fake news and there will be third party fact checkers who will check that reported content.

“Thereafter, we can actively introduce product flows in Sri Lanka where a piece of false news is down ranked and does not get any distribution. If it doesn’t get any distribution, it will not then exist on the platform and therefore, will not go viral,” the Public Policy Director, who paid a three-day visit to Sri Lanka from India, said.
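The report → fact-check → down-rank flow Das describes could be sketched roughly as follows. This is a purely illustrative toy: all names and the demotion factor are invented here, and Facebook's actual ranking system is proprietary and far more complex.

```python
# Illustrative sketch only: hypothetical names and numbers, not Facebook's
# real ranking system.
from dataclasses import dataclass

DEMOTION_FACTOR = 0.2  # assumed penalty applied once fact-checkers rate a post false


@dataclass
class Post:
    post_id: str
    base_score: float          # engagement-based ranking score
    rated_false: bool = False  # set after a third-party fact-check


def ranking_score(post: Post) -> float:
    """Down-rank posts that independent fact-checkers rated false."""
    if post.rated_false:
        return post.base_score * DEMOTION_FACTOR
    return post.base_score


def build_feed(posts: list[Post]) -> list[str]:
    """Order a feed by score, so demoted posts get little distribution."""
    return [p.post_id for p in sorted(posts, key=ranking_score, reverse=True)]


feed = build_feed([
    Post("genuine", base_score=0.6),
    Post("viral_hoax", base_score=0.9, rated_false=True),  # demoted to 0.18
])
# The genuine post now ranks above the higher-engagement hoax.
```

The key point in the quote is that a down-ranked post is not deleted; it simply loses distribution, so it cannot go viral.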

Last month in India, Facebook partnered with a fact-checking organisation called Boom to help the social network fight the spread of fake news in the state of Karnataka ahead of its elections. Since the changes, the company claims it has reduced the distribution of false news by as much as 80%.

In March this year, the Government, which imposed a temporary ban on Facebook, alleged that the platform had facilitated the spread of false news, which in turn fuelled the escalation of violence in the Kandy district.

Responding to the concern, Das said Facebook's roadmap also includes actively working to reduce the spread of misinformation on its platform using novel technologies such as artificial intelligence.

“Users can give feedback on false news, which basically becomes a signal for our review teams to analyse and build tools which can identify a particular type of speech as hate speech or a particular type of news as false news. When all of that kicks in, both at a product level and a third-party fact-checking level, lots of the events you saw playing out in March this year will not recur,” Das emphasised.

In a recent post, Facebook stated that artificial intelligence, though promising, is still years away from being effective for all kinds of bad content, because context is important. In the meantime, the platform noted that it is investing in technology to increase its accuracy across new languages.

“That’s why we have people still reviewing reports. And more generally, the technology needs large amounts of training data to recognise meaningful patterns of behaviour, which we often lack in less widely used languages or for cases that are not often reported. It’s why we can typically do more in English as it is the biggest data set we have on Facebook,” the platform said.

It is learnt that Facebook AI Research is working on an area called multi-lingual embeddings as a potential way to address the language challenge.
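Multilingual embeddings map text from different languages into one shared vector space, so that labelled training data in a well-resourced language like English can help classify content in under-resourced languages. The toy sketch below is purely hypothetical: the vectors, the Sinhala post and the labels are all invented for illustration, and real systems learn these embeddings from large corpora.

```python
# Toy illustration of the multilingual-embedding idea; every vector and
# label here is made up for the example.
import math

# Hypothetical shared embedding space: posts from different languages
# land near each other when they mean similar things.
EMBEDDINGS = {
    ("en", "hoax alert"):    [0.9, 0.1],
    ("en", "weather today"): [0.1, 0.9],
    ("si", "post_a"):        [0.85, 0.15],  # pretend Sinhala post, close to "hoax alert"
}

# Labels exist only for the English training examples.
LABELS = {
    ("en", "hoax alert"):    "false_news",
    ("en", "weather today"): "benign",
}


def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def classify(key):
    """Label a post in any language via its nearest labelled English neighbour."""
    vec = EMBEDDINGS[key]
    best = max(LABELS, key=lambda k: cosine(vec, EMBEDDINGS[k]))
    return LABELS[best]


label = classify(("si", "post_a"))  # the Sinhala post inherits the English label
```

Because the unlabelled post sits near a labelled English example in the shared space, the English training data effectively transfers to the other language, which is why the approach is attractive where training data is scarce.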

“And it’s why we sometimes may ask people for feedback if posts contain certain types of content, to encourage people to flag it for review. And it’s why reports that come from people who use Facebook are so important – so please keep them coming. Because by working together we can help make Facebook safer for everyone,” the social media platform, which boasts 2.2 billion users worldwide, highlighted.