
Facebook vows to stamp out evil from Lanka’s timeline

3 June, 2018

Popular social media network Facebook, which recently attracted a barrage of criticism following a bout of anti-minority violence in Sri Lanka's Central Province, has announced plans to step up efforts to quell the spread of misinformation and hate speech. In a wide-ranging exclusive interview with the Sunday Observer a fortnight ago, Facebook's Director of Public Policy for India and South and Central Asia, Ankhi Das, said she is hopeful that a host of planned measures will 'substantially reduce' incendiary messages on the platform.

According to Das, new initiatives in the pipeline include rolling out digital literacy programs throughout the island to address data privacy concerns, and refining the platform's current Artificial Intelligence tools with the help of Sri Lankan universities to tackle hate speech. The Public Policy Director, who oversees the company's relationships with policymakers, elected officials, Government agencies and NGOs in India and the South and Central Asian countries, also divulged plans to enhance the capacity of Sinhalese-language content reviewers, and said the company intends to hire a full-time Public Policy Manager dedicated to Sri Lanka soon.

Excerpts from the interview:

Q: What does it mean to head Public Policy in South Asia, a region where politics is so fractured, where communalism is sometimes normalised, where violence in mainstream politics is a given, and where misogyny and sexism are rife? How does Facebook locate itself as a pro-social, gender-sensitive, pro-democracy actor, in line with Silicon Valley ideals?

A: Our guiding principles are the community standards, and irrespective of where we are located, that is what we live by. Our community standards pivot on the following principles: our company's mission is to give maximum voice to maximum people and to build meaningful communities. Implicit in that is a respect and regard for human rights principles and free expression. We further reinforce this in the way we have crafted the community standards. There is no place for hate speech on our platform.

We spent a lot of time recently talking to civil society in Sri Lanka, where a lot of the issues you have raised came up: hate speech, credible violence, credible threats of physical bodily harm, and the entire phenomenon of gendered violence which women, minority groups and the LGBT community face both online and offline. In South Asia, many of these cleavages exist historically, so we apply our community standards across all of those vectors. We recently released very detailed explanatory implementation standards on how we enforce against hate speech, harassment, bullying and credible threats on our platform. There are reporting buttons on every piece of content on Facebook. In addition to that, our directional road map is to work very actively with Artificial Intelligence tools to prevent or reduce the spread of misinformation and fake news on our platform.

Q: Going back to the incidents that took place in Sri Lanka in March this year, there are widespread accusations that incendiary material shared on your platform facilitated organised mob violence against a minority community. What is Facebook doing to prevent or curtail hate speech and the spread of misinformation?

A: Today, if you toggle on a post you want to report on Facebook, you get a pop-up which says 'Give feedback', with a number of options. One of the options is to report something as false news, and when that tag is applied, it becomes a signal for our product teams to analyse and build tools to identify that particular type of speech as hate speech, or that particular type of news as false news.

Another thing on our road map, which we will be announcing later in the year, is that we will be working with third-party fact checkers to reduce the spread of misinformation. Newsrooms in media institutions have fact checkers, but we are not in that business since we are just a platform. We don't have that expertise.

So, we are going to work with independent third-party fact checkers who operate on the principles the Poynter Institute has provided, which are the gold standard for fact checking. We are currently in the process of identifying partners in Sri Lanka who can come on board with our third-party fact checking partner network. We have already announced this in India, and we will do it in Sri Lanka as well when we come back later in the year.

We will work with these groups to start addressing both of these at the product level, so that you can report false news and third-party fact checkers can check the reported content. Thereafter, we can actively introduce product flows in Sri Lanka where a piece of false news is down-ranked so that it does not get any distribution. When it does not get any distribution, it effectively will not exist on the platform and therefore does not get any virality. So, once all of that kicks in, both at the product remedy level and at the partnership level with the third-party fact checkers, a lot of the events you saw playing out in March will perhaps not recur.
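In rough outline, the flow Das describes (user reports feeding third-party fact checkers, whose verdicts demote content in ranking) could be sketched as below. The class, function names and demotion factor are illustrative assumptions, not Facebook's actual product code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    rank_score: float = 1.0      # baseline distribution weight
    reports: int = 0             # user "false news" reports
    verdict: str = "unverified"  # third-party fact-checker verdict

def report_false_news(post: Post) -> None:
    """A user report becomes a signal for review, per the interview."""
    post.reports += 1

def apply_fact_check(post: Post, verdict: str) -> None:
    """A third-party fact checker labels the reported content."""
    if verdict not in ("unverified", "true", "false"):
        raise ValueError(verdict)
    post.verdict = verdict

def rerank(post: Post) -> float:
    """Down-rank confirmed false news so it gets little distribution."""
    if post.verdict == "false":
        post.rank_score *= 0.1   # hypothetical demotion factor
    return post.rank_score

# Usage: a reported post is fact-checked as false, then demoted.
p = Post("p1", "incendiary rumour")
report_false_news(p)
apply_fact_check(p, "false")
print(rerank(p))                 # 0.1 -> sharply reduced virality
```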

Q: In March, it was reported that Facebook removed more than 100 messages that provoked violence only after the Sri Lankan Government raised the matter. Until then, the Facebook review team had not found them worthy of being taken down, despite numerous user requests. Why does this happen?

A: Let me set the context. We live in extremely volatile times where, when a real-world event happens, it spreads. There are two ways in which content gets reported to Facebook. First, users report content to us all the time; it gets reviewed and action is taken. Second, under the operating laws globally, different countries have different authorised agencies with the appropriate legal authority to send requests to Facebook. These requests are reviewed for violation of community standards, for violation of international standards, and for legal sufficiency – our legal teams review whether it is the right agency sending the request – and that content gets actioned.

We just released our Transparency Report online, where we have given a fairly detailed breakdown of, for example, how much hate speech-related or terrorism-related content was actioned on the platform. This is something we do on a daily basis.

There is a difference in the quality of the content reported to us prior to the event in March. Sometimes the reports are accurate, and sometimes there are no violations, and those we ignore. But we can broadly tell you that in the pre-March stage, we actioned almost 90% of the user-reported content. We took it down.

We have been in regular touch with Sri Lankan agencies like SL CERT and the TRCSL. We have a three-pronged test for evaluating those requests: legal sufficiency, violation of our community standards, and international standards such as Human Rights standards. We evaluate against these parameters and take appropriate action.
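Schematically, the three-pronged test she outlines amounts to a conjunction of three checks, as in this illustrative sketch; the function and parameter names are assumptions, not an actual Facebook procedure.

```python
def evaluate_request(legally_sufficient: bool,
                     violates_community_standards: bool,
                     consistent_with_human_rights: bool) -> bool:
    """Action a government takedown request only when all three
    prongs hold: the request is legally sufficient, the content
    violates community standards, and acting on it is consistent
    with international human-rights standards."""
    return (legally_sufficient
            and violates_community_standards
            and consistent_with_human_rights)

# Usage: a valid request about genuinely violating content passes;
# a legally deficient request does not.
print(evaluate_request(True, True, True))    # True  -> actioned
print(evaluate_request(False, True, True))   # False -> rejected
```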

Q: So, you do accept that around 10% of these inflammatory, harmful posts are generally not taken down even after they are reported?

A: Sometimes mistakes do happen, and when that is brought to our notice, we try to respond rapidly. We also apologised for what happened. After the service was shut down, we came back and spoke to the Government. There are two or three things which came up as part of that, which we are addressing. One piece of feedback we got from our community, civil society and the Government is that we need to beef up our Sinhala language expertise, because in a lot of these situations two things are especially important: context and linguistic nuance.

Even if I am able to learn the language, I may not speak Sinhala the way you do, because I will not pick up the colloquialisms. As language nuance and context are very important, we are enhancing the capacity of our Sinhala reviewers in the content review teams at our 24/7 global operation centers. We will also be hiring a full-time Public Policy Manager for Sri Lanka soon.

Q: Facebook had earlier said the company has 14,000 censors who review reports in more than 40 languages, and that there are plans to increase the global team to 20,000 people this year. How many reviewers do you have for the Sinhalese language at the moment?

A: We can't disclose those numbers at this stage. And it is not just Sinhala; we don't disclose the number of reviewers we have in any language. There are lots of reasons for that, privacy and security among them, and we take all of these into consideration. But we can affirm to you that we have Sinhala language reviewers who are looking into this.

Q: The lack of unbiased Sinhala-language moderation is regularly cited as one of the root causes of pages regularly posting abusive content. Such content is allowed to thrive online despite sustained reporting from concerned users. Could you elaborate on the possible bias of reviewers looking at user-generated reports in Sinhala, given that Facebook, again, offers no transparency around this?

A: The way we have written our Community Standards and our implementation standards leaves very little room for discretion. They are very prescriptive: if, for instance, there is a hate speech standard with credible-threat metrics in it, reviewers have to apply those criteria when they are reviewing a post. In addition to that, Facebook periodically does quality audits of the performance of each of these reviewers. So, once you eliminate discretion in decision making and reviewers simply apply the rule, you try to eliminate the selection bias you just talked about.
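As a purely illustrative sketch, a quality audit of the kind Das mentions could compare each reviewer's verdicts against an auditor's decisions on re-reviewed posts; everything here, from the data layout to the agreement metric, is an assumption.

```python
from collections import defaultdict

def audit_reviewers(decisions, gold_labels):
    """Hypothetical quality audit: each reviewer's agreement rate
    with an auditor's 'gold' verdicts on re-reviewed posts.

    decisions:   list of (reviewer_id, post_id, verdict) tuples
    gold_labels: dict of post_id -> auditor verdict
    """
    hits, total = defaultdict(int), defaultdict(int)
    for reviewer, post, verdict in decisions:
        if post in gold_labels:
            total[reviewer] += 1
            hits[reviewer] += int(verdict == gold_labels[post])
    return {r: hits[r] / total[r] for r in total}

# Usage: reviewer r1 disagrees with the audit on post "p2".
decisions = [("r1", "p1", "remove"), ("r1", "p2", "keep"),
             ("r2", "p1", "remove"), ("r2", "p2", "remove")]
print(audit_reviewers(decisions, {"p1": "remove", "p2": "remove"}))
# {'r1': 0.5, 'r2': 1.0}
```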

Q: Going back to the March incident, could you explain what went on behind the scenes between the Sri Lankan Government and Facebook that ultimately led to the Government lifting the ban about a week later?

A: Well, we came here and had detailed conversations with the Facebook community, civil society and the Government. We have actually talked more to civil society than to the Government. We also issued a statement back then on what we would do going forward.

Again, during our visit this time, we have had pretty intense conversations with civil society, and we are using this opportunity to take feedback. Based on those conversations and the active listening we did, we have picked up three things. There is a need for additional literacy, safety and training work. So, we did a lot of groundwork to identify the right partner to scale up digital safety and digital literacy programs across the country, targeted at the youth, with a curriculum that is locally relevant. We have partnered with Sarvodaya, which will now create further networks with local organisations and non-profits to reach the communities.

Q: With regard to your initiatives with Sarvodaya, were these planned ahead of the March incident, or were they something Facebook decided only after meeting Government officials?

A: This had been a work in progress; we were planning to do it in any case. But you could say we took a lot of learning from the March event. There was always an urgency, but this is an acceleration of efforts on our part.

Q: There is a common perception that Facebook's content filtering system is reactive rather than proactive. How are you addressing this concern?

A: Like all online platforms, we are an intermediary. Our communities are our best watchdog and that is why we have reporting buttons on every piece of content.

Q: Given the large scale of data involved, do you think a lack of resources on your part has weakened the effectiveness of your content-moderation systems?

A: I think there is a short term and a long term to this. In the short term, we rely on a combination of algorithmic tools and human reviewers. But if you look at the mass of content that gets posted and shared on Facebook, it is massive. So, the eventual resolution will come from the development of Artificial Intelligence tools. Our next phase of work in Sri Lanka will be to work with your University system to understand what kind of language corpora they already have. For instance, our community standards say we do not allow content that slurs, and local language competency, particularly colloquialism, is a very important aspect of defining that data bank, the slur list. As we build that slur list, we will determine the type of slurs we will disallow on our platform. We will be working actively with your civil society networks as well as tapping into the expertise represented by the University system. These efforts will help us refine our Artificial Intelligence tools. We are fairly confident this will lead to a substantial reduction of the type of content everybody is concerned about in Sri Lanka. It's a big commitment we have.
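A minimal sketch of what a colloquialism-aware slur list might look like appears below. The placeholder terms, data structure and matching logic are assumptions for illustration only, since the actual data bank Das describes would be built with Sri Lankan universities and civil society.

```python
import re

# Hypothetical slur bank: each canonical term maps to colloquial
# variants that a literal match would miss. Placeholder strings
# stand in for actual Sinhala terms.
SLUR_BANK = {
    "slur_a": {"slur_a", "slur-a", "slura"},
    "slur_b": {"slur_b", "slurb"},
}

def tokens(text: str) -> set:
    """Lowercase and tokenise; a real pipeline would also handle
    Sinhala script normalisation and transliteration."""
    return set(re.findall(r"[\w-]+", text.lower()))

def contains_slur(text: str) -> bool:
    """Flag a post if any token matches any known variant."""
    words = tokens(text)
    return any(words & variants for variants in SLUR_BANK.values())

print(contains_slur("a post containing slura"))   # True
print(contains_slur("an innocuous post"))         # False
```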

Q: Is there a time-frame you can provide for this substantial reduction of harmful content?

A: Currently, we are working on an accelerated basis. We have started building these networks, and people have escalation channels. You have your regular queues, and there are escalation channels provided for both the non-government sector and the Government sector. Even as of today, there has been a far-improved turnaround and response time. Building Artificial Intelligence tools into tech products has its own life cycle and takes its own time. I am not able to confirm a timeline for you today, but in terms of hate speech, there has already been a substantial reduction in the past few months or so.

Q: The recent Cambridge Analytica data scandal exposed the vulnerability of the personal information of millions of Facebook users. Since the scandal broke, what has Facebook done to address the problem of data breaches?

A: Subsequent to the Cambridge Analytica incident, we have announced a lot of programs and changes. We have changed many of our policies on third-party applications and the permissions they have to access Facebook user data. Remember that this is users' public profile information. These permissions are going to be further restricted now. We are also auditing all the third-party apps which had access to personal data for any potential misuse. Whenever we become aware that such violations have happened, we will kick those apps off the platform after making a public disclosure.

Thirdly, we are putting control in users' hands. If you as a user have been using a lot of third-party applications and have given them access to your personal data, and you have not used one of those apps for more than 90 days, we are going to disable that app's access to your data.

Sometimes, people give permissions to a lot of third-party apps and forget to keep track. So, we will show these apps at the top of the News Feed and let you revoke their permissions. We will also send periodic in-product notifications requesting users to do a privacy health check, and there is a lot of in-product privacy education which we are doing.
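The 90-day rule Das describes is straightforward to express in code; this sketch assumes hypothetical app names and a simple last-used timestamp per app.

```python
from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=90)   # the threshold cited above

def stale_apps(last_used: dict, now: datetime) -> set:
    """Return the hypothetical set of third-party apps whose access
    to a user's data should be disabled after 90+ days without use."""
    return {app for app, ts in last_used.items()
            if now - ts > INACTIVITY_LIMIT}

# Usage: an app idle for over a year loses access; a recently used
# one keeps it.
now = datetime(2018, 6, 3)
apps = {"quiz_app": datetime(2017, 5, 1),
        "photo_app": datetime(2018, 6, 1)}
print(stale_apps(apps, now))   # {'quiz_app'}
```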

(Pic: Samantha Weerasiri)
