Not moderating content based on 'inaccurate' information: Facebook

San Francisco: Facebook has dismissed media reports claiming that thousands of its content moderators rely on inaccurate and disorganised information to determine what content to allow or remove from its platform.

Reacting to a report in The New York Times that accused Facebook of being "ad hoc", "disorganized", "secretive", and doing things "on the cheap", the social media network said on Saturday that the debate on content moderation should be based on facts, not mischaracterisations.

The documents used to guide Facebook's moderators span more than 1,400 pages and often contain inaccuracies and outdated information, the Times report said on Thursday.

"The Times is right that we regularly update our policies to account for ever-changing cultural and linguistic norms around the world. But the process is far from 'ad hoc'," said Facebook in a response.

The company said it makes changes to its policies based on new trends that its reviewers see, feedback from inside and outside the company, and unexpected changes on the ground.

"What the Times refers to as a gathering 'over breakfast' among 'young engineers and lawyers' is, in fact, a global forum held every two weeks where we discuss potential changes to our policies," said Facebook.

The team responsible for safety on Facebook is made up of around 30,000 people, about 15,000 of whom are content reviewers around the world.

"When discussing our efforts to curb hate speech in Myanmar, the Times incorrectly claims that a paperwork error allowed an extremist group to remain on Facebook.

"In fact, we had designated the group - Ma Ba Tha - as a hate organisation in April 2018, six months before The Times first contacted us for this story.

"While there was one outdated training deck in circulation, we immediately began removing content that represents, praises or supports the organization in April - both through proactive sweeps for this content and upon receiving user reports," explained Facebook.

The report also claimed that the content moderators rely on material based on incorrect interpretations of certain Indian laws.

One of these documents tells moderators that any post degrading an entire religion violates Indian law and should be flagged for removal.

Another document instructs moderators to "look out for" the phrase "Free Kashmir", even though the slogan, common among activists, is completely legal, the report said.

The moderators are even warned that ignoring posts that use the phrase could get Facebook blocked in India.

Earlier this month, Facebook rejected another New York Times report that claimed it allowed large technology companies and popular apps like Netflix and Spotify access to its users' personal information.

Facebook said it did not give large tech companies access to people's data without their permission, as its integration partners "had to get authorisation from people".
