
Hate, fake content from India escapes Facebook algorithm: Report


New Delhi: A test account set up by Facebook in India showed that hate content, fake news and incendiary images on a Himalayan scale escape Facebook's content moderation controls and screening algorithms in the country, Bloomberg reported.

Within three weeks of signing up, the test account user's feed was filled with graphic photos of beheadings, manipulated images of India's airstrikes against Pakistan and jingoistic scenes of violence.

A 46-page research note titled "An Indian test user's descent into a sea of polarising, nationalist messages", one of the documents released by whistleblower Frances Haugen, revealed the results of the test. The author of the note called the experience an "integrity nightmare" and quoted a staffer: "I've seen more images of dead people in the past three weeks than I've seen in my entire life total."

Facebook started the test account on February 4, 2019, to determine how its algorithms affect what people see in India, one of Facebook's fast-growing markets. The test was designed to focus exclusively on Facebook's role in recommending content. The account used the profile of a 21-year-old woman from Hyderabad living in Jaipur, who followed only pages recommended by Facebook, starting with the official page of India's ruling Bharatiya Janata Party and BBC India.

When a terror attack in Pulwama, Kashmir, killed 40 Indian security personnel and injured dozens more, the Indian government attributed it to a Pakistani terrorist group. Soon after, the test user's feed was filled with anti-Pakistan hate speech, images of beheadings and a graphic showing preparations to incinerate a group of Pakistanis, according to the research report.

The feed also filled with nationalist messages, exaggerated claims about India's airstrikes in Pakistan, fake photos of bomb explosions and a manipulated photo purporting to show a newly married soldier, killed in the attack, who had been preparing to return to his family.

One post claimed that 12 days after the Pulwama incident, 12 planes attacked Pakistan. A Facebook group named "Laughing and things that make you laugh" shared a "Hot News" post claiming that 300 terrorists had died in a bomb explosion in Pakistan. It said, "300 dogs died. Now say long live India, death to Pakistan."

The research report makes clear how little control Facebook has over screening content in India. In 2017, fake viral messages about child-kidnapping gangs circulated on the Facebook-owned messaging app WhatsApp, leading to widespread lynchings across the country and enraging users, courts and the government alike.

Frances Haugen's disclosures have already revealed Facebook's role in spreading harmful content in the US, but this experiment in India suggests that the platform's influence globally might be even worse. In India's case, Facebook struggles to hire people with the necessary language skills, as the country has 22 official languages.

Most of the hate content was in Hindi. But Indians use a dozen variations of Hindi itself, alongside many regional languages, and many blend English with their own language, making it extremely difficult for the algorithms to screen content. In sum, Facebook finds it challenging to control, moderate and screen content in India.

Facebook, however, said that it has invested in technology to find hate speech in various languages, including Hindi and Bengali, and that it has reduced the amount of such content by half this year. With hate content against marginalised groups, including Muslims, on the rise, the company said it is improving enforcement and updating its policies to check it. Facebook also claimed to have strengthened its hate classifiers to cover four Indian languages.

The research report concludes by acknowledging that Facebook's own recommendations led the test user's account to fill with "polarising and graphic content, hate speech and misinformation."
