Racist Emojis Are the Latest Test for Facebook, Twitter Moderators


During a soccer match at Goodison Park in Liverpool in 1988, the player John Barnes stepped away from his position and used the back of his heel to kick away a banana that had been thrown at him. Captured in an iconic photo, the moment encapsulated the racial abuse Black soccer players then faced in the U.K.

More than 30 years later, the medium has changed, yet the racism persists: After England lost to Italy this July in the final of the UEFA European Championship, Black players on the English side faced an onslaught of bananas. Instead of physical fruit, these were emojis slung at their social media profiles, along with monkeys and other imagery. “The impact was as deep and as meaningful as when it was actual bananas,” says Simone Pound, director of equality, diversity, and inclusion for the U.K.’s Professional Footballers’ Association.


Barnes back-heels a banana in 1988.

Photographer: Bob Thomas/Getty Images

Facebook Inc. and Twitter Inc. faced wide criticism for taking too long to screen out the wave of racist abuse during this summer’s European championship. The moment highlighted a long-standing issue: Despite spending years developing algorithms to analyze harmful language, social media companies often don’t have effective strategies for stopping the spread of hate speech, misinformation, and other problematic content on their platforms.

Emojis have emerged as a stumbling block. When Apple Inc. introduced emojis with different skin tones in 2015, the tech giant came under criticism for enabling racist commentary. A year later Indonesia’s government drew complaints after it demanded social networks remove LGBTQ-related emojis. Some emojis, including the one depicting a bag of money, have been linked to anti-Semitism. Black soccer players have been frequently targeted: The Professional Footballers’ Association and data science company Signify conducted a study last year of racially abusive tweets directed at players and found that 29% included some form of emoji.


Over the past decade, the roughly 3,000 pictographs that constitute emoji language have become a vital part of online communication. Today it’s hard to imagine a text message conversation without them. But the ambiguity that is part of their charm also creates problems. A winking face can indicate a joke or a flirtation. Courts end up debating issues such as whether it counts as a threat to send someone an emoji of a pistol.

This matter is confusing to human lawyers, but it’s even more confounding for computer-based language models. Some of these algorithms are trained on databases that contain few emojis, says Hannah Rose Kirk, a doctoral researcher at the Oxford Internet Institute. Such models treat emojis as new, unfamiliar characters, meaning they must learn from scratch what each one means based on the context in which it appears.
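The vocabulary gap is easy to see firsthand. The sketch below, which assumes the Hugging Face transformers library is installed and is purely an illustration rather than the tooling Kirk or any platform actually uses, compares how two widely used tokenizers handle a banana emoji.

```python
# Minimal sketch: how standard subword tokenizers treat an emoji.
# Assumes the Hugging Face `transformers` package is installed; model
# files are downloaded on first use.
from transformers import AutoTokenizer

bert = AutoTokenizer.from_pretrained("bert-base-uncased")
gpt2 = AutoTokenizer.from_pretrained("gpt2")

for text in ["you played terribly", "you played terribly \U0001F34C\U0001F34C"]:
    # BERT's WordPiece vocabulary has no emoji entries, so the banana
    # emoji typically collapses to the generic [UNK] token -- the
    # abusive signal is simply erased before the model ever sees it.
    print("BERT :", bert.tokenize(text))
    # GPT-2's byte-level BPE keeps the emoji, but only as raw byte
    # fragments whose meaning the model must infer from context.
    print("GPT-2:", gpt2.tokenize(text))
```

If emojis rarely appear in a classifier’s training data, those byte fragments or unknown tokens carry almost no learned signal, which is why a message that any human moderator would flag instantly can sail past an automated filter.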

“It’s a new emerging trend, so people are not aware of it as much, and the models lag behind humans,” says Lucy Vasserman, engineering manager for a team at Google’s Jigsaw that develops algorithms to flag abusive speech online. What matters is “how frequently they appear in your test and training data.” Her team is working on two new projects that could improve analysis of emojis: one that involves mining vast amounts of data to understand trends in language, and another that factors in uncertainty.
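Jigsaw’s comment-scoring classifiers are publicly exposed through its Perspective API, which makes the gap measurable. The sketch below is an assumption-laden illustration, not something described in the article: it presumes a valid key in the PERSPECTIVE_API_KEY environment variable and the requests library, and it compares an explicit insult against an emoji-only version of the same sentiment.

```python
# Minimal sketch: scoring text with Jigsaw's public Perspective API.
# Hypothetical usage -- requires your own API key in the environment
# variable PERSPECTIVE_API_KEY and the `requests` package.
import os
import requests

URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def toxicity(text: str) -> float:
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(
        URL, params={"key": os.environ["PERSPECTIVE_API_KEY"]}, json=body
    )
    resp.raise_for_status()
    # Summary score is a probability-like value between 0 and 1.
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity("go back to the jungle"))          # explicit abuse
print(toxicity("\U0001F34C\U0001F412\U0001F412"))  # banana + monkey emojis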

BOTTOM LINE – The fix to a deluge of racist emojis might not be fully automated moderation systems, but more humans making what are often pretty simple judgments.


