The Anti-Social Network: social media companies face reckoning over hate speech

Facebook, Twitter and other platforms are drawing criticism for their failure to tackle hate content. But will the hit to their reputation do any lasting commercial damage?

In April 2021, English football announced a boycott of social media. Players, coaches and pundits from across the sport shunned Twitter, Facebook and Instagram for four days in protest against racism on these platforms. Corporate sponsors including Adidas and Barclays also took part in the boycott.

A report from Kick It Out, English football’s equality and inclusion organisation, illustrated why such action was necessary. It found a significant increase in racist and homophobic abuse of those involved in the sport since the start of the 2019/20 season. Many social media users cited “unsatisfactory responses” from the big platforms after they had made initial complaints about hate speech.1

The boycott was just the latest episode in a wider backlash against social media companies. In July 2020, more than 1,000 prominent advertisers launched a month-long boycott of Facebook as part of the #StopHateForProfit campaign, pressing the firm to do more to stamp out racist content in the wake of George Floyd’s murder and the Black Lives Matter protests.2

Despite these controversies, social media companies continue to enjoy the confidence of the market. Share prices have risen in line with the wider tech sector amid growing demand for online tools, even as the bricks-and-mortar economy suffers under COVID-19 restrictions. But as advertisers pull out, users log off and regulators circle, some investors are warning the persistence of hate speech on social media could yet pose a serious threat to the future of the tech giants.

From dial-a-hate to the Twitter feed
Throughout history, advances in communications technology have enabled new forms of hate speech. In the 1960s, for example, extremist groups in the US set up automated voice messages connected to phone lines, broadcasting their views to a wide audience.3

The so-called “dial-a-hate” phenomenon drew the attention of Congress. Prevented from banning the recordings by First Amendment laws protecting freedom of speech, policymakers put pressure on telecoms company AT&T to tackle the issue. The company argued it was powerless to regulate the activity of private individuals on its phone lines.4

Today, social media giants such as Facebook and Twitter make similar arguments when criticised for the content that appears on their platforms. But hate speech is a far bigger problem in the Internet era, when millions of people around the world can meet and instantaneously exchange information – or intimidate, bully or harass.

As defined by the United Nations, hate speech encompasses “any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are”. This might include their religion, ethnicity, nationality, race, gender, sexuality or any other identity factor.5

In 2015, countries around the world committed to tackling the problem as part of the UN’s 17 Sustainable Development Goals, many of which affirm the right to freedom of expression and protection from harassment. For example, SDG 16 aims to “promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels”.6 But moving from commitment to practice has proved more difficult.

“It’s clear that although countries have committed to protect people from harassment, the online reality is unfortunately quite different. Neither companies nor governments have found a way to tackle online hate speech, but we expect to see both sides take more action as pressure increases from customers and voters,” says Marte Borhaug, global head of sustainable outcomes at Aviva Investors.

Hate speech has real-world effects beyond the Internet, making it a fundamental human rights issue. In Germany, a correlation was found between anti-refugee posts on Facebook by the far-right Alternative für Deutschland party and physical attacks on refugees.7

“The 24/7 nature of social media, the amplification of content through sharing, clearly exacerbates the impact of these kinds of messages on wider society,” says Louise Piffaut, ESG analyst at Aviva Investors. “From hate speech to bullying, extremism to misinformation, there is a lot of content here that damages communities.”

The perpetrators of racist mass shootings in the US and elsewhere have publicised their acts to supporters on the major social media sites and even used the platforms to broadcast videos of their crimes. The shooter who murdered 51 people at two mosques in Christchurch, New Zealand in March 2019 streamed a video of the attacks using Facebook Live, and clips of the footage spread quickly across Facebook and YouTube.8

While this sort of footage tends to be taken down relatively swiftly, Facebook only introduced a policy banning white nationalist content in the immediate aftermath of the Christchurch attacks.9 YouTube and Twitter allowed Ku Klux Klan leader David Duke to post on their networks for years before finally banning him in 2020.10

Social media firms have global reach and hate speech is a global problem. In Myanmar, military personnel used Facebook to spread propaganda demonising Rohingya Muslims ahead of a campaign of ethnic cleansing, according to a UN investigation. In India, lynch mobs have used Facebook-owned messaging service WhatsApp to coordinate attacks.11


References:

  1. ‘Discrimination in football on the rise’, Kick It Out, September 3, 2020
  2. Tiffany Hsu and Eleanor Lutz, ‘More than 1,000 companies boycotted Facebook. Did it work?’, The New York Times, August 1, 2020
  3. Steven Melendez, ‘Before social media, hate speech spread by phone’, Fast Company, February 4, 2018
  4. Steven Melendez, ‘Before social media, hate speech spread by phone’, Fast Company, February 4, 2018
  5. ‘United Nations strategy and plan of action on hate speech’, United Nations Office on Genocide Prevention and the Responsibility to Protect, May 2019
  6. ‘Goal 16: Promote peaceful and inclusive societies for sustainable development, provide access to justice for all and build effective, accountable and inclusive institutions at all levels’, United Nations Department of Economic and Social Affairs Sustainable Development, 2020
  7. Zachary Laub, ‘Hate speech on social media: Global comparisons’, Council on Foreign Relations, June 7, 2019
  8. Billy Perrigo, ‘“A game of Whack-a-Mole”: Why Facebook and others are struggling to delete footage of the New Zealand shooting’, Time, March 16, 2019
  9. Liam Stack, ‘Facebook announces new policy to ban white nationalist content’, The New York Times, March 27, 2019
  10. Lois Beckett, ‘Twitter bans white supremacist David Duke after 11 years’, The Guardian, July 31, 2020
  11. Zachary Laub, ‘Hate speech on social media: Global comparisons’, Council on Foreign Relations, June 7, 2019

© 2021 funds europe
