
Deep Dive: 5 years after George Floyd’s death, did social media companies keep their promises?

Washington, D.C., USA
May 26, 2025 · Social Issues & Justice

Introduction & Context

In the weeks after George Floyd's murder in May 2020, the world witnessed unprecedented demonstrations, with millions rallying for systemic change. Tech giants, under pressure, announced new policies: banning hateful terms, elevating Black voices, and pledging significant sums to diversity initiatives. Community activists, however, suspect many of those pledges were surface-level, and corporate priorities shifted as the immediate public scrutiny waned. Now, five years on, an investigation reveals that while some platforms introduced policy tweaks or launched outreach programs, structural changes to content moderation, algorithmic accountability, and workforce diversity appear modest or have been reversed.

Background & History

Before 2020, social media companies faced criticism over extremist content, racial harassment, and echo-chamber effects. The #BlackLivesMatter movement, which gained traction in 2014–2015, had already pushed platforms to address hateful content. Floyd's death magnified those calls, prompting big-brand advertisers to threaten boycotts if platforms didn't act. For a while, public pronouncements soared: Facebook created a civil rights advisory board, Twitter labeled violent or racist tweets by high-profile figures, and TikTok apologized for "shadow banning" Black creators. By 2023–2024, however, economic pressures led to layoffs that scaled back many of the specialized teams, and Musk's 2022 Twitter takeover further complicated moderation as staff cuts hit many trust and safety roles.

Key Stakeholders & Perspectives

  • Black Creators & Communities: Continually report content suppression and racial harassment going unchecked. They want robust, transparent moderation plus genuine amplification of diverse content.
  • Social Media Executives: Argue that AI-based moderation has advanced and claim to remove millions of hateful posts, yet they must balance free-speech concerns against business imperatives, especially after workforce reductions.
  • Civil Rights & Anti-Hate Groups: Demand consistent enforcement, citing research that hate speech soared on certain platforms. They see unfulfilled pledges as a betrayal of 2020 commitments.
  • Regulators & Policymakers: Some have passed or are contemplating laws mandating stricter content policing; others worry about stifling free expression. Tech lobbying remains strong, slowing reform.
  • Users & Advertisers: Some are disillusioned; others accept partial measures. Large brands sometimes pull ads when controversies erupt, but few sustain that pressure.

Analysis & Implications

The rollback of civil rights–focused teams shows how long-term policy changes can be overshadowed by shifting corporate strategies. Some platforms rely heavily on automated moderation, which can misfire on context-dependent hate speech or be skewed by training data that lacks diverse representation. Meanwhile, Black creators who saw initial promotion in 2020–2021 say their reach has plateaued or declined. Critics label this pattern a "performative cycle," in which platforms vow inclusion during crises, then quietly revert. From a business standpoint, ignoring harassment can degrade user experience and prompt advertiser backlash; yet aggressive policing of speech remains a political minefield. The inertia suggests that systemic change requires continuous external pressure, not just ephemeral pledges.

Looking Ahead

Activists and watchdogs continue calling for detailed accountability: regular transparency reports, dedicated resources for hate-speech enforcement, unbiased algorithms, and workforce diversity data. Over the next year, pressure might mount if racially charged incidents spark renewed scrutiny. Alternatively, with the pivot to new frontiers (like the metaverse or AI chatbots), the challenge might worsen if biases embed in next-gen tech. Policymakers, especially in the EU, may enforce stricter oversight, forcing global platforms to align or face fines. A best-case scenario sees consistent, well-funded trust and safety teams guided by civil rights experts. But absent sustained activism or regulatory impetus, the 2020 moment might remain a fleeting pivot undone by business-as-usual.

Our Experts' Perspectives

  • Digital rights researchers say Meta's layoffs of civil rights staff drastically reduced oversight, hampering efforts to identify subtle racism or repeat offenders.
  • Social media metrics show an uptick in hate speech after major staff cuts—like Twitter’s 2023 wave—contradicting company claims that AI alone can handle moderation effectively.
  • Sociologists note that robust community-led moderation (like volunteer or local groups) can help if platforms provide the tools, but that approach remains patchy.
  • Marketing strategists advise brand clients to keep monitoring platform safety, as corporate values become increasingly important to consumers.
  • Tech ethicists argue platforms must invest in inclusive design from the ground up, ensuring no single “emergency fix” in times of crisis can replace structural accountability.

