
Meta’s moderation changes

by Phil Aiston

A hand holds a phone with a large white @ symbol on a black square; behind it is Meta’s logo. Image by Julio Lopez on Pexels.

What they mean for charities and how to respond...

Meta’s announcement that it will overhaul its moderation policies, including replacing fact-checkers with community notes, signals a significant shift that charities must prepare to navigate. Let’s call it what it is: a political cop-out to avoid taking responsibility for the disinformation their platforms facilitate. The implications for charities could be substantial, particularly around community management, trust, and the additional resource burden this creates.

Meta’s approach: political strategy or public abdication?

Let’s be clear: Meta’s decision to replace third-party fact-checkers with community notes is political, designed to deflect accusations of “censorship” while sidestepping their responsibility (and cost) for combating hate speech and misinformation. As Daniel Patrick Moynihan famously noted, “Everyone is entitled to his own opinion, but not his own facts.” Meta’s changes risk undermining this principle. Community notes place the onus on users to moderate content—that’s right—you, me, and everyone else who has better things to do than police the internet for free. Remember that viral video from an osteopath claiming Covid vaccines would alter your DNA? Facebook moderators removed 12 million posts of misinformation like that between March and October 2020 alone. Now? Well, that’s Joe Public’s job, raising questions about safety and accountability.

Compounding this are Meta’s immediate policy updates that allow for derogatory language targeting women and LGBTQ+ individuals. These changes risk exacerbating hate speech and fostering an unsafe environment on platforms where many charities engage with vulnerable audiences.

Meta will also “recommend more political content based on ... personalised signals” – which sounds a lot like shoring up echo chambers to me. And who fact-checks whom in an echo chamber? Who fact-checks the AI nonsense content in a closed group? (That’s a whole blog for another day…)

In his announcement video, Zuckerberg does appear to acknowledge there is some risk involved: the changes would mean "a trade off" – "It means we're going to catch less bad stuff", he said. Given the massive global scale of Meta's digital ecosystem (3.3bn active users, according to Meta), that is not an insignificant statement when you consider the potential volume of ‘bad stuff’ they’ll miss!

Global vs. local context

Notably, Meta’s US-only policy changes are at odds with regulatory trends in the UK and EU, where big tech firms face increasing pressure to take responsibility for content or face steep penalties. However, this regulatory divergence does little to mitigate the broader challenges charities will face as moderation standards weaken globally.

Challenges for charities

  • Community management: With weaker moderation, we’ll be dealing with more harmful content—content that can alienate our audiences and erode trust by overtly challenging the accurate information charities put out.

  • Increased misinformation: False claims about health, social issues, or a charity’s mission can spread unchecked, damaging reputations and reducing public confidence.

  • Resource strain: Managing content and addressing crises will require more time, expertise, and funding—resources that many charities already lack.

Practical steps to take

Anyway, the reason you're here is that you're thinking "so what do I do about all of this?" How can charities respond to this evolving landscape?

1. Strengthen networks

Build networks, and find close advisors or peers who will be going through something similar. As someone who has to deal with Meta Support frequently, believe me when I say you're not going to get easy help from them when you run into issues.

Don't try to tackle this in a silo: engage all of your business stakeholders, along with similar nonprofits, marketing peers, and digital specialists in the sector who you can bounce thoughts off.

2. Crisis management planning

Consider formal crisis training for moderators to prepare for the new challenges posed by user-led content moderation, and proactively plan for potential crises by establishing clear protocols.

Importantly, you may need to decide how to approach out-of-hours channel management procedures, if that’s relevant to your organisation.

Escalations will happen. To be ready, everyone should know the escalation points and, crucially, the sign-off process (or the level of empowerment the social team has) when responding to misinformation.

3. Diversify platforms

Consider investing in alternatives like TikTok, which is increasingly popular among key donor demographics. In my 2025 trends post, I mentioned that TikTok diversification is something nonprofits could already be thinking about: more of the typical charity donor profiles are active there than we're biased into believing. Bluesky saw a massive surge in users in the wake of an X exodus (an Xodus?). It's still early days on the platform, but it's definitely one to watch.

4. Enhance internal policies

Prepare for increased scrutiny and misinformation by establishing robust internal policies. Monitor platforms more closely (and perhaps explore whether a social listening solution is right for your organisation) to identify and address harmful content early. In tandem, educate teams about spotting misinformation and managing reputational risks.
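If a dedicated social listening tool isn’t in budget, even a rough, home-grown keyword check over exported comments can surface problems early. Below is a minimal, illustrative sketch in Python; the watchlist terms, and the assumption that you can export recent comments or mentions as plain text, are placeholders rather than recommendations.

```python
# Minimal, illustrative keyword watch over exported comments/mentions.
# Assumes recent comments have been pulled into a list of strings (e.g. from
# a CSV export); the watchlist terms below are placeholders, not advice.
WATCHLIST = ["scam", "fraud", "cover-up", "fake", "misleading"]

def flag_mentions(posts, watchlist=WATCHLIST):
    """Return posts containing any watchlist term, for early human review."""
    hits = []
    for post in posts:
        matched = [term for term in watchlist if term in post.lower()]
        if matched:
            hits.append({"post": post, "terms": matched})
    return hits

if __name__ == "__main__":
    recent_posts = [
        "Loved the fundraiser last week, thank you!",
        "Heard this charity is a scam spreading fake health advice",
    ]
    for hit in flag_mentions(recent_posts):
        print(hit["terms"], "->", hit["post"])
```

Anything flagged still needs a human decision; the point is simply to spot emerging narratives before they snowball.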

5. Investigate technology solutions

In the immediate aftermath of this news, I’ve seen the corporate sector looking at ways of lightening the moderation burden with AI – trollwall.ai is one I’ve heard come up a few times (“TrollWall simplifies community moderation while effectively guarding against toxic comments across social media platforms.” – no affiliation). When considering AI solutions you may also want to think about environmental impact and whether it's acceptable for your organisation. Going back to networking, reach out to anyone you see who might be using solutions like these and ask directly how they’re finding them.
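To make the idea concrete, here’s a rough sketch of what AI-assisted comment triage can look like under the hood. It has no connection to TrollWall or any other vendor; it simply uses the openly available Detoxify library (an open-source toxicity classifier) as an illustrative stand-in, and the 0.8 threshold and sample comments are arbitrary placeholders.

```python
# Rough sketch of AI-assisted comment triage: score each comment with an
# open-source toxicity classifier and queue the worst for human review.
# The model choice (Detoxify) and the threshold are illustrative assumptions.
from detoxify import Detoxify

model = Detoxify("original")  # downloads model weights on first use

def triage_comments(comments, threshold=0.8):
    """Split comments into (needs human review, probably fine)."""
    flagged, probably_fine = [], []
    for text in comments:
        score = model.predict(text)["toxicity"]  # 0.0 (benign) to 1.0 (toxic)
        if score >= threshold:
            flagged.append((text, float(score)))
        else:
            probably_fine.append(text)
    return flagged, probably_fine

if __name__ == "__main__":
    sample = [
        "Thank you so much for the work you do!",
        "Everyone who works at this charity is a liar and a fraud.",
    ]
    flagged, _ = triage_comments(sample)
    print(f"{len(flagged)} comment(s) queued for human review")
```

Even with a tool like this, a human still makes the final call; the AI only changes where your moderators spend their time (and, as above, it brings its own compute and environmental costs).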

6. Advocate for change

Charities can play a role in pushing for stronger regulatory frameworks and holding platforms accountable. Highlighting the real-world consequences of weakened moderation can amplify the sector’s voice.

Meta’s underlying motives

Given that the discourse here is largely about facts, let’s observe two absolute facts: 1) “community notes” produced by users are cheaper than paying moderation divisions; 2) the incoming US political leadership dislikes fact-checking. The announced change, therefore, is simply less bother for Meta all round, and everything else is framing and optics. Meta’s no.1 priority has been, and always will be, doing what’s best for Meta.

If you want to join me under my tinfoil hat and consider why Meta may be cosying up to the new US Administration, let's just keep an eye on what happens with the pending FTC antitrust lawsuit and the effort to unwind Meta’s acquisition of Instagram, and whether or not that effort still exists in a few months’ time…

Looking ahead

Meta’s changes are yet another headache we didn’t ask for, but there are ways to adapt. Get your crisis plans in order, lean on your networks, and don’t hesitate to test diversifying your platforms.

And remember, the charity sector’s strength has always been in its resilience and collaboration.