MENLO PARK, CA — For a company that began 2025 promising “more speech and fewer mistakes,” Meta’s year is ending on a note of confusion, accusation, and silence.
In recent weeks, a growing chorus of users, from international health organizations to small business owners, has alleged that Facebook and Instagram are selectively targeting specific communities for restriction while leaving others untouched. What was sold as a new era of “Community Notes” and automated efficiency has curdled into what critics call a “black box” of enforcement, where the difference between a thriving page and a disabled account often comes down to an opaque, unappealable decision.
The December Purge
The most explosive allegation comes from a coalition of over 50 reproductive health and LGBTQ+ organizations, which reported a coordinated wave of account restrictions in early December.
According to reports from The Guardian and the Business & Human Rights Resource Centre, pages providing information on abortion access and sexual health—even in countries where these services are fully legal—were suddenly scrubbed from the platform or shadow-banned.
- The Targets: The sweep included abortion hotlines in Europe, sex education platforms in the Middle East, and queer advocacy groups in Latin America.
- The Explanation: When reasons were given, they were often vague citations of “human exploitation” or “prescription drug” policies. In many cases, no specific post was cited.
- The Contradiction: This crackdown stands in stark contrast to Meta’s January 2025 announcement, in which CEO Mark Zuckerberg promised to relax restrictions on controversial topics like gender identity and immigration to foster “robust political debate.” Critics argue this amounts to a “bait-and-switch”: policy was loosened for political rhetoric but tightened on vulnerable communities providing essential services.
“It feels less like moderation and more like erasure,” said a representative from a suspended European health non-profit. “We aren’t selling drugs; we are providing legal medical information. Yet we are treated like cartels, while actual disinformation spreads unchecked.”
The AI “Black Box” Crisis
While the targeting of health groups suggests a specific policy choice, a broader, messier problem has plagued the platform throughout 2025: the “AI Moderation Crisis.”
As Meta moved away from human moderators to cut costs, it leaned heavily on AI “classifiers” to police content. The result has been a flood of false positives that, to the average user, feels indistinguishable from picking and choosing. (A minimal sketch of that dynamic follows the list below.)
- Small Businesses Decimated: Throughout the fall, thousands of small business owners reported losing their ad accounts and pages overnight due to “suspicious activity” flags that could not be appealed.
- The “Child Safety” Dragnet: In an attempt to tighten safety for minors, Meta’s AI began aggressively flagging harmless content—such as parents posting photos of their kids or businesses selling children’s clothing—as “potential exploitation.”
- The Appeal Loop: Perhaps the most frustrating aspect is the disappearance of human support. Users describe a “doom loop”: they are asked to submit identification, only to be rejected by an automated email seconds later.
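To make the dynamic critics describe concrete, here is a minimal, entirely hypothetical sketch of a threshold-based content classifier. The keyword heuristic, scores, and thresholds are invented for illustration and do not reflect Meta’s actual models or policies; the point is only to show why lowering a decision threshold to catch more violations mechanically sweeps in more innocent accounts.

```python
# Hypothetical illustration only: a toy threshold-based moderation classifier.
# Nothing here reflects Meta's real systems; it shows why tightening
# enforcement (lowering the threshold) inflates false positives.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    is_violation: bool  # ground truth, known only in this toy example

def classifier_score(post: Post) -> float:
    """Stand-in for an ML model: returns a 'violation likelihood' score.
    A crude keyword heuristic keeps the sketch self-contained and runnable."""
    risky_words = {"pill", "prescription", "child"}
    hits = sum(word in post.text.lower() for word in risky_words)
    return min(1.0, 0.3 * hits)

def moderate(posts: list[Post], threshold: float) -> tuple[int, int]:
    """Flag every post scoring at or above the threshold.
    Returns (true_positives, false_positives)."""
    tp = fp = 0
    for post in posts:
        if classifier_score(post) >= threshold:
            if post.is_violation:
                tp += 1
            else:
                fp += 1
    return tp, fp

posts = [
    Post("Buy unapproved pills, no prescription needed", True),
    Post("Pharmacist Q&A: how prescription refills work", False),
    Post("Sale on children's winter clothing", False),  # the 'child safety' dragnet
]

for threshold in (0.6, 0.3):
    tp, fp = moderate(posts, threshold)
    print(f"threshold={threshold}: {tp} true positives, {fp} false positives")
```

At the strict threshold, only the clear violation is flagged; loosen it, and the pharmacist’s explainer and the children’s clothing sale get swept up too, with no human reviewer left in the loop to catch the error.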
The “Cross-Check” Shadow
Looming over these restriction stories is the ghost of the “Cross-Check” program, a secret internal system revealed years ago that exempted high-profile users from standard moderation rules.
While the Oversight Board has pushed for transparency, the events of late 2025 suggest a new, inverted version of this inequality. Rather than merely shielding the powerful, the system now appears to sort users into tiers of enforcement:
- Political Rhetoric: Protected under new “free expression” guidelines.
- Health & Advocacy: Heavily policed under “safety” guidelines.
- Average Users: Subject to the whims of a buggy AI with no recourse.
The “Community Notes” Gamble
Meta’s defense rests on its pivot to a “Community Notes” model, similar to the one on X (formerly Twitter), which fully launched in the U.S. this year. The argument was that the community should police itself rather than have Meta act as the arbiter of truth.
However, the December restrictions were not driven by community notes; they were top-down removals. This has led to the accusation that Meta is “picking and choosing” when to let the community decide (political arguments) and when to intervene with a heavy hand (social/health issues).
As 2026 approaches, the question for the 3 billion users on Meta’s platforms is simple but terrifying: Are you next? And if you are, will there be a human being left to listen to your appeal?