The Moment I Realised Moderation Doesn't Scale

I came across a detail in a recent lawsuit that stopped me cold.

Researchers tried to create a test account on a major open-world platform using the name "Jeffrey Epstein." They couldn't. The name was taken. So were over 900 variations of it. Accounts like "JeffEpsteinSupporter" had earned badges for spending time in children's games. Others had usernames that explicitly referenced grooming and abuse.

This wasn't an edge case. Games called "Escape to Epstein Island" were accessible to accounts registered as under 13. Over 600 games featuring references to another convicted criminal were similarly available to children.

That's when it clicked for me. This isn't a moderation problem you can solve with bigger teams or better automated filters. When you have millions of children interacting with millions of strangers, you fundamentally can't protect them all. The architecture creates the vulnerability.

The numbers bear this out. The same platform reported over 13,000 instances of child exploitation in 2023 alone and responded to 1,300 law enforcement requests. Those are the ones they caught. How many did they miss?

A former employee was quoted in legal filings saying the quiet part out loud: "You can keep your players safe, but then there would be less of them on the platform. Or you just let them do what they want to do. And then the numbers all look good and investors will be happy."

That trade-off shouldn't exist!

Ian and I keep coming back to the same question:

What if you didn't put children in rooms with millions of strangers in the first place?

What if the starting point was curated groups where adults actually know the children involved?

That's how oodlü works. Parents, guardians, or teachers create and oversee groups of children. Communication happens within those groups, not with the broader world. If there's a disagreement or issue, a child can report it to their actual adult, someone who knows them, knows the context, and can step in personally to help resolve it in the real world.

It's not automated moderation trying to catch problems after they happen. It's not remote teams reviewing flagged content with no context about the children involved. It's adults who are already responsible for those children being given the tools to stay involved.

Does this limit the size of the network each child can access? Yes. Absolutely. But that's the point. The trade-off between safety and scale is real, and we're choosing differently. We're building around groups where trust already exists, rather than trying to retrofit trust onto a network of strangers.

We're still learning. We'll get things wrong. But we're asking different questions from the start, and that changes what becomes possible.

We'd love to hear your thoughts on this. Find us on the social channels linked at the top of the page.
