Sex for money
Twitter wanted to become the new Onlyfans – and failed because of its child pornography problem

The market for paid sexual content online is large

© PA / Picture Alliance

With a subscription model for sex content, Twitter was finally going to make real money. But the plan fell through, because the company couldn’t get its problem with illegal content under control.

In recent months, Twitter has made headlines for one thing above all: its dispute with Elon Musk over a takeover that had already been agreed. Had the company implemented the plan it was pursuing in the spring, things might look completely different today. After long tolerating sex content on its platform, Twitter apparently wanted to go all in and establish itself as a competitor to Onlyfans.

That is according to a report by “The Verge”, citing insiders and internal company documents. According to the report, some members of the leadership had pushed for Twitter to monetize the sex and porn content that was already widespread on the platform. The logic: Twitter is already one of the most important advertising platforms for the numerous Onlyfans models, but the money is made by the erotic service, not by Twitter. If models could sell paid subscriptions to sex content directly on Twitter, the company could earn that money itself, the feature’s advocates argued.

Onlyfans’ meteoric rise

The appeal is easy to understand. Although Onlyfans is only a few years old, the company is on track to reach $2.5 billion in sales this year – almost half of what Twitter took in last year. And unlike its competitor, Onlyfans is profitable.

The newcomer is also catching up with Twitter in user numbers: after 16 years, the short message service counts almost 450 million monthly active users; Onlyfans has reached 280 million after only six years, and the figure is growing rapidly. And while Twitter depends primarily on advertising revenue, more than seven million Onlyfans users pay for subscriptions to one or more of the models.

A detailed review

No wonder, then, that Twitter took the idea seriously. A so-called “Red Team” of 84 employees was set up to implement the “ACM” (Adult Content Monetization) project. Its task was to work out what a Twitter geared toward paid sex content could look like while remaining safe and responsible to use. “The Verge” says it learned this from the documents and from conversations with employees. But the outcome of the feasibility study can hardly have pleased Twitter: it not only blocked the implementation of the ACM project, it also exposed a fundamental problem for the company.

“Twitter cannot reliably detect child sexual exploitation or non-consensual nudity,” read the Red Team’s scathing verdict in April. Detection still works on a small scale, but the methods cannot be scaled to the required number of users. To make matters worse, the company has no reliable way of verifying the age of users and content creators in order to prove that they are of legal age.

The team found that the problem already exists today – and that a program like ACM would likely exacerbate it. Because content behind a paywall is barely visible from the outside, the company could not count on user reports, and according to the internal investigation, its own tools were not up to the task. The project was put on hold until appropriate safeguards are in place.

Old problems

In fact, there have been repeated allegations in the past that Twitter was used to distribute child pornography without the company noticing. The problem was also known internally, “The Verge” reports, citing the documents: as early as February 2021, an internal report complained that the volume of abuse material was rising sharply while the resources to combat it were stagnating. The situation has not improved since then, employees told the magazine.




Twitter confirms the existence of the team. The Red Team was “part of a debate at the end of which work was paused for the right reasons,” a company spokeswoman told “The Verge”. The company has “zero tolerance for the exploitation of children,” she said. “We fight aggressively against child abuse online and have invested significantly in technology to enforce this policy.”

Twitter is not alone, but…

Twitter is not alone in facing this problem. Every major tech company has to fight illegal content, which becomes harder to detect as the number of active users grows. But while Facebook and Co. have increasingly automated the process, Twitter’s system, according to the employees, still relies primarily on manual review. To make matters worse, users cannot explicitly flag content as sexually problematic, only as “sensitive” – a label that also covers other disturbing material, such as graphic footage of accidents or war. Budget is an issue as well: back in 2019, Mark Zuckerberg boasted that Facebook spends more on the safety of its users than Twitter earns in total.

Incidentally, the employees do not expect the pending purchase by Elon Musk to improve matters. On the contrary: after Musk declared fake accounts and so-called bots to be the network’s main problem, the “Health” team responsible for finding sexually problematic content was merged last week into the team that hunts for spam accounts. The team members are devastated. “It’s a punch in the gut,” they say.

Sources: The Verge, Axios, PCMag
