OUR DAILY THREAD: Out Of Sight. Are We Out Of Our Minds?
What you see is what they get
THE SET-UP: There’s a good chance that at this very moment an American man is sexting with an AI sexbot, fully unaware that he’s actually sexting with a Kenyan man pretending to be a female sexbot.
That’s what 404 Media recently found when it went to Kenya:
Every day, Michael Geoffrey Asia spent eight consecutive hours at his laptop in Kenya staring at porn, annotating what was happening in every frame for an AI data labeling company. When he was done with his shift, he started his second job as the human labor behind AI sex bots, sexting with real lonely people he suspected were in the United States. His boss was an algorithm that told him to flit in and out of different personas.
“It required a lot of creativity and fast thinking. Because if I’m talking to a man, I’m supposed to act like a woman. If I’m talking to a woman, I need to act like a man. If I’m talking to a gay person, I need to act like a gay person,” he told me at a coworking space I met him at in Nairobi. After doing this for months, he, like other data labelers, developed insomnia, PTSD, and had trouble having sex.
“It got to a point where my body couldn’t function. Where I saw someone naked, I don’t even feel it. And I have a wife, who expects a lot from you, a young family, she expects a lot from you intimately. But you can’t, like, do it,” Asia said. “It fractured a lot of things for me. My body is like, not functioning at all.”
Asia eventually hit a breaking point and stopped working for AI companies.
There’s also a very good chance that someone in Kenya is currently poring over “intimate” video captured when someone in America took off their snazzy new Ray-Ban AI Glasses before having sex. If the wearer sets the glasses on a nightstand or dresser, the camera keeps capturing everything it sees long after the glasses come off. On the other end is a worker struggling with the responsibility of vetting and annotating everything Meta’s glasses see.
That’s what a joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten also found when they went to Kenya:
Kenyan data annotators, employed as subcontractors in Nairobi, have revealed they are routinely required to manually review extremely intimate, unanonymized footage captured by unsuspecting users of the Meta Ray-Ban AI glasses.
The disturbing testimonies describe a workforce uneasy about peering into the most private moments of users’ lives, including bathroom visits, people undressing, watching pornography, and explicitly filmed sex acts.
The core of the problem, according to the contractors, is a catastrophic misunderstanding of how the product functions. Users often do not realize that the AI assistant and the associated cameras remain active even when the glasses are removed from the face.
“In some videos, you can see someone going to the toilet or getting undressed. I don’t think they know, because if they knew they wouldn’t be recording,” one worker told the Swedish investigators, speaking on condition of anonymity due to strict confidentiality agreements.
Instead, users are feeding a “hidden stream of privacy-sensitive data” constantly being transmitted “from Western homes to an indistinct hotel in Nairobi” where, as Kenyan news blog Tech-ish.com explained, “thousands of data annotators” are employed by “a major subcontractor to Meta and a familiar name in Kenyan tech labour disputes”:
While these workers are tasked with training the AI by labeling everyday objects such as flower pots, traffic signs, and cars, they are also forced to review the human side of the data collection. They described watching private videos where bank cards were visible by mistake and translating texts where users described graphic sexual desires.
“We see everything – from living rooms to naked bodies. Meta has that type of content in its databases,” an employee said.
These workers are on the business end of the “AI Hype” effect, which two South African investigative journalists described in a new piece for Tech Policy Press:
AI hype is the next chapter in the colonial playbook. It reframes the exploitation of African digital workers as “innovation” and is a tool of power wielded by profiteers of colonial extractivism in the digital age. It functions as a carefully crafted cover story by disguising appropriation in the language of “progress.”
The pair—Marché Arends and Kathryn Cleary—spent a year investigating “the opaque and confusing world of micro-tasking” and “spoke to African digital workers—from Nigeria, to South Africa, to Kenya—who train Large Language Models (LLMs) like ChatGPT.” It’s a crucial component of AI’s success thus far because:
LLMs don’t actually ‘think’, but depend on carefully curated training data that requires human insight and oversight at every step of the development process.
That requires a lot of manpower, which means yet more overhead for businesses already grappling with the massive costs of energy- and water-hungry data centers. Their answer appears to be “micro-tasking”:
Micro-tasking is similar to outsourcing but specific to the AI economy—hundreds of thousands of digital workers (also known as “gig” workers), mostly from Majority World countries, are tasked with refining the answers of LLMs and guiding them toward more sophisticated behavior. Their job is to correct errors, shape responses and, at least seemingly, “teach” the models how to perform.
While it’s common knowledge that AI companies need more and more data to keep their branded AI competitive, most users likely have no idea that human labor is essential to the LLMs they interact with every day. But it is, and digital workers in Africa are constantly being recruited to help sustain Silicon Valley’s rapidly growing needs.
In effect, labor is being hoarded like data. But when Arends and Cleary noticed a growing number of onboarded workers sitting idle with no tasks in a job that only pays for each accurately completed task, they discovered labor is not only being hoarded … it is also being hedged:
For a year, we studied the mass recruitment strategies of micro-tasking companies, like Mindrift—a subsidiary company of Toloka, which used to be owned by Yandex—to win training contracts with Big Tech firms, like OpenAI. It was tricky to figure out at first, but once we did, we realized it was a standard copy‑paste strategy: hire en masse despite knowing there is not enough work, to create the illusion of scale. Why? Because scale is a signal to investors to keep pouring huge sums of money into AI development.
In our investigation we describe this practice as “labor hedging,” or a form of corporate bench-warming. In other words: signaling abundance to investors, without guaranteeing work, in order to drive profits.
Here in the US, massive investments in data centers and the technological hardware often referred to as “compute” are absolutely being used to “signal abundance to investors” in order to drive up stock prices and market capitalization. If you are not actively pouring money into compute or planning another massive data center, you are obviously falling behind. By the same token:
Idle workers become proof of “capacity”—a kind of collateral that reassures investors the company can grow at a moment’s notice. That illusion feeds hype, and hype inflates stock market value, transforming precarious labor into investor confidence and keeping the AI industry awash with capital.
In the end, human labor is leveraged not for the work it produces, but for the spectacle of abundance in a gambling game of epic proportions—all in feverish pursuit of something that experts say might not even exist: “superintelligent” AI.
Obviously, Africa is a place where Silicon Valley’s AI overlords think they can get away with “hedging” labor. Arends and Cleary compare this to colonialism:
Each person we spoke with had a unique story but a singular, rather eerie, thread ran through them all: Extractive practices underpin LLM training—wages are held below subsistence levels; AI tutors, annotators, and moderators are treated as disposable; and African expertise is appropriated without recognition. This echoes colonial economies of dispossession.
As 404 Media pointed out, this model is particularly troubling for the workers who are not being “hedged.” They have to try to live with everything they see coming from the panopticon of devices surrounding most Americans most of the time:
These workers are required to stare at horrific content for many hours straight with few mental health resources, are largely managed by opaque algorithms, and, crucially, are the workers powering the runaway valuations of some of the richest and most powerful companies in the world.
But it is not going unchecked. After burning out, Michael Geoffrey Asia decided to push back:
He is now the secretary general of a Kenyan organization called the Data Labelers Association (DLA) and the author of “The Emotional Labor Behind AI Intimacy,” a testimony of his time working as the real human labor behind AI sex bots. As part of the DLA, Asia has been working to organize workers to fight for better pay, better mental health services, an end to draconian non-disclosure agreements, and better benefits for a workforce that often earns just a few dollars a day. Data labelers train, refine, and moderate the outputs of AI tools made by the largest companies in the world, yet they are wildly underpaid and haven’t benefitted from the runaway valuations of AI companies.
Ironically, Africa’s digital workers may ultimately find themselves in common cause with workers in Silicon Valley who, thanks to a growing need to deliver on AI’s promised revolution in productivity, are not being hoarded … they are being let go. - jp
Meta faces fresh storm as moderators complain over content from AI glasses
https://nation.africa/kenya/news/meta-faces-fresh-storm-as-moderators-complain-over-content-from-ai-glasses-5399722#story
Microsoft invests hundreds of millions of dollars in Africa AI push
https://african.business/2026/03/quick-reads/microsoft-invests-hundreds-of-millions-of-dollars-in-africa-ai-push
The Five Countries Capturing 90% of Africa’s AI Funding
https://launchbaseafrica.com/2026/03/25/the-five-countries-capturing-90-of-africas-ai-funding/
High-Performance Computing in Africa: Powering Science and Sustainability
https://aijourn.com/high-performance-computing-in-africa-powering-science-and-sustainability/
Beyond Strategy Documents: Africa’s AI Governance Crisis In Geopolitical Age
https://www.thepointersnewsonline.com/beyond-strategy-documents-africas-ai-governance-crisis-in-geopolitical-age/
Gig workers in Africa have been helping the US military. They had no idea.
https://www.thebureauinvestigates.com/stories/2026-02-23/appen-gig-workers-us-military


