The race for AI training data has taken a lot of turns over the past few years, from web scraping legal battles to synthetic data experiments to controversial licensing deals with publishers. Meta's latest move is something different entirely: it is harvesting behavioral data from its own workforce, without giving them a choice in the matter.
Meta has found a new source of training data for its AI models: its own employees. The company plans to use data culled from the mouse movements and keystrokes of its staff as it works to build more capable and efficient artificial intelligence. The surveillance tool, called the Model Capability Initiative, will record employees' screens as they go about their work. The company will also reportedly expand its internal data collection efforts as part of its AI for Work program, which has apparently been renamed the Agent Transformation Accelerator.
Meta's stated rationale is functional rather than sinister on its face. A Meta spokesperson said: "If we're building agents to help people complete everyday tasks using computers, our models need real examples of how people actually use them, things like mouse movements, clicking buttons, and navigating dropdown menus." A memo framed the effort as a way for employees to improve company models in areas where they struggle to emulate basic computer-use behaviors, and told staff that they can do their part to help by just doing their daily work.
That framing, "just do your job and you're helping," is precisely what makes this uncomfortable. Employees are not volunteers contributing to a research project. They are being passively enrolled as data sources for a commercial AI product pipeline, with no ability to decline. Meta CTO Andrew Bosworth confirmed that employees cannot opt out.
US federal law imposes essentially no limits on employer surveillance of workers, meaning Meta employees have no legal right to opt out. In contrast, European law, including the EU's General Data Protection Regulation, would likely prohibit equivalent monitoring, effectively restricting the program to the United States for now. That jurisdictional asymmetry is telling. Meta is doing in the US what it likely could not legally do in Europe, and the absence of a legal barrier appears to be the primary factor enabling it.
The broader context sharpens the picture considerably. Meta plans to lay off roughly 8,000 employees, about 10 percent of its global workforce, starting May 20, while committing up to $135 billion in capital expenditure for 2026, more than double its 2025 spending. So the company is simultaneously extracting behavioral data from its workforce to build autonomous AI agents and preparing to cut a significant portion of that workforce. The optics are, at minimum, awkward. At maximum, they are an unusually candid illustration of where enterprise AI investment is actually headed.
The broader goal appears to be building AI agents capable of performing white-collar tasks on their own, the exact software Meta is racing to ship amid competition from OpenAI and Anthropic. Meta acquired a 49 percent stake in data-labeling firm Scale AI last year for more than $14 billion, and Scale's former CEO Alexandr Wang now leads Meta Superintelligence Labs. The keystroke initiative fits within a broader, well-funded push to close the gap between AI that can answer questions and AI that can actually do work.
According to Yale law professor Ifeoma Ajunwa, keystroke logging extends surveillance to white-collar workers at a level previously experienced mainly by delivery drivers and gig workers. That observation matters. The implicit social contract in knowledge work has always included a degree of professional autonomy, the idea that how you get your work done is, within reason, your own business. What Meta is doing, and what other companies will likely follow given the competitive pressure, erodes that boundary in ways that employment law has not yet caught up with.
Meta says the data will not be used for performance assessments and that safeguards are in place to protect sensitive content. Those assurances may be genuine. They are also unverifiable by the employees whose behavior is being captured, and they can be revised at any time.
The most honest reading of this situation is that Meta has found a cost-free, legally unencumbered source of high-quality behavioral training data and is using it. The employees generating that data have no say, no compensation arrangement, and no real recourse. That may be entirely legal. It is also a preview of a dynamic that is going to become much more common, and much more contested, as companies build out the agentic AI systems they are betting their next decade on.

