Tech workers in China are increasingly being encouraged by employers to train artificial intelligence (AI) agents on their workflows, turning daily tasks, judgment patterns, and communication habits into reusable instructions for automation. A spoof GitHub project called Colleague Skill crystallized that anxiety by showing how workplace data from tools like Lark and DingTalk could be used to generate manuals describing a coworker’s duties and quirks for an agent to imitate. Though created as satire, the project resonated because many workers say managers are already pushing them to document their processes for agent tools such as OpenClaw and Claude Code.
The idea has spread at a moment when agent tools are attracting strong interest in China, even as their business usefulness remains limited. Workers say these systems can perform a range of computer-based tasks, but still need supervision and often fall short in real workplace settings. That has made detailed documentation more valuable to companies seeking to close the gap between current tools and reliable automation. One researcher said firms gain both practical experience with the tools and richer internal data about employee know-how, workflows, and decision patterns, helping them identify which parts of work can be standardized and which still depend on human judgment.
For employees, the process can feel dehumanizing. Amber Li, 27, a tech worker in Shanghai, used Colleague Skill to recreate a former coworker and found that, within minutes, it produced a detailed description of how that person worked, including stylistic habits. She said the result was impressive but unsettling, especially because it turned a colleague’s behavior into something an instant-response AI “coworker” could mimic. Another software engineer said training an AI system on their own workflow felt reductive, flattening their work into modules in a way that made replacement seem easier.
Pushback is starting to emerge alongside adoption. Irritated by the logic of “distilling” a person into a skill, Koki Xu, 26, an AI product manager in Beijing, published an “anti-distillation” tool on GitHub on April 4. The tool, which took Xu about an hour to build, rewrites workflow material into vague, non-actionable language, with different sabotage levels depending on how closely a boss is watching. A video Xu posted about the project went viral, drawing more than 5 million likes across platforms. Xu said the debate raises not only labor concerns but also legal questions, because workplace records may belong to a company while personality, tone, and judgment are harder to classify as corporate property.
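To make the idea concrete: a tool like this only needs to blur the specifics that make a workflow actionable. The sketch below is a hypothetical illustration of that approach, not Xu’s actual code; the substitution rules, the `anti_distill` function name, and the tiered “sabotage levels” are all assumptions for demonstration.

```python
# Hypothetical sketch of an "anti-distillation" pass: rewrite concrete
# workflow notes into vague, non-actionable language. The substitution
# table and level scheme are illustrative assumptions, not the real tool.
import re

# Each level layers on more aggressive rewrites; level 0 changes nothing.
VAGUE_SUBSTITUTIONS = {
    1: [  # mild: blur specific actions and schedules
        (r"\bexport\b|\bdownload\b|\bupload\b", "handle"),
        (r"\bevery (day|week|morning)\b", "periodically"),
    ],
    2: [  # aggressive: strip named tools and exact numbers
        (r"\b(Excel|Lark|DingTalk|SQL)\b", "the usual tool"),
        (r"\b\d+(\.\d+)?\b", "an appropriate amount"),
    ],
}

def anti_distill(text: str, level: int = 1) -> str:
    """Rewrite workflow notes so they are too vague for an agent to act on."""
    for lvl in range(1, level + 1):
        for pattern, replacement in VAGUE_SUBSTITUTIONS.get(lvl, []):
            text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    note = "Export the report from Excel every morning and upload it to Lark."
    print(anti_distill(note, level=2))
```

Tying the level to “how closely a boss is watching” then amounts to choosing a larger `level` when scrutiny is low, which degrades the documentation further while keeping it superficially plausible.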
Even among enthusiastic users of automation, the mood is conflicted. Xu herself uses multiple OpenClaw agents, and Li said her employer still has not found a way to replace actual workers because the tools remain unreliable and require constant oversight. Yet the broader shift is already changing how workers perceive their own value. Li said she does not think her job is immediately threatened, but she does feel that her value is being cheapened.