For a week this summer, Taylor and her roommate strapped GoPro cameras to their heads while they painted, sculpted, and handled daily chores. Their goal was to help train an AI vision system, keeping their recordings synchronized so the AI could observe the same actions from different perspectives. The job was demanding in several respects, but the pay was generous, and it let Taylor dedicate much of her time to creating art.
“We’d get up, go through our morning routine, then put on the cameras and sync up the clocks,” she explained. “After that, we’d make breakfast and wash up. Then we’d split up and focus on our art projects.”
They were expected to deliver five hours of synchronized video each day, but Taylor soon found she needed to set aside seven hours to allow for breaks and physical recovery.
“It would give you headaches,” she recalled. “When you took it off, you’d have a red mark on your forehead.”
Taylor, who preferred not to share her surname, was freelancing as a data contributor for Turing, an AI firm that connected her with TechCrunch. Turing’s aim wasn’t to teach the AI to paint, but to help it develop broader abilities in visual reasoning and step-by-step problem-solving. Unlike language models, Turing’s vision system would be trained exclusively on video content—most of which would be sourced directly by Turing.
In addition to artists like Taylor, Turing is also recruiting chefs, builders, and electricians—essentially anyone whose work involves manual skills. Sudarshan Sivaraman, Turing’s Chief AGI Officer, told TechCrunch that gathering data by hand is the only way to achieve the variety needed in their dataset.
“We’re collecting data from a wide range of blue-collar professions to ensure diversity during pre-training,” Sivaraman explained to TechCrunch. “Once we’ve gathered all this material, the models will be able to interpret how different tasks are carried out.”
Turing’s approach to building vision models reflects a broader trend in the AI industry’s relationship with data. Instead of relying on data scraped from the internet or gathered by low-wage annotators, companies are now investing heavily in carefully selected, high-quality data.
With AI’s core capabilities already proven, businesses are turning to proprietary training data as a way to stand out. Rather than outsourcing, many now handle data collection in-house.
Fyxer, an email company that uses AI to organize messages and compose responses, is one such example.
After initial trials, founder Richard Hollingsworth realized the most effective strategy was to use several smaller models, each trained on very specific data. While Fyxer builds on an existing foundation model—unlike Turing—the underlying principle is similar.
“We found that the performance really hinges on how good the data is, not just how much you have,” Hollingsworth said.
This led to some unusual staffing decisions. In the company’s early days, Hollingsworth noted, there were times when executive assistants outnumbered engineers and managers four to one, as their expertise was crucial for training the AI.
“We relied heavily on skilled executive assistants because we needed to teach the model the basics of which emails deserved a reply,” he told TechCrunch. “It’s a challenge that’s all about people. Finding the right talent is tough.”
Data collection continued at a steady pace, but over time, Hollingsworth became more selective, favoring smaller, more refined datasets for post-training. As he put it, “the quality of the data, not the quantity, is the thing that really defines the performance.”
This is especially important when synthetic data is involved, as it both expands the range of training scenarios and amplifies any weaknesses in the original data. Turing estimates that 75% to 80% of its vision data is synthetic, generated from the initial GoPro recordings. This makes maintaining the quality of the original footage even more critical.
“If your pre-training data isn’t high quality, then any synthetic data you generate from it will also fall short,” Sivaraman said.
Beyond just quality, there’s a strong business case for keeping data collection in-house. For Fyxer, the effort put into gathering data is one of its strongest defenses against competitors. As Hollingsworth sees it, while anyone can use an open-source model, not everyone can assemble a team of expert annotators to make it truly effective.
“We’re convinced the right approach is through data,” he told TechCrunch, “by developing custom models and using high-quality, human-curated training data.”
Correction: An earlier version of this article misidentified Turing. TechCrunch apologizes for the mistake.