Hello and welcome to Eye on AI…In this edition: The cost of AI “workslop”…Nvidia’s investment in OpenAI…and Google DeepMind eyes a new AI risk.
Hi, it’s Beatrice Nolan here, filling in for Jeremy Kahn, who is out today. I’ve spent a lot of time recently thinking about the promise of AI-fueled productivity in the workplace, especially after that MIT report found that the majority of companies’ AI pilots weren’t living up to that promise.
In the past year, the number of companies running entire workflows with AI has almost doubled, while overall workplace use has also doubled since 2023. Despite the dramatic uptake of the tech, a recent MIT Media Lab study still found that 95% of the organizations embracing AI weren’t seeing a clear return on those investments.
Some investors, already nervous about an “AI bubble,” chose to see the report as an indictment of AI as a whole. But, as Jeremy pointed out at the time, the report actually put the blame for the lack of productivity gains on a “learning gap”—people and organizations not understanding how to use the AI tools properly—rather than on the performance of the technology itself.
New research suggests an alternative explanation: that the presence of AI in the workplace may actually be dragging down productivity. According to a recent and ongoing survey from BetterUp Labs in collaboration with Stanford University’s Social Media Lab, some employees are using AI to create low-effort “workslop,” which is time-consuming to clean up.
Workslop, a term coined by the researchers and modeled on the AI-generated “slop” clogging social media feeds, is defined as “AI-generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.”
Workslop itself is not new. There are already reports that it’s creating a niche economy of its own, with some freelance workers saying they’re being hired—often at premium rates—to clean up the sloppy copy, clunky code, and awkward images that AI leaves behind.
What the new research does show is just how pervasive and costly workslop has become within organizations.
The cost of AI-generated work
Out of 1,150 full-time U.S. employees surveyed, 40% said they’ve encountered workslop in the past month. Just under half of this low-quality work is exchanged between colleagues at the same level. A further 18% of respondents said they received it from direct reports, while 16% said it came from managers or people higher up the corporate ladder.
Far from speeding up workflows, this AI-generated slop created more work, employees said. According to the research, employees spent just shy of two hours dealing with each instance of workslop they received. Based on time spent and self-reported salaries, the researchers calculated that workslop could cost an individual employee $186 per month. For an organization of 10,000 workers, that adds up to more than $9 million a year in lost productivity.
The incidents have morale costs as well, with employees reporting being annoyed, confused, and offended when they receive the low-quality work. According to the research, half of the people surveyed viewed colleagues who produced workslop as less creative, capable, and reliable. They were also seen as less trustworthy and less intelligent.
Overall, employees receiving low-quality work were less inclined to collaborate with their colleagues.
Why workslop happens
Some level of AI slop is a natural byproduct of current AI models. LLMs are designed to generate content quickly by predicting the most likely next word or pattern, not to guarantee originality or meaningful insight. Models also hallucinate, which can undermine the accuracy of AI-generated work.
But the new research does point to a lack of employee understanding—or care—when it comes to using AI tools. Top-down AI mandates from leadership often emphasize experimentation without providing clear guidance. And while experimentation is part of adopting new tech, encouraging AI usage without direction can pressure employees to produce output even when it’s inappropriate.
So how do companies stem the tide of workslop? The researchers suggest clearer guidelines on when and how AI should be used, encouraging purposeful rather than shortcut-driven use of the tech, and fostering collaboration and greater transparency among employees about AI use. Without these measures, companies rushing to adopt AI risk creating more friction than efficiency.
With that, here’s more AI news.
Beatrice Nolan
FORTUNE ON AI
‘Every copilot pilot gets stuck in pilot’—unless companies balance data security and innovation, say experts — Sharon Goldman
Exclusive: Former Google DeepMind researchers secure $5 million seed round for new company to bring algorithm-designing AI to the masses — Jeremy Kahn
Trump’s $100,000 H-1B fee could choke off startups’ access to AI talent and widen Big Tech’s dominance — Beatrice Nolan
How Sarah de Lagarde, who lost two limbs in a train accident, is using AI to promote accessible new tech—including her “kick-ass robot arm” — Aslesha Mehta
EYE ON AI NEWS
Nvidia plans a $100 billion investment in OpenAI. Hot on the heels of its $5 billion pledge to former rival Intel, Nvidia is set to invest up to $100 billion in OpenAI. Under the partnership, Nvidia will supply at least 10 gigawatts of systems, with the first gigawatt expected online in 2026. CEO Jensen Huang called it just the beginning, promising far more compute capacity to come. However, some investors are warning of “circularity” in Nvidia’s business strategy, where the company boosts demand for its AI chips by investing in startups like OpenAI, which then use that funding to purchase even more Nvidia hardware. You can read more here.
Experts call for AI ‘red lines.’ Over 200 experts, including 10 Nobel laureates, AI pioneers from OpenAI, Google DeepMind, and Anthropic, and former world leaders, have called for international “red lines” on AI development by the end of 2026. The signatories warned that AI’s “current trajectory presents unprecedented dangers,” arguing that “an international agreement on clear and verifiable red lines is necessary.” The experts cited risks like engineered pandemics, mass unemployment, and loss of human control over AI, urging an enforceable global agreement. The statement is timed to coincide with the UN General Assembly. You can read more here.
AI leaders weigh in on new H-1B visa fees. Nvidia CEO Jensen Huang and OpenAI CEO Sam Altman both shared their thoughts on Trump’s new $100K H-1B fee after the sudden change sparked a wave of panic in Silicon Valley over the weekend. The AI heavyweights signaled their support for the Trump administration’s hike of the visa fee during an interview with CNBC. Huang said he was “glad to see President Trump making the moves he’s making” while Altman said financial incentives and streamlining the process “seems good to me.” The move could reshape hiring in the U.S. tech sector, especially within the already-strained AI talent pool, which relies heavily on highly skilled visa holders, particularly from India and China. You can read more here.
EYE ON AI RESEARCH
Google DeepMind zeroes in on new AI risks. DeepMind released version 3.0 of its Frontier Safety Framework on Monday. The update introduces a new Critical Capability Level (CCL) focused on “harmful manipulation,” which the company defined as “AI models with powerful manipulative capabilities that could be misused to systematically and substantially change beliefs and behaviors in identified high stakes contexts over the course of interactions with the model, reasonably resulting in additional expected harm at severe scale.” The company also expanded the framework to address the risk of misaligned AI models that resist being shut down by humans, and cited new misalignment risks stemming from a model’s potential for undirected action at higher capability levels. To address these risks, the company says it is running new evaluations, including human participant studies. You can read more from Axios here.
AI CALENDAR
Oct. 6-10: World AI Week, Amsterdam
Oct. 21-22: TedAI San Francisco
Nov. 10-13: Web Summit, Lisbon
Nov. 26-27: World AI Congress, London
Dec. 2-7: NeurIPS, San Diego
Dec. 8-9: Fortune Brainstorm AI San Francisco. Apply to attend here.
BRAIN FOOD
Should AI really be used for therapy? An increasing number of people are turning to AI chatbots for mental health support. It’s easy to see why: nearly 1 in 4 adults with mental illness in the U.S. report having their treatment needs unmet, often due to cost, stigma, or lack of access. However, the practice is coming under increased scrutiny from regulators after several fatal incidents linked to people relying on AI bots during severe mental health struggles. In one case, the mother of a 29-year-old woman who took her own life wrote in The New York Times that her daughter had been leaning on OpenAI’s ChatGPT for psychological support, but the advice given by the bot was inadequate for someone in her advanced state of depression. The parents of two teenagers have also blamed AI chatbots for encouraging their children to end their lives. The American Psychological Association has called the use of generic AI chatbots for mental health support a “dangerous trend” and urged federal regulators to implement safeguards against AI chatbots posing as therapists, but regulators are scrambling to keep pace with the tech. You can read more on how this is affecting young people in particular here.