Have you ever used artificial intelligence (AI) in your job without double-checking the quality or accuracy of its output? If so, you wouldn’t be the only one.
Our global research shows a staggering two-thirds (66%) of employees who use AI at work have relied on AI output without evaluating it.
This can create a lot of extra work for others, who must identify and correct the errors, not to mention reputational damage. Just this week, consulting firm Deloitte Australia formally apologised after a A$440,000 report it prepared for the federal government was found to contain multiple AI-generated errors.
Against this backdrop, the term “workslop” has entered the conversation. Popularised in a recent Harvard Business Review article, it refers to AI-generated content that looks good but “lacks the substance to meaningfully advance a given task”.
Beyond wasting time, workslop also corrodes collaboration and trust. But AI use doesn’t have to be this way. When applied to the right tasks, with appropriate human collaboration and oversight, AI can enhance performance. We all have a role to play in getting this right.
The rise of AI-generated ‘workslop’
According to a survey reported in the same Harvard Business Review article, 40% of US workers have received workslop from their peers in the past month.
The survey’s research team, from BetterUp Labs and Stanford Social Media Lab, found that on average, each instance took recipients almost two hours to resolve. They estimated this would add up to US$9 million (about A$13.8 million) a year in lost productivity for a 10,000-person firm.
Those who received workslop reported annoyance and confusion, and many came to see the sender as less reliable, creative and trustworthy. This mirrors earlier findings that using AI can carry a trust penalty.