Legend has it that William Tell shot an apple from his young son’s head. There are many interpretations of the tale, but from the perspective of the theory of technology, a few elements are especially salient.
First, Tell was an expert marksman. Second, he knew his bow was reliable but understood it was just a tool with no independent agency. Third, Tell chose the target.
What does all this have to do with artificial intelligence? Metaphorically, AI (think large language models or LLMs, such as ChatGPT) can be thought of as a bow, the user is the archer, and the apple represents the user’s goal. Viewed this way, it’s easier to work out how AI can be used effectively in the workplace.
To that end, it’s helpful to consider what is known about the limitations of AI before working out where it can – and can’t – help with efficiency and productivity.
First, LLMs tend to produce outputs that are untethered from reality. A recent study found that as many as 60% of their answers were incorrect. Premium versions, moreover, delivered wrong answers with even more confidence than their free counterparts.
Second, some LLMs are closed systems – that is, they do not update their “beliefs” after training. In a world that is constantly changing, this static quality can be misleading: such models drift away from reality and may not be reliable.
What’s more, there is some evidence that interactions with users degrade their performance. Researchers have found, for example, that LLMs become more covertly racist over time. Their output, consequently, is not predictable.
Third, LLMs have no goals and are not capable of independently discovering the world. They are, at best, just tools to which a user can outsource their exploration of the world.
Finally, LLMs do not – to borrow a term from Robert Heinlein’s 1961 sci-fi novel Stranger in a Strange Land – “grok” (intuitively understand) the world they are embedded in. They are far more like jabbering parrots that give the impression of being smart.
Consider how LLMs mine data for statistical associations between words, which they then use to mimic human speech. The AI does not know what those statistical associations mean. It does not know, for example, that the crowing of a rooster does not cause the sun to rise.
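To make the idea concrete, here is a deliberately simplified sketch in Python: a toy “bigram” model that predicts each next word purely from word co-occurrence counts. Real LLMs use neural networks trained on vast corpora rather than raw counts, and the corpus and names below are invented for illustration – but the sketch shows how plausible-sounding text can emerge from statistics alone, with no grasp of cause and effect.

```python
import random
from collections import defaultdict, Counter

# Toy corpus: the only "world" this model ever sees.
corpus = (
    "the rooster crows and the sun rises . "
    "the rooster crows and the sun rises . "
    "the sun rises and the day begins ."
).split()

# Count which word follows which: pure statistical association,
# with no notion of cause and effect.
bigrams = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    bigrams[current_word][next_word] += 1

def generate(start: str, length: int = 8) -> str:
    """Mimic speech by sampling statistically likely continuations."""
    words = [start]
    for _ in range(length):
        followers = bigrams.get(words[-1])
        if not followers:
            break
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("rooster"))
# Possible output: "rooster crows and the sun rises . the rooster"
# The model has learned that "crows" tends to precede "the sun rises";
# it has no idea that the rooster does not make the sun come up.
```

The same limitation scales up: a larger model with richer statistics produces far more fluent text, but fluency alone does not add understanding.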
Of course, an LLM’s ability to mimic speech is impressive. But the ability to mimic something does not mean the mimic possesses the attributes of the original.