With the emergence of advanced AI systems, the way social science research is conducted could change. Social sciences have historically relied on traditional research methods to gain a better understanding of individuals, groups, cultures and their dynamics.
Large language models are becoming increasingly capable of imitating human-like responses. As my colleagues and I describe in a recent Science article, this opens up opportunities to test theories on a larger scale and with much greater speed.
But our article also raises questions about how AI can be harnessed for social science research while also ensuring transparency and replicability.
Using AI in research
There are a number of possible ways AI could be used in social science research. First, unlike human researchers, AI systems can work non-stop, providing real-time interpretations of our fast-paced, global society.
AI could act as a research assistant by processing enormous volumes of human conversations from the internet and offering insights into societal trends and human behaviour.
Another possibility is to use AI systems as actors in social experiments. A sociologist could use large language models to simulate social interactions between people and explore how specific characteristics, such as political leanings, ethnic background or gender, influence subsequent interactions.
Most provocatively, large language models could serve as substitutes for human participants in the initial phase of data collection.
For example, a social scientist could use AI to test ideas for interventions that improve decision-making. This is how it would work: first, scientists would ask the AI to simulate a target population group. Next, they would examine how simulated participants from this group react in a decision-making scenario. Scientists would then use insights from the simulation to identify and test the most promising interventions.
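The three steps above can be sketched in code. This is a toy illustration, not a real study: `simulate_response` is a hypothetical stand-in for a call to a large language model, and the personas, scenario and interventions are invented for the example.

```python
import random

rng = random.Random(0)  # fixed seed so the sketch is reproducible

def simulate_response(persona, scenario, intervention):
    # Hypothetical stand-in for querying a large language model with a
    # persona description, a decision scenario and an optional intervention.
    # Toy heuristic: the intervention nudges the simulated participant
    # toward the "deliberate" choice more often.
    p_deliberate = 0.5 if intervention is None else 0.7
    return "deliberate" if rng.random() < p_deliberate else "impulsive"

def run_simulation(personas, scenario, interventions, n_trials=200):
    """Steps 1-3: simulate the target group under each candidate
    intervention and report the rate of deliberate choices."""
    results = {}
    for intervention in interventions:
        deliberate = sum(
            simulate_response(p, scenario, intervention) == "deliberate"
            for p in personas
            for _ in range(n_trials)
        )
        results[intervention] = deliberate / (len(personas) * n_trials)
    return results

# Invented target population and decision-making scenario.
personas = [{"age_group": "18-25"}, {"age_group": "26-40"}, {"age_group": "41-65"}]
scenario = "choose between an immediate small reward and a delayed larger one"
rates = run_simulation(personas, scenario, [None, "reflection prompt"])
```

Comparing `rates` across interventions would then tell the researcher which intervention looks most promising to take forward.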
Obstacles lie ahead
While the potential for a fundamental shift in social science research is profound, so are the obstacles that lie ahead.
First, the narrative about existential threats from AI could pose an obstacle. Some experts warn that AI has the potential to bring about a dystopian future, along the lines of the infamous Skynet from the Terminator franchise, in which sentient machines cause humanity's downfall.
These warnings might be somewhat misguided, or at the very least, premature. Historically, experts have shown a poor track record when it comes to predicting societal change.
Present-day AI is not sentient; it’s an intricate mathematical model trained to recognize patterns in data and make predictions. Despite the human-like appearance of responses from models such as ChatGPT, these large language models are not human stand-ins.
Large language models are trained on a vast number of cultural products, including books, social media texts and YouTube comments. At best, they represent our collective wisdom rather than intelligent individual agents.
The immediate risks posed by AI are less about sentient rebellion and more about mundane issues that are nonetheless significant.