Artificial intelligence (AI) could replace humans in collecting data for social science research, according to a team of researchers from four Canadian and American universities.
An article published in the journal Science on June 15 by the researchers from the University of Waterloo, University of Toronto, Yale University and the University of Pennsylvania explains how AI, specifically large language models (LLMs), could affect their work.
“AI models can represent a vast array of human experiences and perspectives…which can help to reduce generalizability concerns in research,” Igor Grossmann, professor of psychology at Waterloo and a co-author of the article, said in a news release.
Philip Tetlock, a psychology professor at UPenn and article co-author, said that LLMs will “revolutionize human-based forecasting” in just three years.
In their article, the authors pose the question: “How can social science research practices be adapted, even reinvented, to harness the power of foundational AI? And how can this be done while ensuring transparent and replicable research?”
The social sciences have traditionally relied on methods such as questionnaires and observational studies. The authors argue that the ability of LLMs to pore over vast amounts of text data and generate human-like responses presents a “novel” opportunity for researchers to test theories about human behaviour at a faster rate and on a much larger scale.
LLMs could be used by scientists to test theories in a simulated environment before applying them in the real world, the article says, or gather differing perspectives on a complex policy issue and generate potential solutions.
“It won’t make sense for humans unassisted by AIs to venture probabilistic judgments in serious policy debates. I put a 90 percent chance on that,” Tetlock said. “Of course, how humans react to all of that is another matter.”
However, the authors identified one issue: LLMs are often trained to exclude sociocultural biases, raising the question of whether the models accurately reflect the populations they study.
For this reason, Dawn Parker, a University of Waterloo professor and article co-author, suggested that LLMs be made open source so that their algorithms, and even their data, can be checked, tested or modified.
“Only by maintaining transparency and replicability can we ensure that AI-assisted social science research truly contributes to our understanding of human experience,” Parker said.