Identifying Trends in How Readers Evaluate Generative AI Writing: An Engineering Case Study
Generative Artificial Intelligence (GenAI) has seen widespread growth in use over the last few years, particularly in writing. Used judiciously, it can be a very helpful tool, but used carelessly it can undermine the human author. Errors and other “hallucinations” are a particular risk, because these models generate text from their input rather than thoroughly researching the topic at hand. This shifts the burden of fact-checking onto readers. While that is true of any informative writing, readers may not know that GenAI has been used and therefore may not realize the writing needs to be evaluated differently. Identifying trends in how readers process such writing could help close that gap, giving readers tools to see through the errors, “hallucinations”, and other incorrect information that may slip into writing that employs GenAI.
For this experiment, 17 participants were recruited, all of them engineering students. The first two participants had incomplete data sets due to technical difficulties, leaving 15 participants with usable data. Participants were tasked with identifying errors in four papers, two written by Generative AI and two by a human author. Eye tracking supplemented this data with measures such as fixations on errors.
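The abstract does not specify the statistical analysis, but a minimal Python sketch of one plausible approach is shown below: paired comparisons of per-participant error-detection rates and fixation measures across the GenAI-written and human-written conditions. The record layout, the function names, and the choice of a paired t-test are illustrative assumptions, not the study's actual method.

```python
# Illustrative sketch only: one way per-participant error-detection
# rates and fixation measures from the two conditions could be compared.
# Names and structure are hypothetical, not taken from the study itself.
from dataclasses import dataclass
from scipy import stats

@dataclass
class ParticipantRecord:
    # Fraction of planted errors the participant identified, per condition.
    genai_detection_rate: float
    human_detection_rate: float
    # Mean fixation duration (ms) on error regions, per condition.
    genai_error_fixation_ms: float
    human_error_fixation_ms: float

def compare_conditions(records: list[ParticipantRecord]) -> None:
    """Paired t-tests, since each participant read papers in both conditions."""
    detection = stats.ttest_rel(
        [r.genai_detection_rate for r in records],
        [r.human_detection_rate for r in records],
    )
    fixation = stats.ttest_rel(
        [r.genai_error_fixation_ms for r in records],
        [r.human_error_fixation_ms for r in records],
    )
    print(f"detection rate: t = {detection.statistic:.3f}, p = {detection.pvalue:.4f}")
    print(f"error fixation: t = {fixation.statistic:.3f}, p = {fixation.pvalue:.4f}")
```

A paired design would match the within-subjects structure described above, in which each of the 15 participants with usable data read papers from both conditions.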
Author(s):
Carter Storrusten | Graduate Student | Montana State University
Nadezhda Modyanova | Research Assistant Professor | Montana State University
Bernadette McCrory | Associate Professor | Montana State University
Category
Abstract Submission
Description
Primary Track: Data Analytics and Information Systems
Secondary Track: Industry Case Studies, ISE Tools and Professional Development
Primary Audience: Academician