AI and writing: much ado about generated essays

A discussion thread on artificial intelligence (AI) and academic writing recently emerged on Reddit and Twitter, following a Reddit user's claim to have used AI to write well-graded essays.

The Guardian picked up on this discussion with an article entitled “‘Full-on robot writing’: the artificial intelligence challenge facing universities.” The article provides background links on specific developments in AI writing and describes how universities are responding to them, noting that some institutions (the article focuses on Australia) now treat such work as plagiarism in their policy statements. It poses the question of how educators should view current developments:

“To put the argument another way, AI raises issues for the education sector that extend beyond whatever immediate measures might be taken to govern student use of such systems. One could, for instance, imagine the technology facilitating a “boring dystopia”, further degrading those aspects of the university already most eroded by corporate imperatives. Higher education has, after all, invested heavily in AI systems for grading, so that, in theory, algorithms might mark the output of other algorithms, in an infinite process in which nothing whatsoever ever gets learned.

But maybe, just maybe, the challenge of AI might encourage something else. Perhaps it might foster a conversation about what education is and, most importantly, what we want it to be. AI might spur us to recognise genuine knowledge, so that, as the university of the future embraces technology, it appreciates anew what makes us human.”

Despite all the hand-wringing, an Inside Higher Ed piece written by the professor of a course called “Rhetoric and Algorithms” outlines the results of an in-class experiment in which the professor encouraged undergraduate students to use as many AI tools as possible to create an essay. The professor found the overall quality of the results to be poor, but perhaps more importantly for the broader discussion of this topic, students did not like the process of using such tools:

“I asked my students to write short reflections on their AI essays’ quality and difficulty. Almost every student reported hating this assignment. They were quick to recognize that their AI-generated essays were substandard, and those used to earning top grades were loath to turn in their results. The students overwhelmingly reported that using AI required far more time than simply writing their essays the old-fashioned way would have. To get a little extra insight on the ‘writing’ process, I also asked students to hand in all the collected outputs from the AI text generation ‘pre-writing.’ The students were regularly producing 5,000 to 10,000 words (sometimes as many as 25,000 words) of outputs in order to cobble together essays that barely met the 1,800-word floor.”

The professor argues that good writers produce better AI output, and notes that such assignments can be used effectively to illustrate the writing submission and feedback process, since the tools provide immediate feedback that motivated students can use to learn. He argues that instructors worried about plagiarism in their assigned essays can mitigate the risk of AI-generated work by making assignments very specific, and notes that educators and university policymakers must take developments in this area into account:

“I am deeply skeptical that even the best models will ever really allow students to produce writing that far exceeds their current ability. Effective prompt generation and revision are dependent on high-level writing skills. Even as artificial intelligence gets better, I question the extent to which novice writers will be able to direct text generators skillfully enough to produce impressive results.”

I would tend to agree with this author, given the current state of the technology. I do wonder, though, how current plagiarism tools could track AI-written content that does not appear in a tool's corpus of comparative texts, and what burden this might place on writing instructors in determining whether work is original.

And the more I deal with written texts, the more I feel that written assignments are crucial to quality higher education. The academic writing process, in my opinion, sharpens students’ skills in many areas, particularly if work is carefully reviewed by instructors who provide appropriate and constructive feedback. I also agree with the author of the second article that AI tools can be helpful learning aids (I myself use AI grammar and language tools for this purpose).

I do, however, worry about a world, as alluded to in the first article, in which journalistic content is written by AI. Rather than questioning the role of writing in higher education, perhaps we should ask where and how AI (not just its written output) interacts with the real world, potentially skewing perceptions.

Graham, S. S. (October 24, 2022). AI-Generated Essays Are Nothing to Worry About. Inside Higher Ed. https://www.insidehighered.com/views/2022/10/24/ai-generated-essays-are-nothing-worry-about-opinion

Sparrow, J. (November 18, 2022). ‘Full-on robot writing’: the artificial intelligence challenge facing universities. The Guardian. https://www.theguardian.com/australia-news/2022/nov/19/full-on-robot-writing-the-artificial-intelligence-challenge-facing-universities
