Tom Garside

Is AI changing how we need to assess student writing?


A recent article in the EL Gazette revealed a form of AI bias: second-language writers may be unfairly flagged as having used AI to generate their essays when, in fact, the writing was produced by the students themselves.


This claim was made on the back of research from Stanford University, which compared 8th-grade essays written by first-language English speakers with TOEFL practice essays written by Chinese students, taken from an online forum.


The findings reveal a worrying trend for teachers and assessors of second-language English writing, and also make an indirect comment on the type of writing typically produced by second-language English speakers, especially in formal test situations for exams such as TOEFL, TOEIC and IELTS.


Neither students nor their essays come off a production line, so student writing should not read like it has been mass-produced or probability-generated.


1) The ‘plagiarism’ risk

There is increasing concern about plagiarism, driven by how easily AI can be prompted to generate a text that matches a writing prompt, which students then submit as their own work. There are AI detectors out there, but as the study shows, a simple secondary prompt to ‘elevate the text by using literary / technical language’ makes the AI generate a more complex, more sophisticated and less detectable text.


This finding shows that a standard AI-generated text mirrors many of the features of second-language student writing. AI language models work by learning common sequences of words from vast amounts of text gathered from the internet (collected up to a training cut-off of 2021 for the models in question). They make predictions to produce the most likely next word or phrase in a specific position in a text: if a model is told to produce an academic essay, it will reproduce the phrases and structures most commonly found in other academic essays.
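To make this prediction process concrete, here is a minimal illustrative sketch in Python (not the code of any real model): it repeatedly picks the single most probable next word from a toy probability table, where the phrases and probabilities are invented purely for illustration.

    # Toy sketch of "most likely next word" generation.
    # The phrase table and probabilities are invented for illustration;
    # real language models learn these statistics from vast text corpora.
    toy_model = {
        ("in", "conclusion"): {",": 0.8, "the": 0.15, "we": 0.05},
        ("conclusion", ","): {"it": 0.6, "there": 0.3, "students": 0.1},
        (",", "it"): {"is": 0.7, "can": 0.2, "seems": 0.1},
        ("it", "is"): {"clear": 0.5, "important": 0.3, "evident": 0.2},
    }

    def next_word(context):
        # Return the single most probable next word for a two-word context.
        candidates = toy_model.get(context)
        return max(candidates, key=candidates.get) if candidates else None

    text = ["in", "conclusion"]
    for _ in range(4):
        word = next_word(tuple(text[-2:]))
        if word is None:
            break
        text.append(word)

    print(" ".join(text))  # "in conclusion , it is clear"

Because the most probable continuation is, by definition, the most commonly seen phrasing, this kind of generation naturally gravitates towards stock academic formulas unless the prompt pushes it somewhere less predictable.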


With simple prompting, this produces a kind of ‘lowest common denominator’ text: it fits the defined criteria for an essay, but with predictable, simple chains of reasoning and little in the way of style, individuality, or any interesting or unusual line of argumentation. Basically, it follows a simple formula to produce an essay that fits the prompt, but nothing more.


2) The risk of formulaic student writing


If AI essays mirror student writing, then it follows that second-language English writing to some extent follows the same formula. If AI detectors read student essays as AI-generated, then a lot of student writing must be as formulaic as an AI-generated text. And if essays like this are acceptable responses to assessed writing tasks, then the writing prompts themselves must to some extent limit the expression and interesting points of view that make an essay stand out (to a human reader, at any rate).

Essay writing can follow a formula, and there are formulas which students can use to write effective essays, but to avoid falling foul of the AI detector’s unintentional bias, students need to be writing more individual and more interesting essays. This is actually a push in the right direction for the teaching of writing. Teachers will need to find more creative and personalised prompts, based on less traditional and less formulaic topics, in order for their students to raise the sophistication of their writing.


This is not to say that all successful writing has to be produced at a higher level of competence: students at university entrance level should still have their writing appreciated for what it is, a developing style with the potential to grow further towards academic success. My argument is that lower-level students can also write more interesting and less formulaic content. Their language may not be entirely accurate, but by avoiding formulaic structures and phrasing they can show more of themselves in their writing, and write more precisely for themselves, whatever the genre.


3) The future of written assessment prompts


A second implication of this issue concerns the writing prompts set in formal language assessments. The time is coming when there will be no benefit in training students to write essays using off-the-shelf structures and paragraphing techniques (as is currently done in IELTS and TOEFL test-prep centres around the world).

Writing prompts will need to take more holistic and personal aspects of communication into account. Skills which need to be tested so that students can ‘enhance the perplexity or sophistication of a text’ themselves, rather than having to prompt an AI model to do the same, include:

  • Integrating reading or listening (or both) into writing

  • Text evaluation, where students talk about the effectiveness of specific texts for their stated purpose

  • Relating information to students’ own experience and prompting for resulting opinions

  • Text synthesis, combining ideas from several texts and producing a written argument or critical piece

All of these types of writing prompt mirror the reading, writing and combined skills necessary for success in academic and workplace settings, so the skills developed in preparation for the exam also serve students’ individual futures, not just the assessment situation.

Progressive examinations such as the Trinity Integrated Skills in English are approaching this level of criticality and individualisation, and I wonder how second-language responses to these exam prompts would fare when compared with first-language English speakers’ writing. How reliably would an AI detector flag writing produced in response to these types of prompts, rather than the familiar argument, opinion and compare/contrast questions we see today?


Tom Garside is Director of Language Point Teacher Education. Language Point delivers the internationally recognised RQF level 5 Trinity CertTESOL in an entirely online mode of study, and level 6 Trinity College Certificate for Practising Teachers, a contextually-informed teacher development qualification with specific courses which focus on online language education or online methodology.

If you are interested in knowing more about these qualifications, or you want to take your teaching to a new level with our teacher development courses, contact us or see our course dates and fees for details.
