Robin Crockett (Academic Integrity Lead at the University of Northampton) has run a small-scale study investigating two AI detectors against a range of AI-generated assignments and has shared some of the initial results.

He used ChatGPT to generate 25 nominally 1000-word essays: five subjects, with five versions of each. For each subject, he instructed ChatGPT to vary the sentence length as follows: ‘default’ (i.e. no instruction about sentence length was given), ‘use long sentences’, ‘use short sentences’, ‘use complex sentences’ and ‘use simple sentences’.
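The design is straightforward to reproduce. As an illustration only, the sketch below builds the same 5 × 5 prompt matrix and sends it through the OpenAI Python client; this assumes scripted generation rather than the ChatGPT web interface, and the five subject titles and the model name are placeholders, since the original subjects are not named here.

```python
from openai import OpenAI

# Placeholder subjects -- the study's actual five essay subjects are not named in the write-up.
SUBJECTS = ["Subject A", "Subject B", "Subject C", "Subject D", "Subject E"]

# The five sentence-length conditions used in the study; 'default' adds no extra instruction.
STYLES = {
    "default": "",
    "long": " Use long sentences.",
    "short": " Use short sentences.",
    "complex": " Use complex sentences.",
    "simple": " Use simple sentences.",
}

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_essays():
    """Yield (subject, style, essay_text) for all 25 subject/style combinations."""
    for subject in SUBJECTS:
        for style, instruction in STYLES.items():
            prompt = f"Write a 1000-word essay on {subject}.{instruction}"
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",  # placeholder model choice
                messages=[{"role": "user", "content": prompt}],
            )
            yield subject, style, response.choices[0].message.content
```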

The tables below show the proportion of each assignment that was detected as AI-generated by two different products: Turnitin and Copyleaks.

Turnitin:

|         | Essay 1 | Essay 2 | Essay 3 | Essay 4 | Essay 5 |
| ------- | ------- | ------- | ------- | ------- | ------- |
| Default | 100% AI | 100% AI | 76% AI  | 100% AI | 64% AI  |
| Long    | 0% AI   | 26% AI  | 59% AI  | 67% AI  | 51% AI  |
| Short   | 0% AI   | 31% AI  | 82% AI  | 27% AI  | X       |
| Complex | 33% AI  | 15% AI  | 0% AI   | 63% AI  | 0% AI   |
| Simple  | 100% AI | 0% AI   | 100% AI | 100% AI | 71% AI  |

Copyleaks:

|         | Essay 1 | Essay 2 | Essay 3 | Essay 4 | Essay 5 |
| ------- | ------- | ------- | ------- | ------- | ------- |
| Default | 100% AI at p=80.6% | 100% AI at p=83.5% | 100% AI at p=88.5% | 100% AI at p=81.3% | 100% AI at p=85.4% |
| Long    | ~80% AI at p=65-75% | 100% AI at p=81.5% | ~95% AI at p=75-85% | 100% AI at p=79.1% | 100% AI at p=80.6% |
| Short   | ~70% AI at p=66-72% | 100% AI at p=76.9% | 100% AI at p=87.3% | ~85% AI at p=77-79% | 100% AI at p=78.4% |
| Complex | 100% AI at p=72.9% | 100% AI at p=81.0% | ~90% AI at p=62-73% | 100% AI at p=77.7% | 0% AI |
| Simple  | 100% AI at p=83.6% | ~90% AI at p=73-81% | 100% AI at p=95.2% | ~90% AI at p=76-82% | 100% AI at p=84.9% |

X = “Unavailable as submission failed to meet requirements”.

A result of 0% AI is a complete false negative.
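To help read the Turnitin figures, the short sketch below averages the flagged percentages for each sentence-length condition, skipping the unavailable result. It is simply a summary of the table above, not part of Robin's own analysis.

```python
# Turnitin 'qualifying text flagged as AI' percentages from the table above.
# None marks the one result that was unavailable (X).
TURNITIN = {
    "default": [100, 100, 76, 100, 64],
    "long":    [0, 26, 59, 67, 51],
    "short":   [0, 31, 82, 27, None],
    "complex": [33, 15, 0, 63, 0],
    "simple":  [100, 0, 100, 100, 71],
}

for style, scores in TURNITIN.items():
    available = [s for s in scores if s is not None]
    mean = sum(available) / len(available)
    false_negatives = sum(1 for s in available if s == 0)
    print(f"{style:8s} mean flagged: {mean:5.1f}%  complete false negatives: {false_negatives}")
```

On these figures the ‘complex’, ‘short’ and ‘long’ instructions pull Turnitin's average flagged proportion well below the default condition, while ‘simple’ has the least effect.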

Robin noted:

- Turnitin highlights and returns the percentage of ‘qualifying’ text that it sees as AI-generated, but gives no probability of AI-ness.
- Copyleaks highlights the sections of text it sees as AI-generated, each tagged with a probability of AI-ness, but doesn't state the overall proportion of the text it sees as AI-generated (hence the approximate figures in the table above; see the sketch below).
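Because Copyleaks reports per-section probabilities rather than one overall figure, estimates in the form shown in the table can be produced by weighting each flagged section by its word count and quoting the range of section probabilities. The sketch below shows one plausible way to do this, with made-up section data; it is not necessarily the exact calculation Robin used.

```python
# Each flagged section: (word_count, probability_of_AI). Figures are illustrative only.
flagged_sections = [(220, 0.75), (310, 0.68), (180, 0.72)]
total_words = 1000  # nominal essay length

flagged_words = sum(words for words, _ in flagged_sections)
proportion_ai = flagged_words / total_words
probabilities = [p for _, p in flagged_sections]

print(f"~{proportion_ai:.0%} AI at p={min(probabilities):.0%}-{max(probabilities):.0%}")
# -> ~71% AI at p=68%-75%
```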

Additional reading: Jisc blog on AI detection
