How academic AI detection is disproportionately targeting neurodivergent students.
Artificial intelligence (AI) is a hot topic in today’s world. From social media to the news, AI is rapidly becoming part of daily life. This is most evident in the syllabi that professors go over every semester: there are extensive regulations on AI and academic misconduct, and at some universities AI detection tools are used daily when professors grade assignments and exams. While AI detection tools aren’t yet permitted at the University of Saskatchewan, professors still have to screen student work for AI themselves. But what happens when these systems and methods fail to tell what is actually AI from what isn’t?
AI writing tools are built through machine learning, a process in which a system takes in writing from all over the internet and analyzes the patterns within it. Through being fed this data, it picks up facts, ideas and grammar, and learns how different ideas connect. When an AI is asked a question, it draws on everything it has been fed about that topic and generates what it predicts to be the most accurate answer based on that data.
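To make that idea concrete, here is a deliberately tiny sketch in Python. It is not how tools like ChatGPT actually work, since real systems rely on neural networks trained on vast amounts of text, but it shows the same underlying principle of learning patterns from data and predicting the most likely continuation. The sample text and function name are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data": real models learn from billions of words.
training_text = "the cat sat on the mat and the cat ate the food".split()

# Count which word tends to follow which. This counting is the
# simplest possible version of "learning patterns" from text.
next_words = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    if word not in next_words:
        return None
    return next_words[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat", the most common continuation
```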
While AI has its limitations, its growing sophistication can make it difficult to discern if and when someone has used AI in their writing. This is where AI detectors come in. These systems use language models similar to AI writing tools such as ChatGPT to estimate whether AI was likely used. They also analyze the complexity, length and structure of the sentences in a piece of writing to deduce whether it was likely produced by an AI system.
For example, most human writing features an abundance of creative language and complex subject matter. AI systems tend to disregard this style in favour of more basic, factual language, typically to make the text more universally understandable. Additionally, human writing is likely to contain at least a few grammatical or spelling errors, which AI would typically avoid.
AI also tends to use complex sentence structures and grammar that, while functionally correct, isn’t often used by the general public or by students in essays. On top of this, AI often produces writing that is disjointed, non-linear and repetitive.
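To give a rough sense of how these cues can be turned into an automated flag, here is a highly simplified sketch in Python. The function names and thresholds are invented for this example, and commercial detectors use far more sophisticated statistical models, but the basic logic is similar: writing with little variation in sentence length and a lot of repeated words gets pushed toward an “AI” label, which, notably, is also the kind of pattern that can describe a neurodivergent student’s genuine work.

```python
import re
import statistics

def sentence_length_variation(text: str) -> float:
    """Spread of sentence lengths; human writing tends to vary more."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pstdev(lengths)

def repetition_rate(text: str) -> float:
    """Fraction of words that are repeats of earlier words."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return 1 - len(set(words)) / len(words)

def flag_as_possible_ai(text: str) -> bool:
    # Hypothetical thresholds chosen only for this illustration.
    return sentence_length_variation(text) < 3.0 and repetition_rate(text) > 0.4
```

Even this toy version shows why a writer with a consistent, repetitive style can land on the wrong side of a threshold through no fault of their own.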
It is important to remember that AI detection tools are still extremely new and still being developed. They have real margins of error: premium tools are about 84 percent accurate and the best free tools about 68 percent, which means roughly one in six judgments from even the better tools can be wrong. These false positives can be detrimental to the students who receive them, causing mental stress, straining the relationship between student and professor, and potentially leading to misconduct penalties for innocent students.
A growing number of neurodivergent students, mainly those with autism, attention deficit hyperactivity disorder or dyslexia, are having their writing flagged by these detection systems. While no single cause has been pinpointed, one main suspected reason is masking, where neurodivergent individuals consciously or subconsciously hide their neurodivergence by blending into society to be more accepted by others. Research suggests this is done through observation, analysis and the mirroring of others’ behaviours.
This intake and reuse of information from others, while not misconduct, can come across as AI-generated because it can resemble the style AI systems use in their writing. Repetition, disjointed writing and non-linear thought processes are also common to both AI output and neurodivergent individuals’ writing.
Flat affect is also common in neurodivergent individuals and can be mistaken for AI writing. Flat affect is when someone struggles to show and express their emotions, or shows a lack of excitement or enthusiasm even for things they find interesting. This is not an absence of emotion, but rather difficulty articulating emotions outwardly. It can make a person’s writing feel flat or lacking in creative expression, which can be mistaken for the strictly factual tone of AI systems.
Neurodivergent students often already struggle in university settings for countless reasons. From communication challenges to executive functioning difficulties, university can be a huge adjustment for these individuals, on top of the regular stressors of being a student. Now, being accused of using AI has been added to that list of worries.
Even when professors check students’ assignments manually, there is the potential for neurodivergent students’ writing to be called out. Instructors are likely to flag anything they consider unusual in a student’s writing as AI. This, again, poses a problem for neurodivergent students, whose writing may read differently from that of their neurotypical peers.
On top of this, neurodivergence isn’t always easy to see from the outside. The masking done by many neurodivergent students means they are able to blend into the university environment, even if their writing doesn’t. This can happen even when a student doesn’t know they are neurodivergent. It also means a professor may not know that the student whose writing they are reading is neurodivergent, and therefore cannot take that into account when trying to determine whether the writing is AI.
Neurodivergent students who have been falsely accused of using AI can reach out to the Academic Advocacy Office, which is part of the University of Saskatchewan Students’ Union. The office also has resources that give more information on academic misconduct, as well as every student’s responsibilities regarding AI usage. Each course at the University of Saskatchewan also includes AI policies in its syllabus to inform students about acceptable AI use.