
Is it worth using AI detectors?

With the rise in popularity of large language models, AI-detection tools such as Copyleaks, QuillBot, and the Scribbr AI Detector have emerged, along with the Smodin tool for the Polish language. At the same time, tools designed to transform AI-generated content so that it more closely resembles human writing, such as Undetectable AI, Humbot, and Humanize AI, have also appeared. The existence of these tools can significantly undermine the reliability of results produced by AI detectors.
Different AI detectors often produce conflicting results, generating both false positives (flagging human-written text as AI-generated) and false negatives (failing to detect actual AI use) (see, for example, Bellini et al., 2024). For instance, one detector concluded that the United States Constitution was most likely written by AI (Edwards, 2023). QuillBot itself warns users not to rely solely on its detector's results, especially when they might affect someone's career or academic standing. It is also important to note that AI detectors are more likely to misclassify texts written by non-native English speakers (Liang et al., 2023).
If you decide to use AI detectors, it is advisable to run several different tools and compare their results, as sketched below. For student submissions, such as assignments in which the use of AI is not allowed, it is recommended to have a thorough conversation with the author before making a final decision about the authorship of the text.
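A minimal sketch of what "comparing results" might look like in practice, assuming you have manually copied each detector's "probability of AI" score into a small script. The detector names and scores below are hypothetical placeholders, not references to any real tool's API; the point is only that disagreement between detectors should be treated as inconclusive, never as proof.

```python
# Hedged sketch: aggregate verdicts from several AI detectors and treat
# disagreement as a reason to investigate further, not as evidence of AI use.
# Scores are entered by hand from each tool's report; no real detector API is used.
from statistics import mean


def compare_detector_scores(scores: dict[str, float], threshold: float = 0.5) -> str:
    """`scores` maps a detector name to its reported probability of AI use (0-1)."""
    flagged = [name for name, s in scores.items() if s >= threshold]
    avg = mean(scores.values())
    if len(flagged) == len(scores):
        return (f"All detectors flag the text (average score {avg:.2f}); "
                "discuss with the author before drawing any conclusion.")
    if not flagged:
        return f"No detector flags the text (average score {avg:.2f})."
    return (f"Detectors disagree ({len(flagged)}/{len(scores)} flag the text, "
            f"average score {avg:.2f}); the results are inconclusive and "
            "should not be used on their own.")


# Hypothetical scores for a single essay, one entry per detector:
example = {"Detector A": 0.92, "Detector B": 0.31, "Detector C": 0.58}
print(compare_detector_scores(example))
```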




References:

Bellini, V., Semeraro, F., Montomoli, J., Cascella, M., & Bignami, E. (2024). Between human and AI: assessing the reliability of AI text detection tools. Current Medical Research and Opinion, 40(3), 353–358. DOI
Edwards, B. (2023, July 14). Why AI writing detectors don't work. Ars Technica.
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023, April 6). GPT detectors are biased against non-native English writers. arXiv.org. DOI