Why is it important to verify content created by generative AI?
Generative AI is trained on data from sources of widely varying quality, ranging from scientific articles and books to websites, forums, and social media, and therefore does not guarantee high-quality results. It can perpetuate errors, reinforce harmful stereotypes and biases, and even generate discriminatory content because the training data is not representative (ITI, 2024; Heaven, 2023).
Hallucinations, i.e. incorrect or misleading outputs produced by AI models, are a direct consequence of how these models work. Language models generate text based on the statistical probability of word sequences, calculated from their training data. This process differs fundamentally from looking up answers in reliable sources. At the same time, as generative AI models advance, hallucinations are becoming harder to detect, which dulls users' vigilance and invites unwarranted trust (Heaven, 2024). Given the academic community's responsibility for the materials it promotes and publishes, verifying the accuracy of AI-generated content is therefore essential.
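To make this mechanism concrete, here is a minimal, deliberately toy sketch in Python. The three-sentence corpus and the bigram model are illustrative assumptions, not how any production system is built: it generates a continuation purely from word-sequence statistics, and can therefore produce a fluent sentence that happens to be false.

```python
import random
from collections import Counter, defaultdict

# Toy "training data". A real model is trained on billions of documents;
# three sentences are enough to show the mechanism.
corpus = [
    "the capital of france is paris",
    "the capital of france is beautiful",
    "the capital of spain is madrid",
]

# Count how often each word follows another (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed
    `prev` in the training data: generation as pure statistics."""
    counts = follows[prev]
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

prompt = ["the", "capital", "of", "france", "is"]
prompt.append(next_word(prompt[-1]))
print(" ".join(prompt))
# After "is", the model has seen "paris", "beautiful", and "madrid" with
# equal frequency, so it may well print "the capital of france is madrid":
# statistically plausible, perfectly fluent, and wrong, i.e. a hallucination.
```

Scaled up by many orders of magnitude, the same probabilistic mechanism is what makes large models both fluent and capable of confident errors, which is why their outputs need independent verification.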
References:
Information Technology Industry Council. (2024). Authenticating AI-Generated Content: Exploring Risks, Techniques & Policy Recommendations. link
Heaven, W. D. (2023). These six questions will dictate the future of generative AI. MIT Technology Review. link
Heaven, W. D. (2024). Why does AI hallucinate? MIT Technology Review. link