Tools that detect content generated by artificial intelligence should be tested by faculty and students before deployment, according to Tim Boltz, education and market program executive at Carahsoft.
Such tools, and the generative AI technologies they are meant to police, are still maturing, so a testing phase may help end users learn how to use them or decide whether to use them at all, Boltz said in a column posted on the Carahsoft website on Wednesday, in which he addressed the themes and issues discussed at the latest EdTech Talks Summit.
Although upholding academic integrity is critical as generative AI spreads into all industries — including education — AI detection tools are not always accurate, so their use and output should be treated with discretion, Boltz noted.
The Carahsoft executive pointed out that imperfections in the technology may put vulnerable students at a further disadvantage; for example, a detector might mistakenly flag as AI-generated the work of students for whom English is a second language. This would undermine human-centricity and inclusivity, which Boltz describes as guiding principles for responsible and trustworthy AI.
Nevertheless, Boltz said IT teams should be ready to implement AI, a change that schools must embrace to help prepare students for the future.