As advancements in artificial intelligence accelerate by the day, many professional fields face questions about the technology’s role in communications and analysis-based work. Software like OpenAI’s ChatGPT can mimic human writing patterns so convincingly that it becomes difficult to distinguish text generated by a machine from text produced by a person. Even many AI detection programs struggle to tell the difference, turning up numerous false positives and attributing human writing to artificial intelligence. Yet, despite the uncertainty and questions that remain, AI continues to occupy a prominent position in many disciplines, perhaps none more so than education, where the ability to accurately gauge a student’s understanding and mastery through their written communication is essential.
Troubled by the specter of students surreptitiously turning in AI-generated assignments, many teachers are, unsurprisingly, outspoken in their critiques of AI programs. A 2023 study by the Pew Research Center found that 25% of teachers in K-12 public schools in the United States felt that AI “does more harm than benefit” when it comes to children’s education. Common concerns include students failing to complete assignments in full, losing opportunities to refine and demonstrate their mastery of a topic, disengaging from the curriculum, and ultimately stymieing their own education.
Other critics worry about the increased workload and ethical burdens facing teachers, who must spend extra time and resources merely determining whether a given assignment was written by a student, an exacerbated form of the long-standing academic problem of plagiarism. Still others wonder about the long-term ramifications of investing too soon: even if schools get on board with AI now, lingering fears persist that the bubble could burst, as has happened with many other heavily promoted technological advancements.
At the same time, the public’s broader uncertainty and ambivalence about AI are reflected in the remainder of the Pew study’s respondents: 32% of teachers surveyed answered that AI provides “an equal mix of benefit and harm,” while 35% (the largest group) replied that they weren’t sure about AI’s effects (6% responded that AI provides “more benefit than harm”).
Recognizing this uncertainty and the apparent staying power of AI, organizations such as the Association for Supervision and Curriculum Development have designed workshops and seminars to acclimate educators to the realities, and even the possible benefits, of AI. These workshops cover topics like designing more intentional assignments that are “resistant” to AI: for example, rather than asking students to write a general summary of a topic (which a chatbot could easily provide), instructors can ask students to draw on examples from in-class discussions of the topic and evaluate those examples’ effectiveness (encouraging a personalization that a chatbot would struggle to replicate).
In other words, less generic summarizing and more pointed assessment: a philosophy that should promote student understanding overall, even aside from concerns about AI usage. As some education reformers have argued, if an assignment can be completed satisfactorily by an AI, what does that say about the assignment’s effectiveness as a means of assessing comprehension?
Meanwhile, some schools have embraced AI more wholeheartedly, albeit in varied ways. Some teachers have reported using chatbots to develop assignment prompts and reduce their workloads, while others use the software to test students’ peer-review and critical-thinking skills by asking them to critique AI-written materials. As impressive as AI-generated text can be, it often features logical non sequiturs and may even contain fabricated information. Many teachers see this as an opportunity to teach students to be skeptical not only of AI’s capabilities but also of rhetorical and analytic writing in general; after all, a “human-produced” article may be just as inaccurate or illogical as one generated by a machine, and the ability to determine this for oneself remains invaluable as misinformation becomes more prevalent.
For now, AI usage among middle and high school students remains limited to a minority. A separate Pew study of US students aged 13-17 found that two-thirds of teens surveyed had heard of ChatGPT, and only 19% of those respondents reported having used it for schoolwork. Nevertheless, students’ opinions of the software tend to be more favorable than teachers’: among students who had heard of ChatGPT, 69% felt it’s acceptable to use it to research new topics, 39% to solve math problems, and 20% to write essays. These responses suggest AI usage among students may continue to increase, leaving teachers to decide whether to fight the rising tide or learn to ride the current.