It is little comfort to educators that chatbots are not really intelligent. Even though the likes of ChatGPT cannot think in any meaningful sense, their ability to select the next words in a sequence allows them to spoof just about any kind of intellectual labour with which academics might task their students: not only essays but also outlining, summarising, reviewing literature, developing a presentation or coming up with a topic or lists of sources or questions.
As faculty and administrators scramble to make sense of these tools and their implications for teaching, many students have taken the policy vacuum as their cue to do precisely whatever. And some scholars have given their blessing, reasoning that bot-authored text is impossible to detect and that students will need to learn to use this technology in the workplace anyway.
But chatbots are fundamentally anti-scholarly. The connections between their statements and knowledge sources are often completely obscured. They're frequently inaccurate, in some fields achieving only about 20 per cent "correctness". And their results aren't repeatable: different runs produce different outputs.
Don't even get me started on their ethical and political degeneracy. They're highly profitable commercial services completely dependent on untold quantities of data gleaned from the web without permission, notification or disclosure, including much of scholars' own hard work (lawsuits have been filed). If you require students to use them, you require them to submit data to a private company. And even if you don't, students may be giving them your course materials, such as lecture transcripts.
Chatbots also portend equity issues, as some students might not be able to afford the better, more expensive versions. And some students are angry because they sense a race to the bottom: if they don't use the bots, they think, they will lose. Tension is stoked by conflicting messaging from different instructors. An ethical morass results.
Perhaps there are valid limited uses of the bots, and, yes, they are becoming important in some kinds of professional work. But we can train students on these with mini courses. We don't need a free-for-all.
My approach, developed while teaching five lecture sections since last May (about 500 students combined), is, first, to talk to the students about the importance of academic integrity for both the institution and themselves. Most students never hear these arguments, but they have an intuitive sense that widespread cheating could render their diplomas meaningless, since the purpose of assignments is to develop intellectual skills and knowledge: the process, not the product, is the point. I liken a student using ChatGPT to an athlete hiring someone to do their workout for them.
Next, I review some of the epistemic, ethical and political issues with these services. I emphasise that you're impacting the world by using them (they consume resources and produce emissions) and are possibly distorting or limiting your own understanding via biases created by an unaccountable for-profit company.
Then I review my stated course policy: no bot use allowed whatsoever. This might change in the future, I say, but we don't know enough about these services yet. This precautionary approach reinforces the use of that concept in environmental studies, my field. I tell students that I want to make them my academic integrity collaborators, upholding the quality of their own education. I reinforce that most students don't cheat.
Now the practicalities. My teaching assistants and I use manual detection and automated machine detection (in trial), with full awareness of the chance of false positives. When we suspect chatbot text, I ask the student to provide a step-by-step description of the process they used to produce the submission. Depending on the case, I say that we will be lenient or give them full amnesty if they did use a chatbot and admit it. The likelihood of false positives means you usually cannot depend on machine or manual detection alone to apply sanctions.
During my summer courses, all remote, we manually detected about 20 suspected cases out of more than 2,000 submissions. In all but one or two cases, students readily admitted chatbot use, accepted our offer to redo the assignment and expressed both remorse and appreciation for the second chance. The ensuing discussions have been occasions for learning and for forging better connections with students. It has been extra work, but so far it hasn't been onerous.
Since I first began explicitly discussing integrity and how we detect and handle cases at the beginning of this term, we've seen no manually detected cases and only a handful of machine-detected cases that looked like false positives, out of some 200 submissions. Sure, it's possible that we're missing cases or that students are using the bots for steps other than composing text. But I believe our process is doing as much as possible to minimise bot cheating while enhancing student appreciation of and involvement in their educations.
We can use this approach in combination with some defensive measures, such as doing more assignments in the classroom, in-person presentations and so on. But we don't have to take extreme measures, such as abolishing essays entirely. Nor do we need to declare that there's nothing we can do and surrender to the bots. There is a viable middle ground.
Kenneth Worthy is a lecturer, chancellor's public scholar and creative discovery fellow at the University of California, Berkeley, and adjunct associate professor at Saint Mary's College of California.