If you know how ChatGPT works, you won't be surprised to learn that AI detection filters consider it highly likely that the chatbot had a large hand in writing. Nor will you be surprised that ChatGPT is biased towards the latest intellectual ideas.
AI agents are prediction engines using the web as their memory. They do no more than predict the next word. When you ask ChatGPT a question, it parses it into words and their sequence, returning answers that match those sequences in reverse. It might sound like a simple trick, and it is, yet the secret sauce is the size of the database the AIs use to perform it.
Of the data used to train ChatGPT, 60 per cent was a hotchpotch of information culled from websites, blogs and social media. Another 20 per cent was content shared on Reddit and rated relatively highly by its users. The rest was books typically found in the public domain (mostly older and general-purpose), with a bit of Wikipedia (3 per cent) mixed in for good measure.
For each word, AIs store the probability that any other word will follow it. The quality and value of these predictions depend very much on how often, and in how many contexts, the software encounters any two (or more) words in proximity, how long a sentence runs, and which sentence might follow another. Put together, these predictions favour the most influential texts of a given culture, the texts that shaped generations upon generations of English-language teachers and the students they educated.
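The mechanism described above can be sketched as a toy lookup table. The corpus below is an invented example, and real chatbots learn these statistics with neural networks rather than literal tables, but the principle of storing, for each word, the probability of each word that might follow it is the same:

```python
# Minimal sketch of a bigram model: for each word, record the
# probability of each word that follows it in the corpus.
# The corpus here is a toy assumption, not actual training data.
from collections import Counter, defaultdict

corpus = "in the beginning there was light and there was darkness".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Return {next_word: probability} for a given word."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("there"))  # {'was': 1.0}
print(next_word_probs("was"))    # {'light': 0.5, 'darkness': 0.5}
```

The more often a pairing occurs in the training texts, the higher its probability, which is why the most widely reproduced texts of a culture dominate the model's predictions.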
Fed and raised on the incantations of Shakespeare and the literature that grew out of the King James Bible, this traditional English thought pattern could not but create AIs that can regenerate the Bible or the Constitution as if they were common knowledge. Yet when asked questions about everyday issues, AI agents will be more likely to use a liberal-secular tone because this perspective dominates web conversations.
Frequently, AI content mixes heavenly and earthly perspectives. For example, when you tempt ChatGPT with the prompt "Continue the story: In the beginning there was…" it will promptly deliver a Genesis-style Feynman physics lecture: "In the beginning, there was a profound stillness that seemed to stretch for eternity. Within this void, a single point of unimaginable density and energy existed. This singularity held within it the potential for all that would come to be. Then, in an instant that defied the very concept of time, the singularity erupted in a cataclysmic explosion known as the Big Bang." (Try it, although your answer might vary.)
The overlap of old and new in ChatGPT-generated texts is not the cause but the result of the ongoing cultural strife of the American mind with itself. This tension should not lead to finger-pointing. But we do need a healthy conversation about the origins and uses of ChatGPT and its siblings, such as Google's Bard, Facebook's LLAMA or Anthropic's Claude.
First, is such training, jumping from green energy and trans rights to sermons and pro-life arguments in one click, appropriate for a tool used in the academy? Suppose we raised the AI models/agents on a diet of 80 per cent books and 20 per cent information from curated encyclopedias, including Britannica. In that case, they would be less focused on the vagaries of the present and more concerned with the age-old dilemmas and gained certainties of academic knowledge.
Creating AI agents that cater to academic needs could be an expensive proposition, of course. However, given the enormous resources of the leading US and European universities, this could be a stimulating problem to be solved by a large consortium of higher education institutions, such as the American Association of Universities (AAU) or the European University Association. ChatGPT 4 cost "merely" . The AAU universities, a group of 69 large state and private universities, received .
Second, ChatGPT was created with a "just in case" mentality. It was meant to answer all questions for all purposes. This leads to tentative, "he said, she said" answers, even to questions whose answers we should be sure of, such as whether vaccines save lives or whether Communism is as genocidal as Nazism. When trained on specialised information, it should express more confidence about matters that truly matter.
Third, ChatGPT speaks like a parrot because its delivery is not automatically adjusted. More research and engineering are needed to calibrate the tool to each request's real-life intentions and consequences. In academic learning, these situations should be the pre- and post-stages of the research process: finding arguments and packaging them for public consumption.
The in-between, the moment of discovery, should be reimagined in future pedagogies to scaffold around rather than fall back on AI agents. Assignments must connect to specific competencies demonstrated across written, multimedia and oral presentations. A return of the in-class written or oral exams (horribile dictu) should not be out of the question.
In their current forms, ChatGPT and its siblings are like those three-year-olds who can recite entire stories read to them only once. But turning a three-year-old into a learned person takes 20 years of strenuous, structured education. It is time to stop reading AI agents stories and send them to a real school.
is associate dean of research and graduate education at Purdue University's .