ChatGPT and the Question of Authentic Knowledge
A.G. Elrod is a lecturer of English at HZ University of Applied Sciences in the Netherlands.
“For the Eye altering alters all”
– William Blake, "The Mental Traveler"
Figure 1: DALL-E 2 generative AI image: “Artificial Intelligence in the style of William Blake”
Emerging Questions, Evolving Responsibilities
As an educator, I have been captivated by the transformative influence of the latest generative AI models. The meteoric rise of artificial intelligence has created a sense of urgency among many colleagues, who fear they might be rendered obsolete. Fundamental questions about the very essence of our profession have become a common refrain in our conversations. Some champion the new technology; others portend dystopian futures. Yet, irrespective of one's stance, there is a collective recognition: we are on the cusp of a paradigm shift, and the old ways are receding.
At this juncture, it's imperative to delve into the deeper issues shaping our future as educators and, consequently, the legacy we pass on to our students. We are confronting an entirely new challenge—one that I argue is more epistemological than pedagogical.
In my many conversations with peers on the subject of generative AI—inevitably focusing on platforms like OpenAI’s ChatGPT—two main sentiments emerge. The first, demonstrating concern over academic honesty, manifests as a fear that students will employ these tools to bypass the necessary rigors of scholarship, making it challenging to distinguish original work from AI-assisted “counterfeit” knowledge. On the other hand, there's a tendency to liken generative AI tools to calculators—a tool that once revolutionized math teaching and learning. By this analogy, as calculators transformed mathematics education, so too will AI redefine the way we teach language, writing, and critical thinking skills.
In this essay, which addresses these intertwined perceptions, I contend that the risk isn't merely to students' development or honesty. Instead, it's something more ominous. Moreover, I challenge the casual comparison of AI with calculators. To equate today's AI tools with calculators is akin to comparing a primitive wheel to the sophisticated Hall-effect ion thrusters destined for Mars.
A New Reality
To begin to understand the implications of these emerging technologies, we do well to address epistemology—the theory of knowledge. Epistemologists grapple with questions like: How do we know something? How do we differentiate fact from fiction? How reliable are our processes of knowledge acquisition, and where do they falter? These questions do not have easy answers, but it is crucial to strive for a benchmark—a standard for evaluating reality.
Much has been written on this topic. For our purposes, let it suffice to say that educators are in the business of knowledge. We are dedicated to curating, transmitting, evaluating, and creating knowledge. Thus, education and epistemology are inextricably linked.
Even those familiar with the study of epistemology may struggle to list the various types of knowledge, but one term is generally recognized: "empirical." As humans, our survival has programmed us to trust our senses. It is, therefore, difficult for us to imagine a world where the evidence of our senses is inconsistent with reality. How would we navigate such a world? How would we justify knowledge or learn from experience?
When we discuss the risks of generative AI models that can produce text, images, or videos indistinguishable from reality, we are contemplating a risk deeper than mere dishonesty. Deepfake videos can fabricate incriminating evidence of a person acting out of character. ChatGPT, given the right prompts, can generate an exceptional essay in seconds that passes every plagiarism check—a clear risk to academic honesty.
Yet, to suggest that the risk is merely legal, reputational, or moral is to miss the point. The real risk is epistemic. It lies not in the atrophying of our students' honesty or abilities but in the numbing of their senses to empirical reality itself. As educators, we must discuss this risk to our already tenuous relationship with knowledge and reality.
When we witness a deepfake, like the amusing video of Arnold Schwarzenegger's image and voice superimposed on the "Draw me like one of your French girls" scene from the 1997 film “Titanic”, the initial laughter soon gives way to concern. What happens when such technology becomes commonplace, and our social media feeds are inundated with hundreds of new counterfeit representations of "reality" daily?
When a student, reporter, author, or educator uses ChatGPT uncritically to generate content, the line between authentic work and counterfeit knowledge blurs. As such output becomes common, our trust in what we see and read is diminished, challenging our faith in empirical evidence. When perception is altered, reality is altered. William Blake's verses in "The Mental Traveler" capture this unsettling shift poignantly:
The Guests are scatterd thro' the land
For the Eye altering alters all
The Senses roll themselves in fear
And the flat Earth becomes a Ball
The stars sun moon all shrink away
A desart vast without a bound
And nothing left to eat or drink
And a dark desart all around (lines 61–68)
In this age, where our senses can be so easily deceived and the ground of reality seems to shift, educators have a pivotal role. We now contend not merely with how we teach but with what it means to know. As stewards of knowledge, it is up to us to identify these risks and lead the conversation, ensuring that amidst these technological advancements we maintain a sincere connection to truth and knowledge, steering our course with great care and foresight, even through this altering AI.