ChatGPT and the erosion of human uniqueness

03 Jul 2024
technology
Soraj Hongladarom
Professor of Philosophy, International Buddhist Studies College, Mahachulalongkornrajavidyalaya University
If all our cognitive tasks are outsourced to large language models such as ChatGPT, says Thai academic Soraj Hongladarom, mankind risks losing its uniqueness as a species.
ChatGPT logo is seen in this illustration taken on 28 September 2023. (Dado Ruvic/Illustration/Reuters)

Large language models (LLMs) are perhaps the most talked about and most visible manifestation of artificial intelligence (AI) nowadays. Despite being a recent innovation, this technology has captivated public interest in a manner unparalleled by other AI applications.

This is largely because LLMs can relate to ordinary people in a very intimate way: through the use of language, they simulate human-like conversation, creating a sense of interaction on an equal footing with humans. Furthermore, LLMs are capable of performing tasks that match, and sometimes even exceed, the abilities of most human beings.

For example, discussion of ChatGPT, the most popular LLM, among Thai people is dominated by amazement at what it can do. It is as if ChatGPT (the talk is almost always about ChatGPT, and not about other LLMs) were a godsend that will improve their lives in several ways: suddenly available, ripe for the picking, ready to make life immediately better.
ChatGPT painted as a miracle tool

This has led to much hype about what ChatGPT can do. Thailand’s leading university, Chulalongkorn, has unveiled plans to establish the “Chulalongkorn AI Institute” to keep pace with the growing technological trends. The university has also committed to integrating ChatGPT into its educational programmes so that students can utilise the power of the LLM in their studies and research.

Furthermore, the country’s social media platforms are currently flooded with advertisements promoting training sessions designed to educate the public on leveraging ChatGPT to gain a competitive edge.

For example, an ad claims that “AI” could function as a research assistant and help graduate students complete their dissertations effectively. Another ad from the same organisation claims that using AI can enhance research efficiency, particularly in writing theses and dissertations, by up to tenfold.

University students walk around Siam Square in Bangkok, Thailand, on 8 May 2024. (Lillian Suwanrumpha/AFP)

What do these ads mean? Harnessing the potential of new technological advancements is a longstanding practice, and the organisers of these training sessions understandably seek to enhance Thai learners’ proficiency with ChatGPT. However, the ads also show how strongly Thai people (and potentially others across Asia) believe in the power of the new technology. It is noteworthy that this trust often lacks the critical scrutiny needed to avoid being misled by hype and to develop a precise understanding of the technology’s capabilities.

Lack of true AI literacy

This underscores the importance of AI literacy. Ironically, while these training sessions aim to enhance AI literacy, they appear to overlook the limitations of ChatGPT, which restricts how deep that literacy can go. Furthermore, seminars in Thailand that specifically address the limitations of ChatGPT are relatively rare.
This is a cause for concern. These sessions and their advertisements reflect an underlying attitude that technology is a cure-all, a panacea. However, even the most advanced and powerful technologies we have created are not panaceas. People must understand this; otherwise, they risk being misled and facing disappointment. Such experiences could lead to a loss of trust in technology, which does not bode well for the role that technology plays, and must play, in society.

Human domain: sensing the world first-hand

So what are the limitations of an LLM such as ChatGPT?

As fluent and seemingly omniscient as they are, LLMs are limited by the fact that they have no means of sensing the world by themselves. They are as blind and deaf as the desktop computer I am using right now. My computer does have a camera, and it can certainly emit sound through its loudspeaker, but there is a wide gap between what my computer “sees” and what it would take to understand and make sense of that visual data. ChatGPT, as powerful as it is, is no different.

The “knowledge” that an LLM possesses is derived from its extensive database of words. But that is not the same as seeing the world and understanding it as we do effortlessly. An LLM can generate associations between text and images, and even create pictures from text, but that too is different from a human being’s understanding of images and our ability to describe them in words.
AI (Artificial Intelligence) letters and a robot hand are placed on a computer motherboard in this illustration taken on 23 June 2023. (Dado Ruvic/Illustration/Reuters)

Thus, the limitation lies in the misconception that ChatGPT knows everything, leading users to believe it can perceive and comprehend the world as we humans do. But that is not the case.

Understanding a sentence generated by ChatGPT is therefore different from understanding the very same sentence uttered by a real human being. The intimacy, the closeness that we can sense in the latter is lacking in the former.

Not realising the limitations of ChatGPT can be an ethical issue, especially when one is misled by the hype or believes that ChatGPT is capable of autonomous operation without human oversight. This mindset can leave people vulnerable to further misinformation, as they rely on others, including ChatGPT, to think for them.

Keeping at arm’s length

Nonetheless, does this imply that ChatGPT is of no use? The fact that it is being used intensively all around the world provides a clear answer to this question. However, optimising its use requires that we change our attitudes toward it.

Instead of regarding it as something we can always rely on, we should keep it at arm’s length. It can help us correct our writing, translate foreign text, or summarise long passages, but we must be the final arbiter. We cannot outsource all our cognitive tasks to it; otherwise, we will lose our uniqueness as a species. Then the age of machine takeover may arrive sooner than anticipated.
