
Does Artificial Intelligence Threaten Critical Thinking?


I'm a Software Engineer and Technical Author with over 20 years of experience in software design and implementation. Throughout my career, I've had the opportunity to use a variety of programming languages and technologies on many different projects. In the last few years, I've been focusing on simplifying the developer experience with Identity and related topics as a Developer Advocate at Auth0 by Okta.

A phrase that resonates more and more frequently in public debate is: "Artificial intelligence will atrophy our minds, render critical thinking obsolete, and transform us into passive consumers of prepackaged content." It's a dystopian future where we will stop asking questions because we are overwhelmed by immediate and seemingly perfect answers.

But what if this narrative, as seductive as it is in its apocalyptic simplicity, is wrong? What if AI could paradoxically turn out to be not the death of critical thinking but rather its unexpected training ground?

What Is "Critical Thinking"?

Perhaps before celebrating the funeral of something, we should ask what it is exactly. Critical thinking is not simply being skeptical or saying "no" to everything. Rather, it is an active and disciplined process of conceptualizing, analyzing, synthesizing, and evaluating information. In simple terms, critical thinking is the ability to not accept every piece of information we receive at face value, but rather to examine all its facets first. In a sense, it is the opposite of blind faith. It's as if someone hands us a glass of water, and before we drink it, we meticulously analyze the glass, the water, its temperature, and even the hand that offers it.

The great fear is that with oracles like ChatGPT, Gemini, and friends at our disposal, we will stop bothering to verify information. Why analyze sources when the answer is already packaged and ready? However, a question naturally arises: Before AI, were we all really such tireless detectives of the truth?

The Weight of the Source

Let's be honest. How many times have we accepted a thesis not because it is valid, but because the source is authoritative? This is an ancient and powerful psychological mechanism—a mental shortcut that saves us effort. Sometimes it's a necessity because we lack the skills to validate the thesis, so we trust the author. However, sometimes we suspend our critical judgment and rely on the comfort of authority.

In the Middle Ages, knowledge was dominated by ipse dixit ("he himself said it"). If Aristotle said something, it was law. There was no need to verify it. It was enough to quote the philosopher to end any dispute. If this seems like an attitude relegated to a dark and distant era, consider the present. Turn on the TV, and a famous actor in a white coat will explain why a certain toothpaste is the best. What does he know about oral hygiene? Probably no more than we do. Yet his famous face and media "authority" are enough to convince millions of people. It's ipse dixit in the form of a commercial.

This doesn't just happen in advertising, though. World-renowned scientists who speak on topics outside their area of expertise are listened to with the same reverence as sacred texts. Not to mention politicians, especially in this historical period. The author, or "who," often matters more than the "what." Human beings tend to trust labels by nature. It's a cognitive bias and a judgment heuristic. If a book is by a Nobel laureate, we assume it's brilliant. Conversely, if an article is by an unknown author, we view it with suspicion.

AI and the Death of the Author

In his short story "Pierre Menard, Author of the Quixote", Argentine writer Jorge Luis Borges imagined a man who rewrote Cervantes' masterpiece, word for word. The text was identical, but the author was different. For Borges, that changed everything. Reading Don Quixote and thinking it was written in the 20th century by a French intellectual gives the work completely new meaning.

AI takes this thought experiment to its extreme conclusion. As David J. Gunkel pointed out in his article "AI Signals the Death of the Author," content generated by AI does not have an author in the traditional sense. There is no famous name on the cover and no biography to confirm or deny our expectations.

When faced with AI-generated content, it's just us and the content.

Without the convenient support of authority, what do we have left? Only our intellect. We can no longer say, "I accept it because X wrote it." We must engage with the substance. Does the text hold up? Is the argument sound? Are the cited sources reliable? Does the reasoning make sense? At least, this is what one would expect.

By stripping the content of its author, AI should force us to do what we should have always done: think for ourselves. It should compel us to become active readers instead of passive believers. We cannot afford blind faith because there is no one to place it in. The oracle is anonymous and faceless. Its words are not sacred; they are only a starting point.

AI Is Just a Tool

Of course, AI can be used to flood the world with credible, well-written disinformation. It can make the mind lazy by encouraging the shortest path: copy and paste without understanding. However, this is not a flaw of the tool, but rather a choice of the user.

A knife can be used to cut bread or to kill someone. Fire can warm and cook food or reduce a forest to ashes. Every technology is an extension of our intentions. Artificial Intelligence is perhaps one of the most powerful tools we have ever created. As such, it amplifies both our wisdom and our foolishness, our curiosity and our laziness.

Rather than fearing that AI will destroy our critical thinking, we should ask ourselves if we are willing to use it to enhance it. In a world where anyone can generate a plausible text on any subject, the ability to discern, verify, and analyze is no longer just an academic skill; it's a true superpower.

Maybe the real threat has never been the machine but our constant temptation to stop thinking. Maybe the problem is that we haven't made enough effort to educate people on critical thinking. As a society, we have always favored the opposite, creating myths and encouraging conformity of thought. Maybe it's precisely the machine that will finally force us to take critical thinking seriously by depriving us of our beloved labels and reassuring ipse dixit.

At least, that is what I hope for.