In Defense of Critical Thinking in the Age of AI

Posted by CSTA Responsible AI Fellow on April 24, 2026
Categories: Artificial Intelligence · CSTA Fellowships · Opinion

I want to start with a confession: I use AI tools. Every day. In my classroom, in my lesson planning, in my professional work. So when I say we need to defend critical thinking against AI, I’m not talking about banning the tools. I’m talking about something more urgent — and more personal.

Responsible AI use isn’t about controlling the tool. It’s about protecting the thinker.

That distinction matters. Because right now, in schools across the country, the loudest AI conversation is about access — how to use it, how to block it, how to detect it. We’re spending enormous energy on the tool, and not nearly enough on the human on the other side of the prompt.

What the Research Is Telling Us

The signals have been building for a while. A 2025 study from Microsoft Research and Carnegie Mellon University found that overreliance on AI tools can diminish critical engagement and independent problem-solving. Those who most trusted the accuracy of AI assistants thought less critically about those tools’ conclusions — and without routinely keeping that thought process active, cognitive abilities can deteriorate over time.

MIT Media Lab’s preliminary 2025 research, Your Brain on ChatGPT, used EEG measurements to track brain activity during essay writing and found that students who relied on ChatGPT showed significantly weaker neural connectivity and lower memory recall than those who wrote independently. Over four months, ChatGPT users consistently underperformed at neural, linguistic, and behavioral levels. While the findings are pre-peer review and should be treated as preliminary, they raise important questions about what happens to learning when the struggle is outsourced.

National survey data summarized by Education Week shows that teachers are genuinely worried. A College Board survey found that 87% of principals say AI could make it less likely that students develop critical thinking skills. A RAND survey found that 67% of students themselves believe AI harms their critical thinking. Teachers see students using AI in ways that skip the struggle — the productive mess of real inquiry — in favor of fast outputs they didn’t wrestle with and can’t defend.

The tools aren’t the problem. The habit they can create is.

A CS Problem, Not Just an Ethics Problem

Here’s something I think the CS education community is uniquely positioned to say: this is a computer science problem before it’s an ethics problem.

Tradeoffs are native to our discipline. Every time we teach students to evaluate an algorithm, to consider what gets optimized and what gets left behind, to ask who benefits and who doesn’t — we’re already doing this work. The CSTA AI4K12 priorities name it directly. Societal impact, ethical evaluation, and the examination of tradeoffs tied to human agency — privacy, autonomy, intellectual property, creativity — are woven across every grade band from K–2 through 12th grade. We don’t need to borrow this content from environmental science or philosophy. It belongs to us.

That framing matters for teachers who feel uncertain about entering the AI ethics conversation. You already have permission. The framework already invites you in.

What’s Actually at Stake

Let me be direct about what we risk losing if we don’t get intentional about this.

When a student types a question and accepts the first answer, they’ve skipped problem formation — arguably the most important intellectual move in any discipline. When a student submits AI-generated prose without wrestling with it, they’ve lost the revision cycle where thinking actually deepens. When we let AI perform the reflection for students instead of generating questions for students to answer themselves, we’ve handed over the very thing we’re trying to build.

Washington State’s OSPI (Office of Superintendent of Public Instruction) put it plainly in their classroom guidance: AI should always start with human inquiry and always end with human reflection, human insight, and human empowerment. Their H→AI→H framework isn’t just a policy position — it’s a pedagogical commitment. The human goes in first. The human comes out last. AI is the middle, not the spine.

That’s the right instinct. The question is how to make it concrete in classrooms where the pressure to use AI is constant, and the pressure to protect thinking is easy to defer.

Prompting as Pedagogy

One answer I’ve been developing — with middle school students and with teachers in professional development — is reframing prompt engineering as a critical thinking practice, not a productivity hack.

When students learn to construct a strong prompt using intentional components — a clear task, a chosen format, a deliberate voice, and specific context — they aren’t just talking to a machine more effectively. They are doing something deeper: identifying their cognitive move, naming what they need, and holding the wheel while AI assists. The four-part structure isn’t a trick. It’s a discipline. And it mirrors the same habits of mind that the liberal arts have always valued: problem formation, communication, and genuine inquiry.
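For CS classrooms, the four-part structure can even be made executable. Here is a minimal sketch — a hypothetical helper I’m inventing for illustration, not any particular tool’s API — that refuses to build a prompt until the student has named all four components:

```python
def build_prompt(task, fmt, voice, context):
    """Assemble a four-part prompt from task, format, voice, and context.

    Hypothetical classroom helper: requiring every slot to be filled in
    makes the student's thinking visible before anything is sent to AI.
    """
    parts = [("task", task), ("format", fmt), ("voice", voice), ("context", context)]
    missing = [name for name, value in parts if not value.strip()]
    if missing:
        # The student, not the machine, has to supply the missing thinking.
        raise ValueError("Prompt is missing: " + ", ".join(missing))
    return (
        f"Task: {task}\n"
        f"Format: {fmt}\n"
        f"Voice: {voice}\n"
        f"Context: {context}"
    )

prompt = build_prompt(
    task="Explain how bubble sort works",
    fmt="a numbered list of steps",
    voice="a patient tutor for 7th graders",
    context="students have seen loops but not nested loops yet",
)
print(prompt)
```

The point of the sketch isn’t the code itself — it’s that the structure forces problem formation to happen before the first prompt is ever sent.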

The goal is simple: better questions equal better results — and better thinking happens before the first prompt is ever sent.

What keeps AI in its proper place is structure. What keeps the student in the driver’s seat is intention. And what makes the difference in a classroom is whether we treat prompting as a shortcut or a skill.

What Teachers Can Do Right Now

You don’t have to redesign everything. A few concrete moves go a long way.

First, require students to document their AI use — not just what it produced, but how they directed it and what they changed. Transparency reinforces ownership. Second, shift what you grade. If AI can generate polished text, then the polished text isn’t the evidence of learning anymore. Grade the problem formation, the iteration, the ability to critique and evaluate the output. Third, build in reflection that AI doesn’t do for them. Ask students to trace their thinking, identify what surprised them, reconcile what the AI got wrong. That’s metacognition. That’s the single best buffer against over-reliance.

The CSTA AI4K12 framework gives us the progression. The research gives us the urgency. The classroom gives us the opportunity.

The Defense

I called this piece “In Defense of Critical Thinking” because I think it needs defending — not from AI, but from the well-intentioned pressure to adopt AI so quickly and so completely that we forget what we’re trying to build in students in the first place.

We’re not trying to build efficient prompt-senders. We’re trying to build people who can think, evaluate, create, and push back. AI is a powerful tool in that project. But it only enhances learning if the human stays at the center — asking the first question, making the last decision, and owning everything in between.

That’s not a limitation on AI. That’s a commitment to the people it’s supposed to serve.

About the Author

Tim Swick is a K–2 CS teacher and Responsible AI Fellow working at the intersection of CS education, AI literacy, and professional development.