- Deloitte recently had to repay $291,000 to the Australian government after it was caught using ChatGPT to write a compliance report containing fabricated quotes and citations. The AI wasn’t “wrong” – it simply fulfilled a request. The mistake belongs to the humans who stopped thinking and delegated their intellect to a machine.
- The problem lies not in the technology but in how humans use it. Much like Paulo Freire’s concept of “banking education,” where learners merely “deposit questions – withdraw answers,” Deloitte’s experts deposited prompts and withdrew garbage wrapped in professional formatting.
- Hypocrisy in education: A 2025 Anthropic report found that 48.9% of professors automate grading with AI, even as they penalize students for doing the same. Teachers use AI to prepare lesson plans; students use it to write essays – yet only the latter counts as cheating. The result: students learn to hide their AI use rather than to use AI responsibly.
- Cognitive debt: MIT research indicates that frequent LLM users show weaker neural connectivity, forget content they have just written, and feel less intellectual ownership of their work. Their brains learn that thinking isn’t necessary because “AI has already done it.” After four months, this group showed reduced performance in language, thinking, and behavior.
- Consequences: Students cannot explain their choices, react defensively when asked to justify them, and lose their connection to the learning process. On the surface the writing is polished, but inside it is empty.
- Dialogic prompting solution: When students view AI as a “dialogue partner,” everything changes. They ask critical questions, self-verify with personal evidence, debate with each other, and recognize their own limitations.
- Example: Instead of assigning “Analyze the symbolism in The Great Gatsby,” assign: “Have AI analyze it first, then critique and correct that analysis. What assumptions does the AI make? Where might it be wrong? How does its reading relate to your own experience?”
- This process takes more time but trains higher-order thinking skills: questioning, verifying, synthesizing, and interpreting.
- The role of teachers: It’s impossible to preach AI ethics while secretly automating grading. What’s needed is transparency – show students how teachers use AI, why they reject certain outputs, and how they add the human element. The goal is not to hide the technology, but to model critical thinking while using it.
📌 Deloitte’s $291,000 mistake wasn’t an accident but a wake-up call: when humans stop thinking and let AI think for them, we create a generation of professionals who produce “data junk.” Teaching critical thinking today is no longer about “fighting AI,” but about teaching dialogue, verification, and decision-making with AI – so that the next generation still knows how to think, instead of merely knowing how to type commands.
