Remember when calculators were the big debate in school? Some teachers swore they would ruin our ability to do math in our heads. Then came computers—critics worried they’d turn us into passive thinkers. And now, AI is here, and we’re asking the same question: Is this making us smarter, or are we just outsourcing our intelligence?
It’s a valid concern. We live in an era where technology can complete our sentences, solve complex equations, and even make creative decisions. But does that mean we’re evolving into higher-level thinkers, or are we slowly losing our cognitive edge?
The Case for Technology Making Us Dumber
Nicholas Carr, in his famous 2008 essay “Is Google Making Us Stupid?”, argued that the internet is rewiring our brains, shortening our attention spans, and making deep thinking harder. And it’s not just Google—smart technology in general may be encouraging us to skim instead of reflect, react instead of reason.
A 2025 study by researchers at Microsoft and Carnegie Mellon University surveyed 319 knowledge workers and found something fascinating: the more confident people were in AI tools, the less effort they put into critical thinking. It’s as if AI became the smartest kid in class, so we stopped trying to solve the problem ourselves. The study also found that individuals with more confidence in their own abilities tended to use AI more critically, engaging with its output rather than simply accepting it. This suggests that AI doesn’t inherently make us dumber—it depends on how we interact with it.
Cognitive Offloading: Are We Storing Less in Our Minds?
One concern raised about AI is cognitive offloading—the idea that when we rely on external tools to store and process information, we weaken our own mental abilities. If AI constantly provides answers for us, do we lose the ability to think deeply, analyze, and remember key information ourselves? This is the same worry that once surrounded writing, yet humans adapted. The key is to strike a balance between using AI as an aid and ensuring we still engage in critical thought.
When Calculators First Shocked the World
Believe it or not, the first all-electronic desktop calculator, the ANITA Mark VIII, was advertised in 1961. Before that, mechanical adding machines ruled the office. When electronic calculators hit the market, educators and professionals feared they’d make people lazy, just like in today’s AI debates. Some schools even banned them, worried that students would forget how to do math on their own.
Yet, over time, calculators became essential tools, not crutches. They didn’t replace mathematical thinking—they freed up mental space for more complex problem-solving. Sound familiar? It’s the same conversation we’re having about AI today.
We’ve come a long way since 1975, when a newspaper in Midland, Texas, ran an advertisement for a Sharp calculator billed as a personal pocket computer wizard with the broad mathematical abilities of a slide rule. At the time, this was revolutionary, and while some skeptics feared the decline of mental arithmetic, others embraced the device as a leap forward in efficiency and accuracy.
Ease vs. Excellence: Are We Choosing Convenience Over Mastery?
AI makes life easier, but does it make us better? The temptation to prioritize ease over excellence is real—why struggle with complex tasks when AI can do them for us? But mastery comes through effort. If we rely on AI to generate ideas, compose content, or make decisions without questioning its outputs, we risk losing the very skills that define expertise. The best approach is to use AI as a collaborator rather than a replacement for our own effort and knowledge.
But Hold On—Didn’t Writing Once Have the Same Criticism?
This isn’t the first time humans have worried about technology dulling our brains. Socrates famously opposed writing, as recorded in Plato’s Phaedrus, because he believed it would weaken our memory. (Ironically, we only know this because someone wrote it down.) The same argument was later made about calculators in math class and about computers in general.
Yet, here we are—more advanced than ever. Writing didn’t destroy memory; it expanded our ability to share and build knowledge. Calculators didn’t kill math; they let us focus on complex problem-solving instead of basic arithmetic.
The Functionalist Temptation: Should AI Think for Us?
One of the biggest risks AI presents is the functionalist temptation—the idea that as long as AI produces functional results, we don’t need to understand the underlying processes. This can be dangerous: blindly trusting AI-generated answers without critically evaluating them can lead to errors, biases, or misinformation. Instead of treating AI as an unquestionable authority, we should use it as a tool for deeper exploration and validation of ideas.
The AI Balance: Tool or Crutch?
The problem isn’t AI itself; it’s how we use it. If we treat it as a crutch—leaning on it for every decision and never questioning its output—it absolutely has the potential to make us lazier thinkers. But if we use AI as a tool—to challenge our ideas, enhance our creativity, and push us to think differently—then it can be a powerful force for intellectual growth.
So, what’s the verdict? Are we getting smarter, or just better at delegating our thinking? The answer probably depends on how much we’re still willing to engage, question, and, well… think for ourselves.
What do you think? Is AI sharpening our minds or making us passive?