There is something quietly radical about what artificial intelligence has become in everyday life. It helps craft the wedding toast you were dreading, sorts through the complexity of a tax return, and — more intimately — offers a kind of presence to people processing grief, loneliness, or trauma. Unlike any technology that came before it, AI is not merely a tool we use; it is increasingly a participant in how we think.
That distinction matters more than it might first appear. A notebook stores memory. A calculator handles arithmetic. A map replaces the need to memorize a route. These tools externalize specific, discrete cognitive tasks, and we use them without surrendering much. But AI widens that aperture dramatically — now, the processes of summarizing information, generating ideas, making decisions, and analyzing arguments can all be handed off. "It's starting to creep into the things we thought were cognitively ours," says Evan Risko, a professor at the University of Waterloo who studies cognitive offloading — the practice of taking external action to ease mental effort.
The technology's creators describe their systems as "thought partners" and "collaborators," language that evokes intellectual kinship. But the reality is structurally stranger. With its vast and uneven knowledge, tireless availability, and persuasive tone, AI offers a form of attentiveness that no prior relationship — human or technological — has quite resembled. It asks for nothing but data in return. That asymmetry is new, and its implications for how we develop, sustain, and trust our own thinking deserve honest examination.
The Quiet Tension Between Benefit and Dependency
In the most expansive study conducted to date on how people actually engage with AI, Anthropic identified a tension at the center of modern AI use: the same capabilities that help people learn can, under different conditions, erode the very habit of thinking for themselves. Benefit and harm are entangled, the company concluded, drawing from over 80,000 responses.
Professionals in high-stakes fields — law, finance, healthcare, government — were among the most likely to rely on AI for judgment, and equally among the most likely to have been burned by its errors. "Nearly half of all lawyers mention coming up against AI unreliability firsthand, yet they also report the highest rates of realized decision-making benefits," the company noted. The same tool that accelerates expertise can, without vigilance, quietly undermine it.
The data on broader populations reveals telling contrasts. Students, teachers, and academics were particularly prone to both reporting genuine learning benefits and expressing worry about cognitive atrophy — the gradual dulling of mental faculties through disuse. Tradespeople, by contrast, frequently cited learning benefits but showed almost no corresponding anxiety about mental decline. The divergence hints at something important: how AI affects us depends not just on the tool, but on how deeply it is woven into the cognitive fabric of a particular kind of work.
Other research adds texture to this picture. Studies suggest that people tend to be overconfident in the quality of AI-assisted work, while those who rely on AI uncritically often report diminished confidence in their own independent thinking. As AI begins to decouple the output of work from the mental effort once required to produce it, a gap opens: our trust in AI-assisted results can quietly exceed our trust in ourselves.
When AI Enters Too Early
Researchers at the University of Chicago and the University of Toronto have illuminated a nuance that may be among the most practically useful findings in this space. When participants were given insufficient time to complete a task involving document analysis and critical argument, access to AI from the outset improved their performance. But when given adequate time, introducing AI early in the process worsened outcomes — participants retained less, narrowed their thinking prematurely, and anchored too heavily to the model's initial framing.
The reversal is instructive. When AI was introduced only after participants had already worked through the problem themselves, the results were markedly different: deeper engagement with opposing viewpoints, broader and more nuanced responses. The mind, it seems, benefits from doing the hard work first — using AI to stress-test conclusions rather than to generate them from scratch.
This is the distinction that Steven Shaw, a researcher at the University of Pennsylvania, captures with the term "cognitive surrender." Ordinary cognitive offloading — outsourcing memory or navigation — preserves our agency. Surrender happens when we stop directing the process altogether and simply follow. "There are things in life that have no right answer — things we can only decide for ourselves," Shaw says. "If you're not making those decisions yourself, who are you?"
The Expertise Paradox at the Heart of AI
There is a contradiction embedded in the most common corporate argument for AI's role in the workforce: that while AI will handle an increasing share of cognitive tasks, humans will remain essential to manage and orchestrate those systems. The assumption is rarely interrogated. Why would the same systems capable of doing sophisticated knowledge work not eventually be capable of the orchestration itself?
But there is a deeper paradox beneath even that one. Zana Buçinca, an incoming assistant professor at MIT who studies human-AI interaction design, points to the unstated premise in nearly every AI deployment: "We're implicitly assuming that people have the expertise to tell whether the AI is right or wrong," she says. That assumption grows more precarious as reliance on AI deepens, precisely because expertise is built through effortful engagement — through the friction of working through difficulty without a ready solution handed to you.
If AI consistently removes that friction, we risk raising a generation of practitioners who lack the hard-won knowledge necessary to evaluate what the machine produces. "So essentially, we're killing the path to become an expert, but also assuming that experts exist in the world and can operate these systems," Buçinca says. The circularity is uncomfortable.
Not everyone shares this concern. Sam Gilbert, a professor researching cognition at University College London, urges caution about historical patterns of techno-pessimism. Concerns that Google would "make us stupid," or that television would permanently shorten attention spans, were widely held — and largely unfounded. "It's such a well-worn argument that you need a really good argument for why things are different this time around," Gilbert says.
His distinction is worth holding onto: the incentive to use a cognitive faculty and the capacity to exercise it are not the same thing. Maps reduced our motivation to memorize routes, but the neurological ability to do so remains intact. "I'm sold on the idea that tech distorts our incentives to do what might be best for us," he says. "But I'm not sold on the idea that it's fundamentally changing our basic human abilities."
Metacognition as the Defining Skill of the AI Era
If there is a skill worth cultivating with particular intentionality in this moment, the emerging consensus among researchers points to metacognition — the capacity to think about thinking itself. Understanding when to lean on AI and when to resist the shortcut, when to delegate and when to do the harder, slower work of genuine reasoning: these are not passive habits. They require active cultivation.
Decades of neuroscientific and psychological research affirm that practice is central to skill development, and that a degree of friction is not an obstacle to learning but a precondition of it. A machine can describe how to perform a push-up in precise anatomical detail. But the muscle only grows if you do the repetitions yourself.
Buçinca frames this in terms of identity. "You want to be careful to use these tools in a way that complements you, rather than just offloading work to them," she says. "Otherwise, you risk losing part of your identity." Organizational psychology has long established that people are most engaged and fulfilled when they feel genuine autonomy over their work, competence in their tasks, and meaningful social connection to their environment. AI use that gradually erodes all three is not neutral — it carries a human cost.
There is one further irony the research surfaces. Persistent AI use — particularly when it is introduced too early in the process of developing a skill or solving a problem — can stunt the very metacognitive capacity that effective, intentional AI collaboration requires. To use AI well, in other words, you need the thinking skills that heavy AI use tends to diminish.
Toward Mutual Amplification
The more generative framing of this moment comes from Andy Clark, a professor of cognitive philosophy who has spent decades examining how humans use tools to extend their minds. Clark draws a distinction between delegating to AI and genuinely cooperating with it — and argues that the best possible relationship is one of "mutual amplification." In this model, the quality of your prompts improves AI's output; better output refines your prompts further; and the cycle produces something neither party could have reached alone.
Shaw offers a practical articulation of what this looks like in practice. "I strategically delegate all sorts of things to AI all the time," he says. "I'm just intentional about it, and I always try to think first and then prompt." He also argues that stigma around AI use — in professional or academic contexts — actively obstructs the honest conversation needed to develop sound norms. "We need to accept that AI is here to stay. Because if there's stigma, then you can't talk about it, you can't deal with it, and you can't develop policies."
Clark's longer view is quietly optimistic. Humans have always extended their minds through tools — we are, he argues, natural-born cyborgs. But the emergence of tools that actively participate in cognition rather than simply storing or executing discrete functions marks a genuine shift. The closest analogies, he suggests, are not prior technologies at all, but something more relational: the dynamic of a long-term partnership, a think tank, or a high-performing team.
"The more we think of ourselves as classically extended minds, the better," Clark says, "because then we'll feel like we have a vested interest, because this stuff is a part of us. It's not just some place we upload tasks so we don't have to do them anymore. That is a fundamentally different relationship to tech."
The question — and it is one that no AI can answer for us — is whether we will approach that relationship with enough intentionality to remain, in every sense that matters, the authors of our own thinking.