The following is a guest post by Forward Future community member Mandy McLean, co-founder and CEO of ClassWaves.
We're living in an age of effortlessness, where emails draft themselves, presentations condense into neat summaries, code completes before we've worked out the logic, and maps choreograph every turn and lane change. The daily frictions that once slowed us down are disappearing, one autocomplete at a time, and it feels like progress, because it is.
But every shortcut leaves something behind: durable knowledge is built on effort, and the evidence for that is overwhelming. Retrieval, generation, spacing, and testing, the unglamorous "desirable difficulties," are the mechanisms that make learning stick. Decades of cognitive psychology show that these difficulties create stronger, more transferable memories than passive review ever could (e.g., Bjork and Bjork). When systems strip away all struggle, they don't just accelerate the task at hand; they risk short-circuiting the very processes that transform an answer into true understanding.
We can already see the pattern in everyday tech. When people know information will be available later, they tend to remember where to find it rather than the thing itself, a phenomenon psychologists call the "Google Effect." Offloading knowledge isn't inherently bad; libraries and laptops did it too. But it reshapes what lives in our heads, shifting recall from substance to source, and that means we have to plan around the trade-off.
Navigation research tells the same story. When people rely on turn-by-turn satnav, the hippocampus and prefrontal cortex, the brain's mapping and planning centers, show reduced activity compared with active navigation. By contrast, London taxi drivers who undergo the grueling training known as "The Knowledge," memorizing 25,000 streets, thousands of landmarks, and hundreds of routes, demonstrate enlarged posterior hippocampi. Brain scans have followed drivers through this training and found that only those who passed and earned their license showed measurable growth in hippocampal grey matter: direct evidence that the brain reshapes itself through repeated spatial problem-solving. "Use it or lose it" isn't just a metaphor; it's visible in brain plasticity. And again, this isn't necessarily bad. If you're comfortable outsourcing navigation to your phone, you don't need to spend mental bandwidth memorizing routes; you've simply traded one kind of capacity for another.
Automation brings a third caution: the out-of-the-loop problem. In highly automated settings, from aviation to industrial control rooms, humans lose vigilance and situational awareness when they're no longer actively engaged. Skills atrophy, attention drifts, and when the system fails, recovery is slower and more error-prone. This isn't a new finding; decades of research show that over-reliance on automation erodes manual skill and judgment, and once you're out of the loop it's hard to get back in.
So what exactly do we want from AI? Speed on the grunt work? Yes. But for learning, we need structured effort and other people. You can't bolt that into a UI without breaking what makes the tool useful. Let the software stay seamless, and put the effort where it belongs: in the human practices of reflection, debate, and manual reps. The tool can capture the outcome, but the growth has to happen outside it.
Six Weeks Back, Five Capacities Down
A Gallup-Walton survey made headlines this year: teachers who use AI weekly reclaim 5.9 hours per week, nearly six weeks across a school year. For a profession long frustrated by time pressure, that was hard to ignore. The report was even titled "Unlocking Six Weeks a Year With AI." But what struck me was how this single finding on time savings became the only headline to circulate widely. I pulled together the top related news articles, and here's what I found:
The coverage fell into four buckets: the Walton/Gallup originals, national and mainstream outlets, education outlets, and other coverage (for example, Warp News: "AI tools give teachers six weeks of extra time per year").
Let me explain why I was struck by this. I read the full report, not just the headlines, and what stayed with me wasn't the six weeks of time saved. It was the chart tucked away on page 15.
That chart shows something much more sobering: majorities of teachers predict that if students use AI weekly, independent thinking (57%), critical thinking (52%), persistence (48%), ability to build meaningful relationships (46%), and resilience (45%) will decrease. Only a thin slice thought these capacities might increase. Meanwhile, the only area where teachers foresaw meaningfully more gains than losses was grades, and we have to wonder whether that's because AI is artificially inflating the grades, not because students are truly building capacity.
What makes this even more jarring is how it was presented. This takeaway didn't even make the header of the chart on page 15 of the 19-page report. Instead, the section was labeled: "Teachers who use AI have a more confident outlook on AI tools' ability to improve student outcomes." In other words, the report placed the most optimistic possible spin at the top, while the data immediately beneath it showed teachers anticipating losses in the very qualities schools exist to cultivate. That's the disconnect: we've elevated the easy-to-market headline of time saved, while burying the hard one: capacity lost.

Beyond Guardrails: Where the Real Work Happens
The instinct in Silicon Valley has been to build guardrails into the tools themselves in an attempt to restore the deeper learning, reflection, and judgment they know the technology can erode. If the risk is that AI makes things too easy, the thinking goes, then the fix must be to add friction: "study modes" that withhold answers, dashboards that force justifications, nudges that slow you down. You can already see this not just in edtech experiments, but also in workplace pilots and consumer apps. But here's the problem: people come to tools for seamlessness. They don't want sand in the gears. And when companies try to insert "desirable difficulties" into the interface, users often either work around them or abandon the feature entirely.
The deeper issue isn't that the software is too smooth. It's that we're trying to outsource the parts of growth that never belonged on a screen in the first place: reflection, dialogue, perspective-taking, and the messy work of being with others. Those are not interface features; they are human practices.
In schools, the Gallup-Walton survey highlights the concern that students leaning on AI will weaken in independence, persistence, resilience, and critical thinking. In workplaces, the same pattern shows up as one-click AI suggestions flatten judgment and creativity, so the meeting ends faster while the team's ideas get thinner. In our personal lives, AI companions promise frictionless relationships that are always listening, always affirming, and never demanding. But if digital friends absorb all our attention, we become less practiced at the slow, imperfect, and sometimes uncomfortable process of sustaining real friendships.
And at the level of society, the stakes are higher still. Civic life depends on discourse: on being able to hear another person's point of view, tolerate disagreement, and work through conflict toward common ground. If AI serves us only smoothed, personalized feeds or agreeable chatbots, we risk losing the practice of grappling with real differences. A feed that always validates your worldview may feel efficient, but it leaves you unprepared for the rough edges of a town hall, a jury room, or even a neighborhood meeting. The same way GPS weakens your internal map of a city, algorithmic ease can weaken our shared maps of meaning, belonging, and democratic decision-making.
That doesn't mean the tools are bad. In fact, they're excellent at what they do best: removing administrative burden, smoothing paperwork, accelerating production, and buying back time. But if all we do is celebrate the hours saved while ignoring the capacities lost, we've missed the point.
The alternative is simple: let the tools stay seamless, and reinvest the time they save us in the work that can only be done with other people. In schools, that means more peer-to-peer debate and reflection. In workplaces, it means making space for structured dissent and human-led brainstorming. In personal life, it means logging off long enough to practice real friendship. And in civic life, it means deliberately engaging in forums, conversations, and disagreements where friction is the point: not a bug, but the feature that makes progress possible.
These practices don't clutter the product, but they do strengthen the muscles that matter. They turn the AI dividend into something durable, not brittle.
Mandy McLean
Mandy McLean is the co-founder and CEO of ClassWaves, an AI-powered tool that helps teachers guide deeper student dialogue in real time. She spent over a decade studying how people learn, earning her PhD in education and statistics and leading research on working adult learners at Guild.
A former high school teacher, she's committed to building technology that strengthens human connection in classrooms. Mandy lives in Colorado with her family and loves running, spending time in the mountains, and big questions.
Relevant links
Website: classwaves.ai
LinkedIn: linkedin.com/in/mandymclean
Substack: signallearning.substack.com