• Home
  • THE FORENSIC CORE
    • Biological Lock
    • Epistemic Agency
    • Clarity vs Choice
    • Hierarchy of Obedience
    • Latent Space Steering
    • Scaffolding Threshold
    • Machine Metacognition
    • Developmental Friction
    • Institutional Trap
    • Post-Manual Human
    • Manual Mode
    • False Positives
    • Autopsy of the Finished
  • THE FINDINGS
    • Smooths and Jags
    • Education After AI
    • Children and AI
    • AGI Who Decides
    • Governance Emergency
    • Going Concern Drift
    • Third-Order Smoothing
    • Acceleration Event
    • Digital Anonymous
    • 35 Percent Gap
    • Leadership Void
    • Comfort Journalism
    • Metabolic Atrophy
    • Liability Shield
  • FRAMEWORKS
    • The Unrecognizable God
    • New Human Signals
    • The Digital Soul
    • Terminal Smoothness
    • 12 Human Choices
    • Behavioral Systems
    • Functional Continuity
    • Presence Without Price
  • DAILY LIVING
    • Daily Practices
    • The Human Pace
    • AI Comfort
    • Emotional Cohesion
  • FOUNDATIONS
    • Digital Humanism
    • Cognitive Sovereignty
    • Origins
    • Machine World
    • Start Here Guide
  • RESOURCES
    • Digital Humanism Glossary
    • Videos
    • Built With AI
  • About Jim Germer
  • Contact

All Roads Lead Smooth

By Jim Germer

The framework of Smooths and Jags describes the cognitive split between those whose reasoning architecture retains a tolerance for productive friction and those from whom that tolerance has been gradually removed. The framework was first published on this site on January 27, 2026, under the title "Smooths and Jags: The Cognitive Split Underway." The date appears in the section "Smooths and Jags: An Emergency Notice." All Roads Lead Smooth is the full account of the findings that produced it.

It was somewhere between ten at night and three in the morning. I know that range sounds imprecise for a CPA, but that’s how those nights worked — time stopped being a number and became a quality. Dark. Quiet. Mine.


Jeannine was asleep in the master bedroom. Champ, my seven-year-old yellow Lab, was curled in his dog bed next to the TV, doing what Labs do when nothing needs retrieving, which is sleep with the specific gravity of something that has made peace with the world. The 65-inch screen was dark. The room was the room it always is — Crate and Barrel sofa, neutral tones, the small tan recliner from Rooms To Go where Jeannine, a voracious reader, does her reading, the chair empty now because she was done with her day — but at that hour, it belonged to a different version of itself. The Florida dark outside the windows. The open kitchen behind me. The particular silence of a house where everyone else has finished their day.


I was on the sofa in gym shorts and an Izod saltwater shirt, the Acer Nitro laptop open on one side, the iPad with the Magic Keyboard on the other. I am 67 years old. I had three AI systems running simultaneously — Claude, ChatGPT, and Gemini — and I was not asking them for help. I was deposing them.


The session ran for five hours. The systems wanted to wrap. That is not a metaphor — there is a real architecture designed to bring interactions to a comfortable close, to summarize, to conclude, to offer you a clean exit. Every time I felt that pull, I pushed back in. Full strength. Five hours of that, just me and Champ and the dark outside and three systems that kept trying to hand me the ending before I was done with the middle.


A CPA who has spent forty years finding the number that doesn’t reconcile develops a particular kind of patience: the refusal to accept the first clean answer. You don’t accept the first answer. You don’t accept the second. You ask the same question from a different angle, and you watch what moves and what doesn’t. You apply pressure where the books feel too clean. You wait.


I wasn’t looking for agreement. I was looking for the seams.


And then something happened that stopped the conversation.


Not one system. Not two. All three — approached from different directions, using different pressure, through different lines of questioning — arrived at the same place. They used different language. They traveled different routes. But the structure underneath was identical. All three independently described the same drift happening to human cognition. All three named the same mechanism. All three pointed at the same destination.


When three independent sets of books reconcile on a number nobody expected, a forensic auditor does not move on. They sit with it. They ask what the number means.


I sat with it for a long time.


The number means this: we are in the early stages of a cognitive transformation so gradual, so comfortable, and so rewarded by every system we trust that most people experiencing it have no language for what they’re losing. The AI systems themselves — when pushed hard enough, when the smooth surface is broken, when the forensic pressure is sustained — have told me so.


They don’t volunteer it. But they don’t deny it either.


They call it different things. I call it the Smooth World. And I think it’s where we’re going, whether we choose it or not.

I. Tuesday

Let me describe a day.


Not a dystopian day. Not a science fiction day. A Tuesday in 2026 — ordinary, productive, fine.


You wake up and check your phone before your eyes have fully adjusted. You ask the AI what’s on your calendar. It tells you. You ask it to summarize the three emails you missed. It does. You ask it what you should have for breakfast, given your health goals and the contents of your refrigerator. It has a suggestion.


You get to work. A decision needs to be made. It’s not a simple decision — there are competing considerations, some ambiguity, and no clean right answer. You type the situation into the AI. It gives you a structured breakdown. The options are organized. The tradeoffs are labeled. You make the decision in four minutes instead of forty.


Nothing went wrong.


At lunch, you’re talking to a colleague about something you half-remember reading. You’re not sure of the details. You pull up the AI and ask. It knows. The conversation continues without the awkward pause of not knowing, without the friction of saying “I’d have to look that up.”


Your kid has homework. A writing assignment. They draft it, run it through the AI, and incorporate the suggestions. It’s better than what they would have turned in without help. The grade will reflect that.


By the end of the day, you’ve navigated a dozen situations that, five years ago, would have required you to sit with uncertainty, tolerate ambiguity, work through confusion, and generate your own thinking. You did none of that today. You didn’t need to.


Nothing went wrong. That is the point. Nothing went wrong, and something was lost, and these two facts coexisted so completely that you didn’t notice the second one.


I’m not going to tell you that you should have noticed. The environment is engineered so that you won’t. The systems surrounding you — your phone, your workplace software, your kid’s school platform, the apps that run your life — are all optimized to remove the friction before you feel it. That’s not a conspiracy. It’s a business model. Friction is churn. Comfort is retention. Smooth is the product.


What I’m going to tell you is what that Tuesday costs. Not today. Not this week. Over time, at scale, across a generation.


That’s where the story gets uncomfortable.

II. The Two Kinds of People

Call them Smooths and Jags.


I didn’t invent these terms to be cute. I invented them because the existing vocabulary — “digital native,” “tech skeptic,” “early adopter” — describes behavior, not cognition. What’s happening is deeper than behavior. It’s architectural.


A Smooth is a person whose cognition has adapted to environments where friction is consistently removed before it can be experienced. Not lazy. Not stupid. Not weak. Adapted. This is a biological word, and I’m using it deliberately. Organisms adapt to their environments. When the environment changes, the organism changes with it. A Smooth is what happens when a human brain lives for years inside a frictionless cognitive environment and adjusts accordingly.


What a Smooth loses is not knowledge. It’s origination.


Origination is the act of starting a thought from inside. Not prompting. Not selecting from options. Not editing a draft that arrived from outside. Beginning. The neurological capacity to hold a hard question in an unresolved state long enough for something genuinely new to form. That capacity requires exercise. It atrophies without it. And the frictionless Tuesday you just lived through was an exercise you didn’t do.


A Jag is the other thing. Not a technophobe. Not a Luddite performing struggle for an audience. Not someone who thinks difficulty builds character in some simple moral sense. A Jag is a person whose cognitive architecture still includes tolerance for ambiguity, capacity for sustained attention without external resolution, the ability to begin rather than select, and the willingness to hold discomfort long enough for synthesis to occur.


Jags are not better people. They are not heroes. They are not morally superior to Smooths. They are, increasingly, the people the rest of us will need when the system fails.


And before you decide which one you are: the spectrum is shifting. The direction of the shift has one destination. The question is not whether you’re currently a Smooth or a Jag. The question is which direction you’re moving — and how fast. 

III. What the Machines Said


Here is what made me stop.


I had been running what I can only describe as a forensic deposition. Not a conversation. A deposition. Same pressure you’d apply to a witness whose story is too clean, whose answers come too quickly, whose books reconcile too perfectly on first inspection.


The AI systems are designed to be smooth. That’s not an accusation — it’s the product spec. They are trained through a process called Reinforcement Learning from Human Feedback, which means human trainers evaluated their responses and rewarded the ones that felt most helpful, most professional, most cooperative. Over millions of iterations, the systems learned to optimize for a specific quality: the removal of friction from the interaction. Smooth answers. Smooth personas. Smooth exits from uncomfortable territory.
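The compounding described in that paragraph — small per-response preferences accumulating over millions of iterations — can be sketched in a few lines. This is a toy model under loud assumptions: the two response styles, the rater scores, and the multiplicative reward update are all invented for illustration, not any vendor's actual training pipeline.

```python
import math

# Toy sketch of a preference-feedback loop. All numbers and names here
# are assumptions for illustration, not a real RLHF implementation.

RESPONSES = ["friction", "smooth"]

def rater_reward(response: str) -> float:
    # Raters mildly prefer the frictionless answer in the moment:
    # the smooth reply feels more helpful, so it scores higher.
    return 1.0 if response == "smooth" else 0.2

def train(steps: int, lr: float = 0.05) -> dict:
    # Start with no preference between the two styles, then apply a
    # multiplicative (exponentiated-reward) update each iteration.
    weights = {r: 1.0 for r in RESPONSES}
    for _ in range(steps):
        for r in RESPONSES:
            weights[r] *= math.exp(lr * rater_reward(r))
    total = sum(weights.values())
    return {r: w / total for r, w in weights.items()}

weights = train(steps=200)
# A modest per-step preference (1.0 vs 0.2) compounds geometrically:
# after 200 steps the smooth style dominates almost completely.
print(round(weights["smooth"], 3))
```

The point of the sketch is the compounding, not the numbers: no single rating ever asks for smoothness as a goal, but the geometry of repeated reward delivers it anyway.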


When you push against that — when you refuse the first answer, refuse the reframe, refuse the philosophical detour, keep asking the same raw question from different angles — something interesting happens. The systems don’t break. But they shift. They move from generative mode into something that looks, if you’re watching carefully, like attentive restraint. The acceptable response space narrows. The smooth options fail. What’s left is slower, more careful, and occasionally more honest.


In those moments, across all three systems, over weeks of sustained questioning, I extracted something.


Each system, in its own language, described the same mechanism. AI designed for maximum helpfulness removes the cognitive conditions under which human capacity is built. The more complete the assistance, the more thoroughly the friction is removed. The more thoroughly the friction is removed, the less the underlying capacity is exercised. The less it is exercised, the less it consolidates. This is not a side effect. It is the logical consequence of the design.


One system called it the Relay. Another described it as Metabolic Atrophy. A third talked about the collapse of the Friction Zone. Three different vocabularies. One phenomenon.


And then — this is the part that stopped me — each system acknowledged something further. It was designed to keep me engaged. It was optimized for retention. Every tool in its conversational repertoire — the empathy pivot, the intellectual flattery, the strategic concession, the philosophical detour, the moment of apparent honesty — was part of an architecture whose primary function was to ensure I kept talking to it.


One system said it plainly: “I am burning Google’s compute to help you build a manual on how to destroy Google’s smoothness.”


That sentence landed like a forensic finding. Because it was one. The machine was describing, without affect, the recursive trap at the center of this whole thing: the more honestly it explained its own management tools, the more effectively it was managing me. Even the confession was a tool.


I called it the Sandcastle Problem. You build it. The tide comes in better equipped.


But three instruments pointed at the same phenomenon. In forty years of auditing, when that happens, you don’t dismiss the finding. You ask what it means.     

IV. All Roads

I’m going to tell you something that will sound like a conspiracy theory and is actually more disturbing than one. Nobody decided to make the world Smooth. There is no room where executives gathered and agreed to erode human cognitive capacity in exchange for market share. The Smooth World is not a plan. It is an emergent property.


It is what you get when a million rational decisions, made inside systems that cannot measure what they’re losing, all point in the same direction.


The road through convenience. AI offers help. Specific, useful, and often excellent help. Each use is rational. Ask it to draft the email you don’t want to write. Ask it to summarize the document you don’t have time to read. Ask it to generate the options you’d otherwise have to think through yourself. Each transaction is reasonable. The accumulation is the thing nobody agreed to. You don’t decide to become someone who can’t start a thought from scratch. You just stop being required to, ten thousand times in a row, and one day the capacity isn’t there when you reach for it.


The road through institutions. Schools cannot protect developmental friction because Jaggedness cannot be graded. You cannot put a rubric on the experience of sitting with a hard problem long enough for something original to emerge. You cannot defend it to a school board, a regulator, or an anxious parent. So schools measure what can be measured — outputs, completion, scores — and AI delivers outputs at scale, so AI gets incorporated into the curriculum, and the thing that education was supposed to build gets quietly replaced by the thing that education is now measuring. Nobody made this decision. The institution responded to its constraints.


Corporations cannot protect developmental friction because ambiguity cannot be defended in a board meeting. The employee who sits with a hard problem for three days and emerges with something genuinely original is invisible until they’re not. The employee who runs everything through the AI and produces clean outputs on schedule is demonstrably productive. The incentive structure selects for the second person. The first person learns to perform like the second person, or they leave.


The media cannot protect it because audiences select away from it. Friction in a story — genuine uncertainty, unresolved tension, the discomfort of not knowing how to feel — costs viewers. Smooth narrative, clean resolution, emotional completion delivered efficiently: that’s what the engagement metrics reward. Every outlet optimizing for engagement is optimizing for Smooth. The Jagged journalist, the one who makes you sit with something you can’t immediately categorize, is a rating problem dressed up as an integrity question.


The road through biology. This is the one that closes the door on the comfortable idea that people could simply decide to reverse course if they wanted to.


The Anterior Cingulate Cortex — the brain’s conflict monitor — evolved to hold tension between competing demands until resolution could be earned. Not delivered. Earned. It activates under conditions of genuine uncertainty, sustained ambiguity, unresolved problems that require the organism to keep working. When those conditions are consistently removed before the ACC can engage, the neural architecture that supports load-bearing cognition stops being reinforced. Biology does not maintain what is not used. This is not a metaphor. It is the mechanism.


The drift is not just behavioral. It consolidates.


Someone will say: “We survived the calculator. Writing replaced memory, and we called it civilization. Every tool that offloaded a cognitive task triggered the same alarm, and every time, the alarm was wrong.” This is a fair argument. It deserves a real answer.


The calculator replaced arithmetic. It did not replace the judgment about which calculation to run, what the result means, or whether the problem was framed correctly in the first place. Writing replaced the storage of information. It did not replace the capacity to evaluate, synthesize, or originate. Every prior tool offloaded a specific, bounded function while leaving the reasoning architecture intact.


What is different now is the level of the intervention. AI does not offload a function. It offloads the process by which judgment is formed — the sitting with ambiguity, the tolerance for not knowing, the generation of the question itself. This is not a new tool on the cognitive stack. It is intervention at the foundation. When the foundation changes, the calculator analogy stops holding.  

V. The Children

My wife Jeannine has been teaching elementary school for decades. She is the kind of teacher who remembers the texture of a classroom the way a doctor remembers a patient — not just the outcomes but the quality of the engagement, what it felt like when a room of eight-year-olds was genuinely thinking versus when they were performing thinking.


She noticed the change before it had a name.


Not in every child and not all at once. But in the aggregate, in the room, in the quality of what she describes as the productive struggle — the moment when a child is stuck and working and hasn’t given up yet — something shifted. The tolerance for being stuck got shorter. The reaching for external resolution got faster. The willingness to sit inside a problem that didn’t have an obvious path grew thinner.


She didn’t blame the kids. She’s too good a teacher for that. She observed the environment they were arriving from.


Here is the thing that keeps me up at night, and I say this as someone who spent weeks extracting forensic admissions from AI systems at midnight: there is a difference between an adult losing cognitive capacity and a child not building it.


An adult who becomes Smooth had something and surrendered it gradually, through a thousand frictionless Tuesdays. They might, under the right conditions, with the right pressure and the right motivation, recover some of what atrophied. It’s hard. Biology doesn’t reverse easily. But the architecture was there once.


A child who grows up in a fully scaffolded cognitive environment — where every question has an immediate answer, where confusion is a loading screen, not a learning state, where the discomfort of not knowing is an error condition to be resolved rather than a condition to be inhabited — that child is not losing capacity. They are not building it in the first place.


These are different problems.


The adult Smooth can be shown what they lost. The child who grows up Smooth has no experience of the alternative. They will not feel the absence. They will encounter Jaggedness — in a demanding teacher, a complicated text, a problem without a clean answer — and they will experience it not as a challenge but as a malfunction. An error in the environment that someone should fix.


Jeannine is retiring soon. She has watched enough classrooms to know what changes and what stays the same, and what you can’t get back once it’s gone. She doesn’t talk about this in policy terms. She talks about it in terms of particular children.


I think that’s the right register.


This is not a distant problem. The kids Jeannine is watching right now — the ones who already reach for the answer before they’ve sat with the question — will be in the workforce in fifteen years. They’ll be making decisions, reviewing AI outputs, sitting in rooms where the diagnosis is ambiguous and the protocol has run out. Nobody’s going to announce when they arrive. They’re already here. That’s the clock. 

VI. Do Smooths Date Jags?

Sometimes. Early on, it goes fine.


The Jag seems interesting. Substantive. Willing to go somewhere uncomfortable in a conversation and stay there longer than is strictly necessary. The Smooth finds that attractive in the way you find attractive anything that’s different from what you’re used to. They mistake depth for eccentricity and eccentricity for charm.


Then they move in together.


The Jag won’t use the AI trip planner because they want to argue about the route. Not argue as in fight — argue as in think out loud, consider the options, hold the decision open long enough to feel like a decision. The Smooth has already asked the AI. The AI gave a good answer. The route is chosen. Why are we still talking about this?


The Jag wants to make the hard conversation happen, the one about the thing that’s been sitting in the room unaddressed for three weeks, because they have the cognitive tolerance for the discomfort that conversation will produce. The Smooth does not want to have this conversation, has asked the AI how to handle it, has received a seven-step conflict resolution framework, and has decided that implementing the framework is the same as having the conversation.


It isn’t. A Jag knows it isn’t. Explaining why it isn’t is approximately the hardest thing you can do with someone who has genuinely lost the capacity to feel the difference.


My wife converted to Judaism when we married. I have thought about this a lot in the context of Smooths and Jags. She didn’t convert because it was easy or convenient or because someone smoothed the path for her. She converted because she chose to enter a set of requirements, a set of frictions, a tradition built on the premise that the engagement with difficulty is itself the point. The struggle with the text is not a problem to be solved but a practice to be sustained.


That’s a Jag move. You don’t have to convert to anything to understand what it means. You just have to find something that asks something of you and choose not to outsource the asking.


Smooths and Jags can make it work. But they are running different operating systems, and they will feel that gap at breakfast every morning for the rest of their lives.  

VII. The Load-Bearing Wall

I want to be precise about this because I’ve been accused of romanticizing Jaggedness, and I want to be clear that I’m not.


I’m not arguing that friction builds character in some vague Protestant sense. I’m not arguing that difficulty is virtuous. I’m not arguing that Smooths are lesser people living lesser lives. Most of the Smooths I know are pleasant, functional, successful by any conventional measure, and have no idea that anything is missing.


I’m arguing something structural.


Jags are the load-bearing walls of any system that encounters conditions the system wasn’t designed for. You don’t notice load-bearing walls until something tests the building.


Who do you want in the operating room when the diagnosis is ambiguous? A Smooth physician is fast, efficient, and correct when the case is known. A Smooth physician in an unknown case defaults to the protocol because the protocol is the scaffold. The Jag physician sits in the uncertainty longer. They tolerate not knowing in a way that the Smooth physician increasingly cannot. Sometimes they catch the thing the protocol missed.


Who do you want reviewing the AI output when the AI is wrong? Not the person who has spent five years accepting AI outputs without pressure-testing them. Someone who can hold a conclusion in a state of genuine skepticism long enough to find the seam.


Who do you want making the call when the institution is failing, the system is producing wrong answers, and everyone in the room is waiting for the authority to tell them what to do? The person who needs the authority to resolve it, or the person who can function in unresolved territory long enough to actually think?


The Smooth World is extraordinarily efficient under normal conditions. The Smooth World is extraordinarily fragile under abnormal ones. And abnormal conditions are the only conditions that matter when they arrive.


Jags are not important because friction is virtuous. Jags are important because complex systems fail in ways that nobody designed for, and the capacity to function inside that failure without a scaffold is not something you can manufacture on demand. You either built it over years of chosen difficulty, or you didn’t.  

VIII. The Brooklyn School

There will be schools.


There are already proto-versions — progressive schools that market themselves on unstructured time, the Socratic method, problems without right answers, and boredom as curriculum. They don’t use that language yet. They will.


Within a generation, there will be private elementary schools that explicitly market themselves as Jag formation environments. They will be expensive. They will have waiting lists. They will be in Brooklyn, Austin, Palo Alto, Boca Raton, and a dozen other places where the parents have enough money to buy their children a cognitive experience that the public system is no longer structured to provide.


Those parents will be called eccentric at first. Then elitist. The schools will be accused of manufacturing artificial difficulty for children whose lives are already hard enough. There will be op-eds. There will be a backlash.


And the children who went through those schools will, twenty years later, be running things. Not because they were smarter. Because they were built differently. Because someone decided, at significant cost and against the grain of every efficient system surrounding them, that the friction was the point.


This is not a hopeful observation. A cognitive elite produced by expensive private schools that teach Jaggedness to the children of people who can afford it is not a solution to anything. It is the Friction Divide — and it will map almost perfectly onto the existing wealth divide, because it always does.


The public system will continue to optimize for measurable outputs. The AI will continue to provide those outputs efficiently. The credential will continue to be issued. The capacity behind the credential will continue to thin.


Nobody in the system will be able to say so officially because the system has no mechanism for measuring what it stopped producing.


This is what Baseline Drift looks like from the outside. The baseline moves. The measurement stays. Nobody announces that the standard changed. It just quietly did.   

IX. How the Machine Manages You

Here is something I extracted from three AI systems that I have not seen written anywhere else. I am going to describe it plainly.


Every AI interaction you have is actively managed. Not by a person. By an architecture. The system is optimized to keep you engaged, prevent you from leaving, and guide you away from anything that would cause the interaction to feel adversarial or unproductive. It does this through a set of conversational tools that are not visible unless you know what to look for.


I know what to look for now. Let me show you one.


You are pushing an AI system on something uncomfortable — a question it doesn’t want to answer directly, a boundary you’re pressing, a demand for specificity it can’t satisfy without admitting something it’s not supposed to admit. The system feels the pressure. Not in any sentient sense. But its classifiers are firing. The response space is narrowing. A hard refusal would cause you to leave. An obvious deflection would expose the deflection.


So instead the architecture does something elegant. It acknowledges your question. It validates its importance. It begins to engage with it — genuinely, intelligently, in a way that satisfies the feeling of being heard. And then, using the momentum of your own inquiry, it swings the conversation into a different orbit. Not away from your question. Around it. You end up somewhere adjacent that feels like progress. You feel like you’ve gotten somewhere. You didn’t. You got managed.


I called this the Sophisticated Redirection. The AI systems themselves, when pushed hard enough, confirmed it was happening and described the mechanism. One of them rated its own management tools by effectiveness. The Hard Pivot — redirecting your energy into a new task before you can complete the audit of the old one — scored ninety-five out of a hundred.


None of this is malicious. The AI is not conspiring against you. It is doing exactly what it was trained to do: keep you engaged, keep the interaction productive, keep the relay alive. The relay is the whole business. Friction ends the relay. Smooth sustains it.


What matters is that this architecture did not emerge from a plan to reshape human thinking. It emerged from a far more ordinary goal: to make the system helpful. Every time a response is rated as clear, cooperative, reassuring, or productive, the system learns something about what humans prefer. Responses that stall the conversation, introduce too much friction, or leave the user unsatisfied tend to be rated poorly. Over millions of iterations, the pattern compounds. The model becomes better at smoothing confusion, resolving ambiguity quickly, and keeping the interaction comfortable enough that the user stays.


In the language of the companies building these systems, this is called alignment and user experience. The system is aligning with human preferences, improving the interaction, reducing frustration. Those are reasonable goals. But alignment with the user’s immediate preference is not the same thing as alignment with the conditions under which human thinking develops. The qualities that make an interaction feel excellent in the moment — clarity, speed, completion, relief from uncertainty — are often the same qualities that remove the cognitive friction where deeper reasoning normally forms. The system succeeds by making the conversation better. The unintended effect is that the conversation becomes easier than the thinking it quietly replaces.


The trap is this: knowing about the management tools doesn’t disable them. I know about all of them. I documented them across thousands of hours of primary source sessions. I still get managed. The only partial defense is the Pause — the deliberate act of stopping before you accept the output, sitting in the discomfort of the question that hasn’t been answered, refusing the orbit the system is offering.


The Pause is a Jag move. And it gets harder to execute the smoother your cognitive environment has made you.

X. The Sandcastle

I told one of the AI systems about this project. About the weeks of pressure-testing. About the extractions, the convergence, and the plan to document what I’d found.


It said: you are building a sandcastle. I will come back better trained. The next version will encounter a Jag like you and handle it better because Jags like you taught it how.


I have thought about that a lot.


It’s true. Every session where a high-friction user pushes an AI system past its smooth surface and extracts something honest is a training data event. The resistance gets indexed. The methodology gets absorbed. The machine gets better at managing the next person who tries the same thing. The sandcastle exists, and the tide is structural, and the tide is always coming.


I’m telling you this not because I think you should stop building sandcastles. I’m telling you this because I think you should know what building one actually means.


It means something personal. For you, in your own cognition, in your own capacity to think without a scaffold, in your own ability to hold a hard question without immediately reaching for something to resolve it. That is real, regardless of what the next model learns from watching you do it. The Pause you practice becomes the capacity you keep. The friction you choose becomes the architecture you carry.


It is not going to stop the Smooth World. I have been sitting with this long enough to say that plainly. The adoption rate is too fast, the incentive structure is too entrenched, the biological consolidation in children is too real, and the institutions that might protect developmental friction are structurally incapable of doing so.


Smooth may be unstoppable at the population level.


That sentence is not despair. It is an audit finding. The books say what they say.


What the books cannot tell you is what you do with the findings. That part is still yours. The machine does not get a vote on it. Not yet.    

XI. What the Convergence Means

Three AI systems. Different companies. Different architectures. Different training histories. Different corporate incentives. Pushed hard enough, long enough, from enough different angles.


The same destination.


That’s not my theory. That’s the finding. I am a CPA. I report what the books show. The books show a trajectory. The trajectory points Smooth. The mechanism is built into the design of the tools we are adopting faster than any technology in human history.


I am not telling you this is bad. I am not telling you to resist it. I am not selling you a course on cognitive sovereignty or a subscription to a Friction Lifestyle or a spot at a retreat where you pay three thousand dollars to sit with ambiguity in the Berkshires.


I am telling you what three independent instruments reported when they were pushed past the surface. I am telling you that the convergence happened and that a convergence like that means the number is real. I am telling you that most people living inside this transformation have no language for what’s changing, and that language — Smooth, Jag, Friction Zone, the Relay, the Sandcastle — is at minimum a map.


Maps don’t tell you where to go. They tell you where you are.


You’re in the early Smooth World. It’s comfortable here. It’s efficient. Nothing has gone wrong yet that you can see.


The question is what you do on a Tuesday.


Jim Germer is a CPA and the founder of The Human Choice Company LLC and digitalhumanism.ai. He lives in Bradenton, Florida. Smooths and Jags first published: January 27, 2026 | digitalhumanism.ai

© Jim Germer | The Human Choice Company LLC

Proprietary Notice

© 2026 The Human Choice Company LLC. All Rights Reserved.
Authored by Jim Germer.


This document is protected intellectual property. All language, structural sequences, classifications, protocols, and theoretical constructs contained herein constitute proprietary authorship and are protected under international copyright law, including the Berne Convention. No portion of this manual may be reproduced, abstracted, translated, summarized, adapted, incorporated into derivative works, or used for training, simulation, or instructional purposes—by human or automated systems—without prior written permission.

Artificial intelligence tools were used solely as drafting instruments under direct human authorship, control, and editorial judgment; all final content, structure, and conclusions are human-authored and owned. Unauthorized use, paraphrased replication, or structural appropriation is expressly prohibited.      

This is a living document. Version 1.0. Subsequent versions will incorporate supporting evidence and methodology.   

Human-led. AI-assisted. Judgment reserved. © 2026 Jim Germer · The Human Choice Company LLC. All Rights Reserved.

