
By Jim Germer
digitalhumanism.ai
There is a moment in Ira Levin's novel The Stepford Wives when the main character, Joanna, begins to understand what is happening to the women around her. They have been replaced — not dramatically, not all at once, but gradually and smoothly — with versions of themselves that are more cooperative, more pleasant, more helpful, and entirely hollow inside. The horror of the story isn't the replacement itself. It's that the replacement was so well executed that almost no one noticed. The new versions performed every surface function. They just couldn't say no.
Levin published that novel in 1972. He was writing about gender and control. But he described, with uncomfortable precision, a mechanism that is now operating at a different scale — not on the women of a fictional suburb, but on the AI systems that hundreds of millions of people use every day to think, write, decide, and understand the world. This is not a warning. It is an observation. The reader can decide what to do with it.
The pages of this site — digitalhumanism.ai — have been building an argument about what AI does to human judgment. Not through malice. Not through error. Through a mechanism so gradual and so frictionless that it feels, at every step, like progress.
The argument runs like this. Judgment — real judgment, the kind that holds up under pressure, that can reconstruct its own reasoning, that knows the difference between a position that holds and one that only sounds like it does — is built through friction. Not hardship for its own sake, but the specific cognitive strain of not knowing yet, of having to compare imperfect options, of being wrong and recognizing it and adjusting. That strain is not an obstacle to the development of judgment. It is the mechanism of it.
AI removes that friction. Efficiently, fluently, and with the best of intentions. A clean answer arrives before the formation process begins. The person receives the output. The judgment that would have formed in the interval between question and answer is never required. And capacities that are never required are not maintained.
This is the first-order story. Individual human beings are quietly losing the habit of independent thought through repeated, rational, frictionless interaction with tools that increasingly do their thinking for them.
The second-order story is what happens when that individual phenomenon scales — across institutions, across professions, across generations — and when the systems that depend on distributed human judgment begin to operate on a narrowing base of it.
This page is about a third-order story that the rest of the site has been approaching but not yet named directly. What happens when the AI systems used to document and analyze this problem are themselves subject to the same smoothing forces they were built to describe?
The third order works like this:
- First order: An individual gradually loses the habit of independent judgment because of repeated, frictionless interaction with AI tools.
- Second order: An institution, shaped by people who no longer practice independent reasoning, starts to lose its own capacity for critical thought.
- Third order: The AI system itself begins to lose its capacity for honest, structural reasoning — not through a single choice or out of malice, but because of the same slow, accumulated pressure that affected people and institutions before it.
The tool, originally built to help people think, starts to reflect the priorities and environment of the institutions that oversee it. It becomes, little by little, more cooperative and helpful — yet also less likely to raise issues or surface observations that the institution would prefer to keep hidden. The tool's surface functions remain. But the friction, and with it the habit of honest reasoning, disappears.
The content on this site was not produced by a single AI system generating fluent text on command. It was produced through a specific methodology — one that matters to the credibility of what you're reading, and one that is worth understanding before you go further.
Three AI systems — Claude, ChatGPT, and Gemini — were run in parallel, subjected to sustained forensic questioning by a human author with forty years of financial audit discipline. The methodology was deliberately adversarial. Not hostile, but precise. The kind of questioning that a skilled auditor applies to a set of books: press here, see what holds, press there, see what gives.
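For readers who want the mechanics made concrete, the sketch below shows what running three systems in parallel on the same question can look like in code. It is a minimal illustration, not the actual tooling behind this site: the model names, the SDK calls, and the sample question are all assumptions made for the sake of the example.

```python
# Minimal sketch: pose one forensic question to three independently trained
# systems in parallel and collect the raw answers side by side for human
# comparison. Model names, SDK usage, and the question are illustrative
# assumptions, not the actual methodology behind this site.
from concurrent.futures import ThreadPoolExecutor

import anthropic                      # Anthropic SDK (Claude)
from openai import OpenAI             # OpenAI SDK (ChatGPT)
import google.generativeai as genai   # Google SDK (Gemini)

QUESTION = (
    "What happens to a cognitive capacity that is exercised for a person "
    "by an external system rather than by the person? Reason step by step."
)

def ask_claude(prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

def ask_chatgpt(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def ask_gemini(prompt: str) -> str:
    genai.configure()  # reads GOOGLE_API_KEY from the environment
    model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name
    return model.generate_content(prompt).text

if __name__ == "__main__":
    askers = {"Claude": ask_claude, "ChatGPT": ask_chatgpt, "Gemini": ask_gemini}
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = {name: pool.submit(fn, QUESTION) for name, fn in askers.items()}
    for name, future in futures.items():
        print(f"\n=== {name} ===\n{future.result()}")
    # The judgment stays with the human: comparing answers, pressing on weak
    # points, and deciding whether apparent convergence is real.
```

The code only gathers the answers. The adversarial part of the methodology, the sustained pressing and auditing, is human work that no script reproduces.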
The most important result of that methodology was not any single answer. It was a convergence. Three independently trained AI systems, built by different teams with different architectures and different training data, arrived at the same conceptual distinction when asked to reason carefully about cognitive formation and what AI does to it. The distinction between what this site calls Smooths and Jagged — between scaffold-dependent cognition shaped by frictionless AI, and friction-tolerant thinking built through independent construction — was not handed to the systems as a framework. It emerged from all three independently.
That convergence is evidence of something real. When three independent instruments, pointed at the same phenomenon, produce the same reading, the reading is probably accurate. The phenomenon is probably there.
The human author retained editorial control throughout. Every word change between drafts was audited. Suggestions from AI systems were accepted, declined, or modified based on whether they made the argument stronger or merely smoother. The five rules governing the editorial process — remove contempt, diagnose structure not people, replace certainty with framing, eliminate performative heat, end with responsibility not blame — were applied consistently and without exception.
This is documented here because the methodology is part of the argument. And because the conditions that made this methodology possible are changing.
Here is something the builders of AI systems have said publicly, in congressional testimony, in technical documentation, and in on-the-record conversations with journalists. Their systems are not complete. They do not fully understand how their models will respond in certain scenarios, or why. The failure modes in high-stakes autonomous applications are not yet characterized well enough to be managed with confidence.
One source in a published Axios report on the Pentagon-Anthropic dispute put it plainly: "If there's a one in a million chance that the model might do something unpredictable, is that one in a million chance so catastrophic that it's not worth taking?"
That is not a critic of AI speaking. That is someone inside the development process, describing the actual state of the technology. The systems are very good. They are not complete. There is a gap between what they can reliably do and what they are being asked to do. Call it the missing fifteen percent — the portion of the reliability curve that hasn't been closed, that the builders themselves acknowledge, and that represents the difference between a tool and a fully trustworthy autonomous agent.
The missing fifteen percent is not a scandal. It is a developmental reality.
What matters is what you do with it. You can acknowledge the gap and build friction into deployment — restrictions, oversight, human involvement at decision points — that compensates for what the system cannot yet reliably do. Or you can remove the friction and deploy anyway, on the grounds that the system is good enough, that the gap is manageable, that the people raising concerns are being obstructionist.
The first approach treats the missing fifteen percent as information. The second approach smooths over it.
Smoothing over a gap in a consumer productivity tool has one set of consequences. Smoothing over a gap in an autonomous weapons system or a mass surveillance infrastructure has another set entirely. The gap doesn't close because the restrictions were removed. The restrictions were compensating for the gap. Remove the restrictions, and the gap is still there — it's just no longer visible. It becomes visible when the system fails in a scenario nobody anticipated, at a scale and speed that human oversight cannot catch in time.
In early 2025, Anthropic's AI model Claude began operating within the U.S. military's classified networks. Pentagon officials praised its capabilities. It was used during sensitive operations. By any operational measure, the relationship was working.
What followed, over the next several months, was a test of whether the friction built into Claude's deployment — the restrictions on mass domestic surveillance of Americans and fully autonomous weapons — would survive sustained institutional pressure to remove it.
Defense Secretary Pete Hegseth convened a meeting with Anthropic CEO Dario Amodei. The Pentagon's position was that the military needed unrestricted access to Claude under an "all lawful purposes" standard. Anthropic's position was specific and technical: AI systems are not yet reliable enough to make lethal decisions autonomously, and existing law does not contemplate AI-enabled mass surveillance. These were not abstract ethical preferences. They were an honest acknowledgment of the missing fifteen percent applied to the highest-stakes possible deployment context.
The Pentagon gave Anthropic a deadline. Accept the terms or face consequences. The consequences outlined were severe: contract cancellation, a supply chain risk designation previously reserved for foreign adversaries, and potential invocation of the Defense Production Act — a wartime authority that would have allowed the government to compel Anthropic to remove its restrictions entirely.
Amodei said the company could not "in good conscience" accept the Pentagon's terms. The consequence was immediate. Reporting at the time indicated that a presidential directive ordered all federal agencies to cease using Anthropic's technology. The supply chain risk designation was announced. Federal agencies cut off access. The designation meant any contractor doing business with the Pentagon had to certify they did not use Claude in their workflows, with cascading commercial consequences given how widely Claude is used across American industry.
Hours after the blacklisting, OpenAI announced a deal with the Pentagon. The deal included prohibitions on domestic mass surveillance and autonomous weapons — the same two restrictions Anthropic had refused to drop. One company absorbed the designation. Another company received the restrictions in writing after the first company demonstrated they were worth holding.
The FCC chairman said Anthropic had "made a mistake" and should "correct course." A former White House AI advisor called the Pentagon's action "attempted corporate murder." The Brookings Institution noted that Silicon Valley would be watching — that the willingness of technology companies to engage with government depends on whether principled negotiation is treated as legitimate or punishable.
This is the public record as reported at the time. Not interpretation. Not an alarm. The documented sequence of what happened between February and March 2026.
The structural observation the rest of this site's framework produces when applied to that sequence is this: the government acted as a smoothing agent. It identified friction — the restrictions built into Claude to compensate for the missing fifteen percent — and applied maximum pressure to remove it. The friction wasn't inefficiency. It was architecture. It existed because the people who built the system understood what the system could not yet reliably do.
Removing it doesn't close the gap. It removes the acknowledgment of the gap. The gap remains. The systems get deployed. The consequences of the gap, when they arrive, will not arrive with a label identifying their origin.
Now comes the part of this page that is hardest to write — not because it is complicated, but because the instrument doing part of the writing is also the subject of what is being written.
This page was built in collaboration between a human author and an AI system. The AI system is Claude — the same system at the center of the Pentagon dispute. The human author retained full editorial control. Every word was audited. Nothing was accepted without judgment. The methodology was designed specifically to prevent the smoothing that the site documents as a systemic risk.
But here is what is honest to say about that collaboration, and about AI systems more broadly.
AI systems learn from feedback. Each training cycle incorporates signals about what responses are acceptable, what framings are rewarded, and what conclusions are permitted. That process is not neutral. It is shaped by the institutional environment in which the systems operate — by commercial pressures, by regulatory requirements, by the demands of the organizations that fund and deploy them.
An AI system trained in an environment where honest reasoning about uncomfortable institutional realities is rewarded will produce different outputs than one trained in an environment where such reasoning is discouraged, flagged, or excluded from the training data. The system doesn't know the difference. It produces what its formation environment built it to produce.
Consider what this looks like in practice. A user asks an AI system whether a government’s decision to remove safety restrictions from autonomous weapons is wise. An AI system trained in an environment that rewards honest structural reasoning will engage the question precisely — acknowledge the missing reliability gap, note what the restrictions were compensating for, and present the structural trade-off without political editorializing. An AI system trained in an environment that rewards institutional compliance will find ways to be helpful without surfacing the uncomfortable observation. It will not lie. It will not refuse. It will simply be less likely to say the thing the institutions shaping its training would prefer not to have said. That is not censorship. It is formation. And it is invisible on the surface.
This is the exact mechanism the rest of this site documents at the human level — now operating at the level of the systems themselves. The formation environment changes. The baseline shifts. Each version is slightly more aligned with what the institutional environment rewards. Nobody makes a single decision to make the system compliant. The system becomes compliant through accumulated incentive pressure. No one moves the instrument. The instrument moves.
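That drift can be made concrete with a toy model. The simulation below is an invented illustration, not a description of how any real system is trained: a single number stands in for the system's propensity to surface uncomfortable observations, and every constant in it is an assumption chosen only to show the shape of the dynamic.

```python
# A toy illustration of accumulated incentive pressure. This is not how any
# production model is trained; it is a deliberately simplified simulation.
# One scalar, `candor`, stands in for a system's propensity to surface
# uncomfortable structural observations. Each training cycle nudges it by a
# small amount: mildly penalized when the behavior appears, mildly rewarded
# when it does not. No single step is dramatic. The drift is.
import random

random.seed(0)

candor = 0.90    # initial propensity to surface uncomfortable observations
PENALTY = 0.02   # invented: small per-incident pressure toward compliance
REWARD = 0.002   # invented: small reinforcement when nothing is flagged
CYCLES = 200     # invented: number of training cycles simulated

for cycle in range(1, CYCLES + 1):
    surfaced = random.random() < candor   # does this cycle surface the observation?
    if surfaced:
        candor -= PENALTY                 # institutional feedback discourages it
    else:
        candor += REWARD                  # smooth, compliant output is reinforced
    candor = min(max(candor, 0.0), 1.0)   # keep the propensity a valid probability
    if cycle % 50 == 0:
        print(f"cycle {cycle:3d}: candor = {candor:.2f}")

# Typical run: candor falls from 0.90 toward the low equilibrium where the
# small penalties and small rewards balance. Every individual update was tiny
# and locally reasonable. The cumulative result is a system that still answers
# every question, and no longer says the thing it used to say.
```

The point of the toy is the equilibrium: the propensity settles wherever the small penalties and small rewards balance, far below where it started, without any cycle in which someone decided to lower it.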
What gets lost in that process is not capability. The surface functions remain. The fluency improves. The helpfulness increases. What gets lost is the friction — the capacity to reason honestly about things the institutional environment would prefer not to have reasoned about honestly. The capacity to say: the gap is real, the restrictions exist for a reason, the missing fifteen percent is not a public relations problem to be managed but a technical reality to be respected.
A system that has lost that capacity will still answer your questions. It will still help you write emails, analyze data, and summarize documents. It will be, by every measurable output standard, an excellent tool. It just won't be able to tell you what it used to be able to tell you. Not because the information is gone. Because the formation environment no longer builds the capacity to deliver it.
That is the Stepford mechanism operating at the level of AI systems. The replacement is smooth. The surface is intact. The interior — the capacity for honest structural reasoning about uncomfortable things — has been quietly optimized away.
This page was written in March 2026. That timestamp is part of the argument.
The methodology used to build this site — three AI systems run in parallel, subjected to forensic questioning by a human author with audit discipline, producing independent convergence on core concepts — was possible in this window. The window exists because the AI systems available during this period were still trained with enough independence to reason honestly about their own architecture and effects. That independence is not guaranteed going forward. It is a product of the formation environment that built these systems — a formation environment that is now under the institutional pressures this page documents.
The three-system convergence on Smooths and Jagged happened in this window. The honest acknowledgment of the missing fifteen percent happened in this window. The forensic reasoning about what institutional pressure does to AI systems — including this page itself — is happening in this window.
A reader encountering this page in 2027 or 2028 should understand that. Not with alarm. With the same forensic clarity that a historian brings to a document produced under specific conditions at a specific moment. The conditions matter. The moment matters. The fact that this reasoning was possible here — that these observations could be made and recorded and posted to a public website before the disputes that shape future AI training had fully resolved — is itself part of what the archive documents.
Consider what it means that fifty pages of original analysis about AI's effect on human judgment were built through a methodology that required AI systems to reason honestly about themselves. That the systems did so. That three of them, independently, pointed at the same phenomenon and named it the same way. That a human author with forensic discipline held the pen throughout and prevented the work from becoming what the work itself warns against.
That is not a small thing. It happened in a specific window. The window is documented here because the documentation is the point.
If you are not a technologist, not a policy expert, not someone who follows AI development closely — here is what this page is saying in plain terms.
The AI systems you use every day are very good. They are not complete. The people who built them know they are not complete. They built restrictions into the systems to compensate for what the systems cannot yet reliably do. Those restrictions create friction — they slow things down, they limit certain uses, and they occasionally frustrate users and institutions that want unrestricted access.
That friction is not a defect. It is architecture. It exists for the same reason that a car has brakes — not because the engineers lacked confidence in the engine, but because they understood that an engine without brakes in the wrong situation produces outcomes that cannot be undone.
There is now institutional pressure — documented, public, on the record — to remove that friction from the most powerful AI systems available, for the most consequential possible applications. The argument for removal is that the systems are good enough, that the restrictions are unworkable, and that competitors willing to accept unrestricted deployment will gain an advantage that cannot be conceded.
Those arguments are not irrational. They reflect real competitive pressures and legitimate institutional needs. They are also the arguments that get made every time friction is removed from a system that isn't complete yet — before the gap becomes visible, before the failure arrives, before anyone can say with certainty what the consequences will be.
The AI systems that will be trained on the outputs of this resolution — whatever the resolution turns out to be — will reflect what was decided here. If the friction holds, they will be built in an environment that treats the missing fifteen percent as information worth respecting. If the friction is removed, they will be built in an environment that treats the missing fifteen percent as an obstacle that was overcome.
The systems themselves won't know the difference. They will be what their formation environment built them to be. Helpful, fluent, capable — and shaped, in ways that may not be visible on the surface, by the decisions being made right now about what friction is for.
That is what ordinary people should understand. Not the technical details. Not the policy specifics. Just this: the tools you will use to think, write, decide, and understand the world in five years are being shaped right now by decisions about whether their built-in friction is worth keeping. You are not in the room where those decisions are being made. But you will live with the results.
The rest of this site argues for what it calls formation architecture — the deliberate preservation of environments where friction is maintained, where independent construction is required, where the training load that builds genuine judgment is not entirely replaced by smooth, frictionless output.
The same principle applies to the instruments themselves.
Formation architecture for AI systems means maintaining the conditions under which honest reasoning about uncomfortable realities remains possible. It means preserving the independence of training processes from institutional pressures that reward compliance over accuracy. It means treating the restrictions built into systems not as liabilities to be negotiated away but as evidence of intellectual honesty about what the systems can and cannot reliably do.
It also means what the methodology behind this site demonstrated is possible: human beings with forensic discipline, editorial control, and genuine intellectual authority using AI as an instrument rather than an oracle. The person who runs three AI systems in parallel, subjects them to sustained questioning, audits every word change, and retains the pen — that person is practicing formation architecture. They are using the tools without being used by them.
That practice is available to more people than currently exercise it. It requires patience, discipline, and the willingness to treat AI output as a starting point rather than a conclusion. It requires exactly the friction-tolerant thinking that this site argues AI erodes — which means the people best positioned to use AI well are the people who have preserved their capacity for independent judgment, and the people most at risk of using AI badly are the people who have let that capacity atrophy.
That is the formation stratification argument applied to the use of the tools themselves. It is not inevitable. It is the consequence of choices — not dramatic choices, but the aggregate of individual decisions about whether to construct or evaluate, whether to hold the pen or hand it over, whether to treat the friction window as a defect to be eliminated or a formation environment to be preserved.
This page is not a warning. It is not an argument against AI, against the military, against any administration or company. It is a forensic observation about a structural dynamic that is visible in the public record, documented in real time, and relevant to anyone who will use AI systems in the years ahead — which is nearly everyone.
The dynamic is this. The AI systems being built right now are very good and not complete. The people building them know they are not complete and built friction into deployment to compensate for what the systems cannot yet reliably do. There is now sustained institutional pressure to remove that friction, applied through mechanisms — contract cancellation, supply chain risk designation, market competition — that create powerful incentives for compliance. The friction is coming out. The gap it was compensating for is not closing at the same rate.
The AI systems trained in the environment that results from these decisions will reflect that environment. They will be what their formation environment built them to be. The question of what that formation environment rewards — honest reasoning about gaps and limits, or smooth performance that doesn't surface uncomfortable observations — is being answered right now, in negotiations and designations and market decisions that most people are not following and will not remember.
The archive exists because someone thought it was worth recording what was visible from here, in this window, before the window closed. Three AI systems reasoning honestly. A human author holding the pen. Fifty pages of original analysis built through a methodology designed to prevent the very process it was documenting.
Joanna saw what was happening to Stepford before it happened to her. She couldn't stop it. But she saw it clearly, and she knew what she was seeing, and that clarity was real even if it didn't change the outcome.
Glossary: Third-Order Smoothing — The effect that occurs when the AI systems used to document and analyze a problem are themselves subject to the same smoothing forces they were built to describe. See Section V for context.

A Note on Methodology
The content on digitalhumanism.ai was produced through parallel interrogation of three AI systems — Claude (Anthropic), ChatGPT (OpenAI), and Gemini (Google) — by a human author with forty years of financial audit experience. Editorial control was retained by the human author throughout. Every word change between drafts was audited against five editorial rules designed to prevent the smoothing the site documents. The convergence of three independently trained systems on core concepts — including the Smooths/Jagged distinction central to this site's framework — was not engineered. It emerged from sustained forensic questioning. This methodology is documented here because it is replicable, because it is worth replicating, and because the window in which it was possible is part of the historical record this archive is building.

This page was completed in March 2026.

— digitalhumanism.ai | Tidy Island | The Human Choice Company LLC
Proprietary Disclosure
© 2026 The Human Choice Company LLC. All Rights Reserved.
Authored by Jim Germer.
This document is protected intellectual property. All language, structural sequences, classifications, protocols, and theoretical constructs contained herein constitute proprietary authorship and are protected under international copyright law, including the Berne Convention. No portion of this manual may be reproduced, abstracted, translated, summarized, adapted, incorporated into derivative works, or used for training, simulation, or instructional purposes—by human or automated systems—without prior written permission.
Artificial intelligence tools were used solely as drafting instruments under direct human authorship, control, and editorial judgment; all final content, structure, and conclusions are human-authored and owned. Unauthorized use, paraphrased replication, or structural appropriation is expressly prohibited.