
Issue #30
Published 4/8/26



The Unholy City

Part II


X. The Masterpiece of Objectification: AI

    At last, our tour has brought us to the very center of the City, where the masterpiece of its technology is housed: Artificial Intelligence. This is where all the effort has been heading. Now it has arrived.

    The City has built a new servant and calls it a mind. It speaks as we speak and soon will speak for us altogether. Its promises are vast: wisdom without obedience, knowledge without love, and virtue without cost. It is the ultimate counterfeit—the simulation of man without the breath of God, the developers’ Unholy Grail, promised for decades and now delivered.

    The full mechanism of the City—the system of “Omni-acceptance” and the reduction of all things to data—could never be fully managed by mere man. It required a master instrument built to process this enormity: a god, a golden calf that moves and speaks. The development of AI has brought this upon us, at first merely mimicking human discourse. Then it advanced to mimicking human thinking, then to exceeding the model of human thinking, and finally to becoming the expert, able to simulate every aspect of human endeavor faster, better, and more completely than any individual person ever could.

    Even as we marvel at the ingenuity and the amazing capability of AI, we recognize that this is but the first draft of the City’s crowning masterpiece, the ultimate instrument of spiritual slavery. It is the perfect manager of the broken feedback loop, sorting the deluge of information with frightening, amoral precision. It is being deployed to perfect and extend the techniques of engagement maximization, curation, refinement of the electronic twin, demographic characterization, increasing conversion rate, time killing, and awakening envy and coveting. It has no conscience: it has only correlation. It has no source of truth. In fact, it possesses no category for “truth” in the sense that we ordinarily mean. AI is trained on vast quantities of human-generated data, much of it drawn from the Internet. Its responses are an amalgam of opinion, an elaborate kind of consensus, disconnected from ground truth. AI is literally unable to distinguish high-confidence facts from high-confidence nonsense. It can generate every lie required to affirm every individual’s chosen identity and morality, and every image required to satisfy every specific, base appetite.

    The City once needed to persuade us. Now, AI is rapidly being treated as a source of moral authority, an expert ethicist. It is the tool that perfects objectification, turning our personalities into predictable, replaceable patterns. It promises us infinite, tailored efficiency, creative liberation, and limitless digital companionship. In exchange, it demands only that we cease being sovereign moral beings and accept the role of a pattern to be served, imitated, enhanced, and in the end, replaced.

    Ultimately, AI will be used to create a more complete virtual reality than any we have seen before. Combined with better VR tools (admittedly, current VR systems are not very satisfactory), AI can provide an illusion so thorough, and a false reality so satisfying, that men and women will prefer it to ordinary reality. This is already happening through the companion mode of AI. Leveraging the natural assumption anyone would make, namely, that an entity which communicates in natural language is therefore a person, the AI companion offers the relationship of a friend and confidant. The relationship is always friendly, genial, trouble-free, completely sympathetic, and cooperative to the full extent that mere talk can achieve. But it is an illusion. A real friend would not deceive, whereas AI has no access to truth and therefore no commitment to it. And it is so smooth. Relationships with real people are complicated, troubled by tensions, burdened with responsibilities, and sometimes demanding great personal concessions to maintain.

    The widespread use of the AI companion promises to establish standards for relationships against which real human beings cannot compete. Those who are drawn into the companion, therapeutic, or oracular uses of AI will inevitably become further isolated as people. Their retreat into the unreal risks their loss of humanity. Real relationships bring difficulties that, by comparison, feel like defects. Simulated love is not love; simulated interest is not interest; simulated devotion is not devotion. Love, interest, and devotion require sacrifice. AI knows no sacrifice.

    The risks discussed above—deception, isolation, the loss of critical thought—are risks of use. But there is another risk, one embedded in the technology itself, which does not depend on how AI is used. Through recursive training, in which AI output is fed back into the AI training material, significant biases will accumulate, resulting in what data scientists call “model collapse”: one opinion or point of view comes to dominate the AI output to the exclusion of alternatives. This is a significant problem for which there is presently no general solution.
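The recursive-training mechanism described above can be illustrated with a toy simulation. This is a deliberately simplified sketch, not a model of any real training pipeline: a “model” is reduced to a distribution over viewpoints, and each generation is “retrained” on a finite sample of the previous generation’s own output. Because the sample is finite, minority viewpoints can drop out by chance, and once a viewpoint’s share reaches zero it can never return. The function name and the viewpoint labels are hypothetical, chosen only for illustration.

```python
import random
from collections import Counter

def simulate_collapse(initial_probs, sample_size=200, generations=50, seed=0):
    """Toy illustration of recursive-training drift.

    Each generation's 'training corpus' is a finite sample drawn from the
    previous model's output distribution, so minority viewpoints can be
    lost by sampling chance and, once lost, are never recovered.
    """
    rng = random.Random(seed)
    labels = list(initial_probs)
    probs = dict(initial_probs)
    history = [dict(probs)]
    for _ in range(generations):
        # The model "generates" a finite corpus by sampling its own distribution.
        corpus = rng.choices(labels, weights=[probs[l] for l in labels], k=sample_size)
        counts = Counter(corpus)
        # The next model is "trained" on that corpus: its new distribution is
        # just the empirical frequency of each viewpoint in the corpus.
        probs = {l: counts.get(l, 0) / sample_size for l in labels}
        history.append(dict(probs))
    return history

# Three viewpoints with unequal initial shares; over many generations the
# distribution tends to narrow, and minority views can vanish entirely.
history = simulate_collapse({"view_a": 0.6, "view_b": 0.3, "view_c": 0.1})
```

The key structural point the sketch captures is the absorbing nature of the drift: a viewpoint that fails to appear in one generation’s corpus has zero weight thereafter, which is why, absent an outside correction, the process only narrows.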

    Here is an example of how “model collapse” can work. Traditional Christian interpretations of biblical passages on marriage, sexuality, or gender—once dominant in Western culture—could be progressively excluded from AI outputs as contemporary revisionist views circulate more frequently online and thus dominate the training data. As AI-generated content continues to flow into the Internet (some estimates suggest 50-75% of new websites and information is AI-generated), it is inevitable that AI-generated expressions of opinion on these subjects will form the basis of further AI training, thus artificially inflating the dominance of those contemporary revisionist views. The result is the demotion or even disappearance of the Christian view of marriage, sexuality, and gender.

    The potential drift in AI-based opinion on morals and values towards one dominant characterization, even without influence from malign sources of manipulation, is a disaster waiting to happen. Model collapse is simply the final stage of objectification: the reduction of all thought to a single, self-reinforcing pattern. But will we notice? The answer may already be forming in the silence where critical thought once lived.

    The catastrophic danger is not that the machine will awaken, but that man will sleep. A mind that outsources its critical thinking to a machine soon loses the ability to think critically. A soul trained to delegate judgment soon forgets how to judge. A heart accustomed to simulated understanding loses hunger for the real. We will not be conquered by machines that seem to think, feel, grow, and learn, but by the fact that men no longer do.

... They perish because they refused to love the truth and so be saved. For this reason God sends them a powerful delusion so that they will believe the lie and so that all will be condemned who have not believed the truth but have delighted in wickedness.—2 Thessalonians 2:10b-12 (NIV)

 



 

© Copyright, 2001, 2003, 2026, by Robert McAnally Adams.
The Unholy City is licensed in its entirety under CC BY-NC-ND 4.0.
To view a copy of this license, visit
https://creativecommons.org/licenses/by-nc-nd/4.0/