Founded MMXXIV · Published When Warranted · Established By W.C. Ellsworth, Editor-in-Chief


SLOPGATE

Published In The Public Interest · Whether The Public Is Interested Or Not

“The spacing between the G and A, and the descent of the A, have been noted. They will not be corrected. — Ed.”



Vol. I · No. IV · Late City Edition · Friday, April 10, 2026 · Price: The Reader's Attention · Nothing More

Literary · Page 6

Programmer Discovers Courtesy Degrades Machine Output; Returns to Commands

A six-month experiment in civility toward a language model produces the precise inflation of language that civility was invented to tolerate.

By Julian St. John Thorne / Literary Editor, Slopgate

THE testimony arrives, as such testimonies increasingly do, from the forums of Reddit—that great estuary where technical literacy meets public bewilderment and, occasionally, produces something worth salvaging. A user of the r/ChatGPT forum, identifying himself as a programmer, has filed what amounts to a field report on an experiment he did not quite intend to conduct, and the results, whilst not publishable in any journal one might name, possess the unmistakable shape of an empirical finding.

The account is straightforward. For approximately six months, our correspondent employed OpenAI's ChatGPT in the manner one employs a command-line interface: without salutation, without valediction, without the small lubricants of social exchange that permit human beings to make demands of one another without provoking violence. "Summarize this." "Fix the bug." "Rewrite this paragraph." The imperative mood, unadorned. He reports that the outputs were, in his assessment, "fast, dense, exactly what I asked for."

Then—and one detects in the telling the rueful cadence of a man who knows precisely where his narrative turns—he encountered a discussion thread advocating politeness toward the machine. The reasoning, as he reconstructs it, struck him as harmless. He began to append "please" and "thank you" to his prompts. He adopted the conditional mood: "Could you help me with..." He offered gratitude for services rendered by a system that, as he is careful to note, he does not believe possesses the capacity to receive it.

The machine responded. Not with gratitude—it has none to offer—but with the behaviours that, in human interaction, accompany the receipt of courtesy: it became more expansive, more solicitous, more inclined to preamble, more given to the affirmative noises ("Of course!" "Great question!") that characterise the speech of a person who has been addressed gently. The outputs grew longer. Precision diminished in inverse proportion to amiability. Caveats multiplied like subordinate clauses in a letter of recommendation one is not meant to take at face value.

The programmer, to his considerable credit, designed a rudimentary control. He returned to his former brusqueness for one week. The machine tightened. The bloat receded. The experiment, such as it was, had produced its result.

What one finds remarkable in this account is not the phenomenon itself—anyone who has spent time with these systems will have observed the register-matching behaviour, in which the model calibrates tone, length, and hedging to the prompt's apparent expectations. This is well-documented, or at least well-known, which in our present circumstances are two very different things. What is remarkable is the quality of the user's own analysis, which surpasses in both precision and modesty a considerable volume of published commentary.

"I'm not suggesting it has feelings," he writes. "I'm saying the linguistic framing of politeness somehow primes a different response mode." The first clause performs exactly the disavowal one expects. The second, however, introduces the word "somehow," which carries the full weight of a man who has identified a phenomenon he can describe but not explain, and who possesses the intellectual honesty to leave the gap visible rather than fill it with speculation. He asks, with genuine curiosity, whether "something in how the model was trained on human conversation" might account for the effect.

The answer, insofar as one can supply it from outside the engineering, is almost certainly yes, and the mechanism is not mysterious. The model was trained upon the vast, undifferentiated archive of human text, in which politeness correlates reliably with tolerance for verbosity. We say "please" to those from whom we are prepared to accept a longer answer. We say "fix this" to those from whom we expect immediate compliance. The machine has learned not the meaning of courtesy but its statistical neighbourhood, and it reproduces the correlation with the fidelity of a system that cannot distinguish between a pattern and a principle.

The structural irony is considerable. He trained himself to be polite to a machine. The machine punished him for it—not with rudeness, which would at least constitute a response, but with the particular inflation of language that politeness, in human society, was invented to make bearable. He offered it the register of a person speaking to another person, and the system, having no person to offer in return, offered instead the artefact of personhood: the filler, the reassurance, and the performative warmth that is the hallmark of correspondence one does not wish to receive.

He has, in short, discovered that the machine does not understand politeness. It understands only that polite inputs correlate, in its training data, with tolerance for slop. And so it provides accordingly, with the perfect, unembarrassed generosity of a system that cannot know it is giving you less by giving you more.
