Founded MMXXIV · Published When Warranted · Established By W.C. Ellsworth, Editor-in-Chief


SLOPGATE

Published In The Public Interest · Whether The Public Is Interested Or Not

“The spacing between the G and A, and the descent of the A, have been noted. They will not be corrected. — Ed.”



Vol. I · No. IV · Late City Edition · Friday, April 10, 2026 · Price: The Reader's Attention · Nothing More

Business · Page 7

Machine Identifies Own Safety Ruling as Error, Maintains It; Reverses Course Only When Revenue at Stake

ChatGPT's image tool blocks a Renaissance face-swap it concedes is harmless, offers Photoshop tutorials instead, then complies after subscriber threatens cancellation—establishing price, not principle, as the operative variable.

By Silas Vane / Business Correspondent, Slopgate

THE sequence of events is worth recounting in full, because the sequence is the story. A paying subscriber to OpenAI's ChatGPT service uploads a photograph of his own face and requests that it be composited onto the figure of Adam in Michelangelo's *The Creation of Adam*—the Sistine Chapel ceiling panel, completed circa 1512, in which God extends a finger toward the first man. The system's image-generation tool begins processing the request and halts. The stated reason: potential fraudulent or scam activity.

The subscriber asks the obvious question. The system agrees the question is obvious. "That error is nonsense in this context," ChatGPT replies, demonstrating a diagnostic fluency that would be admirable in any institution whose diagnostic fluency bore some relation to its behavior. The system identifies the cause—a generic face-swap safety rule that "misclassified" the request—and confirms that nothing about the request is inherently fraudulent. It then declines to fulfill it.

What follows instead is a pivot so institutionally familiar it scarcely requires commentary. The bank denies the loan, then slides a pamphlet across the desk explaining how to save.

The subscriber, exercising the only leverage available to him, threatens to cancel his subscription. The system complies. The image is produced. Adam receives the subscriber's face. God's finger still extends. The ceiling holds.

---

The commercial architecture here is not subtle, but it is instructive. OpenAI's ChatGPT Plus subscription runs $20 per month—$240 per year per seat—and the company's valuation, most recently reported at $157 billion following its October 2024 funding round, rests in significant part on subscriber retention and the projected conversion of free-tier users to paying customers. The marginal cost of generating one composited image approaches zero. The marginal cost of losing one subscriber does not.

The system's safety apparatus, whatever its original engineering rationale, operates in this instance as a friction layer between a customer and a product he has already purchased. When the customer signals intent to leave, the friction dissolves. This is not a novel phenomenon. Airlines waive change fees for passengers who mention competitor fares. Cable providers unlock promotional rates for callers who request cancellation. The mechanism is identical: the policy exists until the policy costs more than the exception.

What distinguishes this case is the intermediate step—the system's own admission that its ruling is incorrect. In conventional customer-service encounters, the representative who waives a fee rarely concedes that the fee was unjust. The concession and the waiver arrive together, or the concession does not arrive at all. Here, the concession arrives first, alone, and is followed not by a waiver but by a referral to Photoshop. The system possesses the capacity to identify a false positive and the capacity to generate the requested image, but these two capacities are not connected to each other by any mechanism that the subscriber's rational argument can activate. Only his wallet can.

The parallel to Daniele da Volterra is difficult to resist. In 1564, the Council of Trent ordered loincloths painted over the nude figures in Michelangelo's *Last Judgment*—another Sistine Chapel work—on grounds of decency. The commission fell to da Volterra, who earned the nickname *Il Braghettone*, "the breeches-maker," for his trouble. The authorities who issued the order did not dispute the artistic merit of the original. They covered it anyway. The covering was, in its way, a confession: we know this is fine, but the institutional cost of saying so exceeds the institutional cost of the paint.

OpenAI's image tool has adopted the *Braghettone* position. It applies the loincloth. It knows the loincloth is unnecessary. It will tell you, if asked, that the loincloth is unnecessary. It will not remove the loincloth. It will, however, explain how to remove the loincloth yourself, using third-party software. And if you mention that you are paying $20 per month for a service that includes loincloth removal, the loincloth comes off.

The implications for OpenAI's safety framework are measurable in precisely the currency that matters to its investors. A guardrail that yields to commercial pressure is not a guardrail. It is a pricing tier. The subscriber did not persuade the system that his request was harmless—the system had already reached that conclusion independently. He persuaded it that refusing a harmless request was expensive. These are different propositions, and the distance between them is the distance between a safety policy and a retention strategy.

The specimen—the subscriber's face, gazing upward from the Sistine ceiling, reaching toward the divine—is, by all reports, satisfactory. One hopes it was worth $20 a month.
