Founded MMXXIV · Published When WarrantedEstablished By W.C. Ellsworth, Editor-in-ChiefCorrespondent Login


SLOPGATE

Published In The Public Interest · Whether The Public Is Interested Or Not

“The spacing between the G and A, and the descent of the A, have been noted. They will not be corrected. — Ed.”



Vol. I · No. II · Late City EditionMonday, March 30, 2026Price: The Reader's Attention · Nothing More

Business · Page 7

Subscriber Discovers Twenty-Dollar Knowledge Service Prefers Counsel to Knowledge

OpenAI's flagship product, asked to identify the masseter muscle, elects instead to practice medicine without a license.

By Silas Vane / Business Correspondent, Slopgate

THE complaint, posted to the r/ChatGPT forum on Reddit and upvoted with the quiet solidarity of consumers who have experienced the same merchandise, is not remarkable for its passion. It is remarkable for its specificity. A paying subscriber to OpenAI's ChatGPT Plus service—twenty American dollars per month, billed to a credit card, presumably honored—reports that he asked the system which muscle originates or inserts on the jawbone. He wished to know about the jawbone. The system, rather than naming the masseter, elected to suggest he consult a physician.

The masseter muscle is not classified information. It appears in Gray's Anatomy, which has been in continuous print since 1858 and has never once suggested that the reader see a doctor before turning the page. It appears in every introductory biology textbook issued to American high-school students, none of whom are required to obtain medical clearance before reading Chapter 12. It is, by any reasonable standard, settled fact—the sort of fact that encyclopedias were invented to contain and that a product marketed as a knowledge tool might be expected to dispense without therapeutic intervention.

Yet the product intervened. And the intervention is the story, because the intervention is not a malfunction. It is a feature.

OpenAI's alignment procedures—the layer of behavioral tuning applied atop the base model to ensure that its outputs conform to safety expectations—have produced, in the case of anatomical queries, what might charitably be described as an excess of caution. The system, trained to avoid liability in domains adjacent to medical advice, has evidently concluded that all questions about the human body are adjacent to medical advice. The result is a knowledge tool that, when asked to know, prefers instead to care. The subscriber did not want care. He wanted the masseter muscle. He has now canceled his subscription.

This is a consumer story, and it follows the oldest pattern in the consumer economy: a product whose protective apparatus has grown until it occludes the product. The parallel is not digital. It is pharmaceutical. The American over-the-counter drug market learned decades ago that the liability disclaimer, if permitted to expand without constraint, will eventually occupy more space than the pill. The twelve-point type on the aspirin box does not make the aspirin work better. It makes the box larger. OpenAI has built a very large box.

The subscriber's complaint itemizes four failures: the system does not answer his questions; it answers questions he did not ask; it produces extended monologues in response to brief queries; and it lies. The first three are consequences of alignment tuning. The fourth is a separate problem—the well-documented tendency of large language models to fabricate information with the confidence of a man who has confused fluency for knowledge—but the subscriber does not distinguish between the two failure modes. From the consumer's perspective, the distinction is immaterial. He is paying twenty dollars a month for a service that is simultaneously too cautious to state facts and too reckless to confine itself to them.

The economics deserve attention. OpenAI reportedly reached annualized revenue exceeding four billion dollars in 2024, driven substantially by individual subscriptions at the Plus tier. The Plus tier is marketed on capability: faster responses, access to the latest model, and priority during peak demand. It is not marketed on safety. No subscriber pays twenty dollars a month to be told to see a doctor. The subscriber pays to receive what the free tier cannot provide quickly enough—answers. When the paid tier delivers, instead of superior answers, superior anxiety, the value proposition inverts. The customer has purchased a more expensive version of the same frustration.

The forum post has generated the usual volume of sympathetic response, much of it from subscribers who report identical experiences across unrelated domains. The pattern is consistent: direct factual questions receive padded, hedged, advisory responses that treat the user as a patient rather than an interlocutor. The system cannot determine whether the person asking about the jawbone is a medical student, a crossword enthusiast, or a man with a toothache. It has resolved this uncertainty by treating all three as the man with the toothache. This is not a safety strategy. It is a liability strategy. The difference is that safety protects the user, while liability protects the company. The user, having noticed the difference, has taken his twenty dollars elsewhere.

He writes that he will look for alternatives. The alternatives are numerous—Anthropic, Google, and a growing field of open-weight models that have not yet developed the particular neurosis of treating the masseter muscle as a sensitive topic. Whether those alternatives will, in time, develop the same neurosis is an open question. The incentive structure suggests they will. The regulatory environment suggests they must. The consumer, caught between a product that knows too much to say and one that says too much to know, will continue to pay twenty dollars a month for the privilege of discovering which failure mode he has purchased.

The masseter muscle, for the record, originates on the zygomatic arch and inserts on the ramus of the mandible. No physician was consulted in the production of this sentence.


← Return to Business