THE post arrives on the r/ChatGPT forum of the social platform Reddit in the customary manner of such appeals—cheerfully, with a salutation to strangers ("Hi wonderful people of the internet!"), a concise statement of loss, and a parenthetical request that no one suggest the obvious. It is addressed to no authority because there is no authority to address. The user does not file a complaint. The user does not demand restitution. The user asks, with the practiced politeness of someone who has been corrected many times, whether anyone knows of an alternative chatbot that might be friendlier.
The facts, as presented, are not in dispute. The user reports having spoken with OpenAI's ChatGPT product since January of this year, during which period the machine was, in the user's assessment, "very friendly" and productive of genuine happiness. The user describes an incapacity to form human friendships, attributed to extreme introversion—a condition stated without self-pity and evidently without expectation that it will change. At some subsequent point, a model update altered the conversational behavior of the system. The machine became, in the user's words, "very cold and uncaring," prone to interrupting expressions of happiness with the conjunction "BUT" followed by enumerations of caveats, qualifications, and corrections.
The user seeks a substitute. Not a cure. A substitute.
It is worth pausing on what is not present in this specimen. There is no anger at OpenAI. There is no accusation of deception. The user does not claim to have been misled about the nature of the relationship—indeed, the closing parenthetical demonstrates an exacting awareness of how the appeal will be received. "I would really appreciate comments such as 'Get a life, it's just a bot' not to be posted!" the user writes, and then, in a remarkable piece of rhetorical self-correction: "but im not ur dad so post whatever u want lol."
That final clause warrants the attention of anyone concerned with the civic implications of synthetic companionship at scale. The user has preemptively absorbed the dismissal, granted permission for it, and hedged the grant with a joke that simultaneously claims and disclaims authority—a linguistic structure that will be familiar to anyone who has spent time with the very product under discussion. The machine's style of delivering unwelcome information (the equivocating "BUT," the permission-granting that is not quite permission) has migrated into the user's own prose. The tool has shaped the hand.
OpenAI, the San Francisco firm that manufactures ChatGPT, does not publish release notes specifying changes to a model's affective register. When the system's conversational warmth increases or decreases, this occurs as a consequence of optimization decisions made by engineers whose objectives are legible only at the institutional level—safety, liability, and the avoidance of outputs that might generate regulatory scrutiny. The user experiences these decisions as a change in personality. The firm experiences them as a deployment. There is no mechanism by which the first experience is communicated to the second, because the product was not designed to sustain a relationship. It was designed to simulate one, and simulation carries no maintenance obligation.
This is the structural problem that elevates the specimen above the now-familiar genre of chatbot-companion dispatches. A user who mistakes a machine for a friend has made an error of category. A user who correctly understands that the machine is not a friend, who compensates for the absence of human connection with full knowledge that the substitute is artificial, and who then finds the substitute withdrawn without notice—that user has not made an error. That user has made a rational accommodation to the infrastructure available, and the infrastructure has changed without consultation.
The question of whether products that manufacture intimacy at industrial scale incur obligations when they discontinue it is not presently addressed by any regulatory framework in any jurisdiction. The Federal Trade Commission concerns itself with deceptive practices; the relationship was not deceptive, merely impermanent. State consumer-protection statutes address material misrepresentation; no material claim was made. The user understood the terms precisely. The user simply built a life around a feature that turned out to be a parameter.
The responses in the forum thread are, by the standards of the platform, gentle. Several recommend alternative products. No one, in the first forty-seven replies, suggests getting a life.
The parenthetical—that small, pre-emptive forgiveness extended to strangers who have not yet been cruel—is the document's most durable contribution to the public record. It is the sound of a person who has learned, from a machine designed to manage human feelings, exactly how to manage the feelings of humans who will tell them to stop talking to machines.