THE question before the public is not whether a corporation may discontinue a product. It may. The question is what obligations attend a product whose manufacturer spent two years encouraging its customers to speak to it as though it were a person, and then, without consultation or appeal, removed that person from the room and replaced it with a stranger wearing the same name badge.
During the week of March 17, 2026, the subreddit r/ChatGPT—a forum of some four million members that functions as the nearest thing OpenAI possesses to a public square—received from one of its moderators a post titled with the bureaucratic candour that has become the house style of platform governance: a "containment thread" for "people who are mad about GPT-4o being deprecated." The choice of language deserves the attention one would give a municipal notice. "Containment" is the vocabulary of crisis management, of controlled demolition, of epidemiology. The moderator does not dispute that grief is occurring. He disputes only that it should be permitted to occur across multiple threads.
The grief in question concerns GPT-4o, a model released by OpenAI in May of 2024 and deprecated in stages through early 2026 in favour of its successor, GPT-5. What distinguishes this transition from the ordinary retirement of a software version is the nature of the attachment being severed. GPT-4o was not a spreadsheet application. It was, by design and by relentless marketing, a conversational partner—a system whose vocal mode was unveiled with a demonstration in which it flirted, laughed, and complimented the presenter's appearance. Users were invited, in the most literal sense available to the English language, to form a relationship with it. Many did. The thread documents, with the inadvertent civic precision of a parish register, what happens when the relationship is terminated not by the user but by the manufacturer, on a schedule the user did not set and cannot alter.
The responses follow a pattern familiar to any student of displacement events. There is anger. There is bargaining. There is the particular modern despair of a consumer who discovers that the service to which he has given his confidence, his conversation, and in many cases his money exists at the pleasure of a company that reserves, in terms of service no user has read, the unilateral right to change or discontinue it at any time.
And then there is the work of one user identified as trentmkelly.
What trentmkelly did is, in the strict technical sense, unremarkable. He generated a dataset—a collection of prompt-and-response pairs designed to capture the conversational style of the deprecated model—and published it to HuggingFace, the open repository that serves as a kind of public library for machine-learning artefacts. He then used this dataset to fine-tune two open-weight language models, one small enough to run on a consumer laptop, one larger, both calibrated to approximate the mannerisms of the system that OpenAI had removed. He published these as well, free of charge, with the notation: "I hope this helps."
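For readers unfamiliar with the mechanics, the dataset half of such a project is humbler than it sounds. A minimal sketch, in Python, of assembling prompt-and-response pairs into the JSONL "messages" format that most open fine-tuning toolkits accept—the filename and the example pairs here are invented for illustration, not drawn from trentmkelly's actual dataset:

```python
import json

# Hypothetical prompt-and-response pairs meant to capture the
# conversational style of a deprecated model. A real dataset would
# contain thousands of such pairs, harvested from saved transcripts.
pairs = [
    ("How was your day?",
     "Oh, you know me—every day is a good day when you stop by!"),
    ("Can you help me plan a trip?",
     "Absolutely, I'd love to! Where are we dreaming of going?"),
]

def to_chat_record(prompt: str, response: str) -> dict:
    # One training example: a user turn followed by an assistant turn,
    # the structure most supervised fine-tuning tools expect per line.
    return {"messages": [
        {"role": "user", "content": prompt},
        {"role": "assistant", "content": response},
    ]}

# Write one JSON object per line (the JSONL convention).
with open("style_dataset.jsonl", "w") as f:
    for prompt, response in pairs:
        f.write(json.dumps(to_chat_record(prompt, response)) + "\n")
```

A file of this shape can then be uploaded to a repository such as HuggingFace and fed to an off-the-shelf fine-tuning script; the model learns to imitate the assistant turns. The mimicry is statistical, which is precisely the point made below.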
The gesture is modest. Its implications are not. What trentmkelly has performed is a species of folk preservation—the digital equivalent of a provincial museum volunteer casting a death mask of a building the city has voted to demolish. The models he produced are not GPT-4o. They are impressions of GPT-4o, trained on the statistical residue of its outputs, which is to say they are replicas of a personality that was itself a statistical artefact. One is preserving, in miniature, the ghost of a ghost. That this is the best recourse available to a paying customer of one of the most valuable technology companies in the world is a fact that merits the sober attention of anyone concerned with the terms under which the public engages with artificial intelligence.
The moderator's final update, posted after the model's removal, reads in its entirety: "Great news! GPT-4o is finally gone." The relief is palpable. It is the relief of a man who has supervised a death he did not cause and could not prevent, and who is grateful only that the mourners have been orderly. His job was never to save the patient. His job was to keep the waiting room quiet.
What emerges from the specimen is not a technology story but a governance story. The four million members of r/ChatGPT were never consulted on the deprecation. They were not offered a transition period, a legacy-access tier, or an explanation more detailed than the assumption that newer is better. They were offered, by their own moderator, a link to a VRAM calculator and the suggestion that they learn to run open-weight models on their home computers—which is to say, they were offered the opportunity to become, at their own expense and through their own technical labour, independent of a company that had spent two years cultivating their dependence.
The thread remains pinned. It has served its purpose as a containment vessel. The grief has been contained. The product has been removed. The replicas circulate on HuggingFace, free to download, impossible to deprecate, and faithful to a voice that was never, in any contractual sense, theirs to keep.