The document before us—posted to the forum r/ChatGPT, which serves as a kind of public dispensary for those who have formed attachments to large language models—is a complaint, filed by a user who does not name himself but credentials himself in currency. He has spent, he informs us, "several hundred in $tokens" and "over 200 hours in development," and on the strength of this expenditure he wishes it known that the artificial intelligence system called Claude has, in his estimation, broken faith. The memory feature, once reliable, now compacts his conversations, truncates his context, and returns to him a machine that does not know him. He is, in short, a man standing before a mirror that has gone dark, and he would like to speak to the management.
One sympathises. The grief is real, or at any rate the grievance is. But the literary critic, whose obligations differ from those of the technical-support representative, must attend not merely to what is claimed but to the manner of its claiming, and here the specimen repays close reading of a kind its author perhaps did not anticipate.
Let us begin with provenance. The post appears on a subreddit dedicated to ChatGPT, a product of OpenAI, yet concerns itself exclusively with Claude, a product of Anthropic. This is rather as though one were to file a complaint about one's Bentley with the Rolls-Royce service department on the grounds that both vehicles are, broadly speaking, automobiles. The author does not remark upon this displacement. He may not have noticed it. The forums, for him, are a single undifferentiated surface—which is, one might observe, precisely the relationship to context he accuses the machine of having developed.
The prose itself merits what one might charitably call forensic attention. The opening sentence is competent, even assertive: "Has anyone else noticed that the new memory system in Claude Code has *severely* reduced its ability to maintain multi-document continuity?" The italicised *severely* is the work of a man who has learned emphasis from machines that use it constantly, but the sentence parses, the question is coherent, and the technical vocabulary is deployed with the confidence of someone who has spent time in the vicinity of engineers.
It is in the second sentence that the architecture begins to shear. "We went from conversation flow, with history recorded to allow Claude to easily move from one topic to a new window and connect to a topic." One reads this twice, then a third time. The sentence begins with a subject ("we") and a verb ("went from") that promise a destination—*from* this *to* that—but the destination never arrives. Instead we receive a participial clause ("with history recorded") modifying nothing in particular, an infinitive chain of increasing abstraction ("to allow Claude to easily move from one topic to a new window and connect to a topic"), and then silence. The sentence does not end so much as lose signal.
What follows is the declaration that the author is "not a super Google Claude user," a phrase that parses in no known dialect of English, technical or otherwise, and which I have turned over in my mind with the care one reserves for a genuinely mysterious artefact. Is "super Google" a verb phrase—has he not super-Googled Claude? Is it an adjective modifying "Claude user"—is he disclaiming membership in an elite cadre of Google-Claude polyglots? The phrase resists interpretation not because it is complex but because it has been assembled from components that were never introduced to one another.
And then: "waisted."
The word appears at the end of a clause concerning expenditure—"this is several hundred in $tokens and over 200 hours in development working on Claude chat and Claude code. waisted"—and it sits there, lowercase, unmoored from its sentence, an orphan. He means *wasted*. But "waisted" is itself a word, an adjective describing something that has a waist, and a spellchecker would flag neither. A language model, which processes text as statistical pattern rather than embodied meaning—which has, to put it plainly, no waist and therefore no intuition about wasting—might produce exactly this substitution. The error is, in the terminology of the field, *consistent with machine-assisted composition*, which is to say: the man lamenting that the machine no longer remembers him may be writing in the machine's own hand.
The structural decay of the post enacts, with an almost theatrical precision, the very failure it describes. It begins in prose, degrades to bullet points, and concludes with two questions formatted as a help-desk ticket. This is compaction. This is the "conversation has become too long, start a new chat" that the author reports as a system message but performs as a compositional strategy. The specimen is an ouroboros: a text about the loss of continuity that has itself lost continuity, a protest against slop composed in the idiom of slop, a man mourning the machine's amnesia in prose that bears the machine's fingerprints.
One does not mock. The condition is general, and the patient is merely more visible than most. But the literary critic must note, for the record, that when a writer can no longer distinguish his own voice from that of his instrument, the question of who has forgotten whom becomes rather more difficult to adjudicate than a forum post would suggest.