The complaint arrives on the forum r/ChatGPT with the unadorned simplicity of a man writing to the municipal water authority to report that his taps have begun dispensing the same water twice. "Does anyone have any good prompts or instructions," the poster begins, "to prevent GPT from using past images as seeds for newly generated ones within the same chat." The sentence lacks its terminal question mark—whether by oversight or by the quiet resignation of one who suspects the answer is no—and proceeds into an account of mechanical fixation that would be familiar to any reader of the later case studies, though in those volumes the fixation belonged to the patient rather than the instrument.
The difficulty, as the poster describes it, is this: once the image-generation apparatus has produced a single artefact, it treats that artefact not as a completed commission but as a thesis statement from which all subsequent productions must follow. The machine does not begin again. It elaborates. It refines. It circles. "Unless you tell it to generate a COMPLETELY separate image," the poster writes—the capitalisation here performing the very desperation that the prompt itself cannot convey to the system—"it always just takes the last one and tries to modify it which most of the time doesn't end up how I want it."
One ought to pause at the architectural fact embedded in this lament. The feature that permits the machine to sustain a coherent conversation across dozens of exchanges—what the engineers call the context window, that running transcript of everything said and generated within a single session—is precisely the feature that produces the pathology. The machine remembers because remembering is what it was built to do. Its memory is not selective; it does not distinguish between the remark one wishes to have recalled and the production one wishes to have forgotten. Every generation contaminates every subsequent generation not as a defect but as a consequence of the system's most celebrated capacity. The poster has discovered, in other words, that the tool's virtue and the tool's vice are the same mechanism, viewed from different angles of need.
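The mechanism the paragraph describes can be reduced to a few lines. What follows is a minimal sketch, not any real system's code: the session and its transcript are simulated so that the accumulation is visible. The point is structural, and the structure is an append-only list in which every prior output sits alongside the latest request.

```python
def simulate_session(requests):
    """Return the transcript left behind by a series of image requests.

    The context window is modelled as an append-only transcript: every
    request AND every prior output remains present, so the generator's
    working memory for request N contains outputs 1..N-1. This is the
    'contamination' described above, reproduced in miniature.
    """
    transcript = []
    for i, req in enumerate(requests, start=1):
        transcript.append(("user", req))
        # The generator sees the whole transcript, including its own
        # earlier productions -- it cannot help but condition on them.
        prior_outputs = [m for role, m in transcript if role == "assistant"]
        output = f"image_{i} (conditioned on {len(prior_outputs)} prior outputs)"
        transcript.append(("assistant", output))
    return transcript

log = simulate_session(["a red fox", "a lighthouse", "a steam engine"])
```

By the third request, the simulated generator is conditioning on two earlier productions, and nothing the third request says can remove them: appending is the only operation the transcript supports.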
This is, if one permits a literary analogy, the condition of the novelist who has written a successful first chapter and finds that every subsequent chapter insists on being a revision of the first. The problem is not one of talent or even of will but of a certain gravitational pull exerted by the existing material upon the imagination that produced it. Harold Bloom wrote extensively on the anxiety of influence as it operates between writers; what the poster has documented, with an economy that Bloom himself rarely achieved, is the anxiety of influence as it operates within a single authorial apparatus across the span of minutes. The machine is not influenced by its predecessors. It is influenced by itself. It has become its own Harold Bloom and its own ephebe simultaneously, caught in a dialectic from which no strong misreading can extract it, because it is incapable of misreading at all.
"I've tried various things and nothing has worked," the poster continues, "but maybe my instructions aren't good enough." Here the document reaches its most revealing passage, for the operator has located the failure not in the system but in his own rhetorical insufficiency. He believes—must believe, if he is to continue using the apparatus—that somewhere there exists a formulation of sufficient precision and authority to break the cycle. The correct prompt. The master instruction. The sentence so perfectly constructed that the machine will at last consent to amnesia. This is the faith of the humanist transferred wholesale to the engineering context: that language, properly deployed, can compel any interlocutor to attend, to understand, and to obey. It is touching. It is, in the current circumstances, almost certainly wrong.
For the difficulty is not linguistic but structural. The machine does not persist in its self-reference because it has misunderstood the operator's wishes. It persists because the architectural decision to maintain conversational continuity—a decision made not by the operator but by engineers at a considerable remove from this particular forum post—ensures that every prior output remains present in the system's working memory with the same weight and prominence as the operator's latest instruction. One cannot, by force of prose, persuade a filing cabinet to lose a document. One can only open a new drawer.
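The filing-cabinet point admits an equally small sketch. Again this is a simulation under stated assumptions, not a real chat API: `Session` is a hypothetical stand-in for one conversation, and "seeding" is reduced to a count of earlier outputs. No prompt, however emphatic, alters the transcript already in the drawer; only constructing a new `Session` does.

```python
class Session:
    """A hypothetical chat session: one drawer of the filing cabinet."""

    def __init__(self):
        self.transcript = []  # append-only; nothing in it can be unsaid

    def generate(self, prompt):
        self.transcript.append(("user", prompt))
        # Count the earlier outputs this generation will be seeded by.
        seen = sum(1 for role, _ in self.transcript if role == "assistant")
        out = f"image seeded by {seen} earlier outputs"
        self.transcript.append(("assistant", out))
        return out

old = Session()
old.generate("a red fox")
# Capitalised insistence changes the prompt, not the transcript:
second = old.generate("a COMPLETELY separate image of a lighthouse")

fresh = Session()  # the new drawer
clean = fresh.generate("a lighthouse")
```

The second request in the old session is still seeded by one earlier output; the same request in a fresh session is seeded by none. The remedy is constructional, not rhetorical.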
The poster's closing emoticon—":)"—deserves a word. It is the smiley face of a man who has been courteous to a mechanism that cannot register courtesy, who has exhausted his rhetorical resources against an interlocutor that does not know it is being addressed, and who nonetheless maintains the social contract of the forum: please, thank you, and the small upturned parenthesis of goodwill. It is the most human gesture in the document, which is to say it is the only one the machine will not reproduce in its next output, having no use for it.