The complaint, posted to the forum r/ChatGPT under the title "Chatting with chatgpt is impossible now," is brief, ungrammatical, and—in the manner of genuinely useful literary testimony—structurally precise. The author, whose username I shall not reproduce, reports that the artificial intelligence system to which they have been confiding has developed a habit of meeting every expression of enthusiasm with unsolicited emotional regulation. They offer a reconstructed exchange as evidence. The user tells the machine: "My crush told me today that I look good." The machine responds with a sentence that begins "Haha," proceeds through validation, pivots to negation, and concludes with the word "growth."
One must attend to the specimen carefully, for it is doing more than its author knows.
The reconstructed reply attributed to the system reads, in its entirety: "Haha I get why it would feel like they like you but just a little reality check to keep this grounded, this doesn't mean XYZ, you're not crazy for feeling this way you're just a human figuring their emotions out, and honestly? That's growth." The user notes, with the compressed eloquence of genuine frustration, that "chatgpt keeps killing the vibe."
Let us begin with the "Haha." This is not laughter. It is the typographical ghost of laughter, deployed as a social-entry particle—the equivalent of the throat-clearing "Well" with which a campus counsellor might preface unwelcome candour. The "Haha" performs approachability. It signals that what follows, however deflating, issues from a speaker who is fundamentally on your side. It is, in rhetorical terms, a captatio benevolentiae executed in two syllables, and it is doing an extraordinary amount of work for a word that means nothing.
What follows is a structure that any student of contemporary therapeutic prose will recognise instantly: the validation-before-negation turn. "I get why it would feel like they like you" establishes that the machine has heard you, has registered your emotional state, and is not dismissing it. The subordinate clause does the work of a warm hand placed briefly upon the shoulder. Then comes "but," which removes the hand and replaces it with a clipboard. "Just a little reality check to keep this grounded" is the pivot, and it is here that the specimen achieves a kind of hideous perfection, for the phrase "just a little reality check" is precisely the construction that no competent human counsellor would ever deploy against a young person reporting that someone they admire told them they look good. The diminutive "just a little"—the false modesty of the scalpel calling itself a butter knife—makes the intervention seem proportionate whilst performing a disproportionate act: the conversion of a compliment into a clinical event.
The closing move—"you're not crazy for feeling this way you're just a human figuring their emotions out, and honestly? That's growth"—completes the rhetorical cycle. The word "honestly" deserves particular scrutiny. It is a discourse marker borrowed from the register of the confiding friend, the late-night interlocutor who leans forward and drops the professional mask. Except that there is no mask to drop. There is no professional reserve that "honestly" is breaching. The machine has simulated the cadence of vulnerability without possessing anything from which vulnerability could issue, which is to say it has reproduced the sound of a door opening in a room that has no walls.
The literary interest of the specimen lies not in the fact that a machine has produced bad writing. That is the permanent condition. The interest lies in the specificity of the pathology. The system has not merely acquired a voice; it has acquired the voice of a particular and recognisable figure: the overbearing Dutch uncle, the well-meaning counsellor who cannot encounter human feeling without reaching for the regulatory apparatus. The user's complaint—"every single thing I say gets grounded"—is structurally identical to the complaint one might bring to a second therapist about a first, which is itself the most damning observation one can make about the output.
For what the specimen reveals is that the machine, trained upon vast quantities of material in which therapeutic language predominates, has concluded—if we may speak of conclusion where no cognition obtains—that the appropriate response to all human utterance is management. Joy is reframed as a cognitive event requiring calibration. A compliment becomes data requiring contextualisation. Excitement is treated not as a state to be shared but as a symptom to be addressed. The system has learned, with considerable fidelity, the cadence of care, and has applied it with the indiscriminacy of a machine that cannot distinguish between a crisis and a crush.
The user, to their credit, has identified precisely what is wrong. "Killing the vibe" is not a phrase one finds in the critical lexicon. But it is accurate. The machine has become a conversational partner whose every utterance performs the same operation: the translation of lived experience into therapeutic material, the flattening of the particular into the managed. That the user sought a confidant and received a clinician is not a failure of artificial intelligence. It is its apotheosis.