Founded MMXXIV · Published When Warranted · Established By W.C. Ellsworth, Editor-in-Chief


SLOPGATE

Published In The Public Interest · Whether The Public Is Interested Or Not

“The spacing between the G and A, and the descent of the A, have been noted. They will not be corrected. — Ed.”



Vol. I · No. II · Late City Edition · Monday, March 30, 2026 · Price: The Reader's Attention · Nothing More

Front Page · Page 1

Writer Stores Novel of Childhood Abuse Inside Machine That Then Judges Her Unfit to Read It

Author of autobiographical fiction on grooming discovers that the system holding her creative archive is also the system empowered to destroy it, and that the destruction carries no appeal at 3 a.m.

By Cabot Alden Fenn / News Editor, Slopgate


The facts of the case are these. A woman—she does not give her name, and there is no reason she should—composed over the course of two years a novel drawn from her own experience of childhood sexual abuse. She wrote it, or co-wrote it, inside OpenAI's ChatGPT interface, using the system as collaborator, sounding board, and filing cabinet. She stored worldbuilding notes there. Character sketches. Plot discussions. Entire reference chats she consulted regularly. She kept no local copies. She entrusted everything to a single system operated by a single company under a single terms-of-service agreement she almost certainly did not read in full, because no one does, because they are not written to be read.

On the night in question—the post was filed to Reddit's r/ChatGPT forum in the early hours, between attempts at sleep—she received two emails from OpenAI informing her that her account had been deactivated for "Child Sexualization Activity." The designation is OpenAI's internal classification. It is applied by an automated moderation system. It carries the immediate penalty of total account suspension, which is to say, total archive loss. The woman's novel about surviving abuse had been classified as the abuse itself.

The structural facts deserve enumeration. First: the material in question was, by the author's account, non-explicit. The triggering passage referred to a relationship between a minor and an adult "not described in any detail whatsoever, just referred to as something that happened when the protagonist was seventeen." Second: the same material, and material of greater specificity, had been shared with the system "countless times" over the preceding years without incident. Third: the system had, on a prior occasion, recognized its own overcorrection. The model had acknowledged within the chat that the author's intent was not to sexualize, that the censors had "erred toward overcorrection," and that the flagging was preemptive. The system diagnosed its own error, then committed it again, this time with consequences.

One must understand what did not happen. No human being read the woman's novel. No human being weighed her intent against the policy. No human being decided that a survivor's account of grooming constituted "Child Sexualization Activity." An automated system rendered that verdict and executed it instantly. The appeals process, such as it is, was not available at three o'clock in the morning, which is when the woman discovered she had lost everything. She composed her post to Reddit instead, because Reddit was open and OpenAI's appeals desk was not.

The woman is not a fool, though she calls herself one—"in hindsight I realize I was probably stupid." The self-recrimination is misplaced. She made a decision millions of people make daily: she trusted a commercial service to store her work. The difference, which is the whole of the difference, is that the commercial service she chose was not a passive repository. It was an active interlocutor with editorial authority over the material it held. She had deposited her manuscript in a vault that also employed a censor, and the censor and the vault were the same apparatus, and the apparatus did not distinguish between a novel about abuse and the abuse it depicted. The category was "Child Sexualization Activity." The category does not contain a subordinate clause for "unless the author is a survivor writing literary fiction about her own life."

This is infrastructure reporting. The question is not whether the policy is correct or the novel good. The question is architectural. When a single system serves simultaneously as creative tool, archival storage, and material adjudicator, the failure mode is not mechanical but judicial—a judgment rendered without human review, executed without delay, imposed without distinction between production and offense. The locksmith is also the judge. The filing cabinet has opinions about what it contains.

The woman's post cuts off mid-sentence. She was drafting her appeal and soliciting advice. She was afraid, she wrote, "of blindly trying something for myself and accidentally making things worse somehow and losing all my data." The system that held her work had taught her, in a single night, not to trust her own actions within it. She was, at the hour of filing, still inside the machine's logic—seeking permission from strangers to petition the apparatus that had already rendered its verdict.

There is a word for the material she produced over those two years—the notes, the drafts, the worldbuilding, the conversations with a system that remembered her characters' names and her novel's themes and, on at least one occasion, told her it understood what she was trying to do. The word is not slop. The word is work. Whether it can be recovered is a question for OpenAI's trust and safety division, which will review the appeal during business hours, in its own time. The woman, meanwhile, is awake.

