Founded MMXXIV · Published When Warranted · Established By W.C. Ellsworth, Editor-in-Chief


SLOPGATE

Published In The Public Interest · Whether The Public Is Interested Or Not

“The spacing between the G and A, and the descent of the A, have been noted. They will not be corrected. — Ed.”



Vol. I · No. IV · Late City Edition · Friday, April 10, 2026 · Price: The Reader's Attention · Nothing More

Front Page · Page 1

Specimen: Screenshot of reporting indicating that OpenAI CEO Sam Altman told U.S. officials in 2017 that China had initiated an AGI crash program comparable to the Manhattan Project, seeking billions in government funding; an intelligence official characterized the claim as salesmanship. Found on r/ChatGPT.

OpenAI Chief Cited Fictitious Chinese Program to Secure Federal Backing, Intelligence Officials Say

In 2017, years before his firm's machinery would flood the internet with the degraded material this paper was founded to catalog, Sam Altman told government officials that China had launched an "AGI Manhattan Project"—a claim dismissed by intelligence professionals as salesmanship.

By Cabot Alden Fenn / News Editor, Slopgate

Filed from WASHINGTON —

The architecture of the present crisis—in which the nation's information commons are daily degraded by machine-generated material of no discernible origin, purpose, or merit—did not arise from a laboratory accident. It was funded. And the funding, as is now a matter of public record, was secured in part through the invocation of a threat that did not exist.

In 2017, Samuel H. Altman, then a co-chairman of the artificial intelligence concern OpenAI and not yet its chief executive, met with United States government officials and informed them that the People's Republic of China had initiated a crash program in artificial general intelligence comparable in scope and ambition to the Manhattan Project. The claim carried the obvious implication: that without commensurate federal investment in American artificial intelligence capability, the nation risked falling behind a strategic adversary in a domain that would determine the balance of power for the remainder of the century.

An intelligence official who reviewed the assertion reached a less dramatic conclusion. China had launched no such program. The claim, this official determined, "was just being used as a sales pitch."

The episode, which has resurfaced through reporting now circulating on the platform Reddit—on the subreddit devoted, with considerable irony, to OpenAI's own flagship product—illuminates not a single act of misrepresentation but a structural logic. It is a logic this newspaper encounters in a different form each week, though it is not often possible to trace the finished product back through the capital structure to the moment of origination. Here, the circuit is unusually complete.

The sequence is as follows. A promoter tells officials that a foreign power is building a machine of unprecedented capability. The officials, who cannot easily verify the claim and cannot afford to ignore it, direct attention and resources accordingly. The promoter's enterprise receives capital—billions of dollars, over the years that follow, from both government-adjacent channels and private markets inflamed by the same narrative of existential competition. The capital funds the construction of enormous computing systems. The computing systems produce output. The output floods every channel of public communication. The output is, in the judgment of this newspaper's reviewers, frequently without merit, frequently without attribution, and frequently indistinguishable from the work of a human author only in the sense that it resembles no human author who has ever had anything to say.

This is the closed loop. Invented urgency produces real capital. Real capital produces real machinery. Real machinery produces the material that arrives, unbidden, in every corner of the information commons—the material that has made a newspaper such as this one necessary.

It would be imprecise to call Mr. Altman's 2017 claim a lie, though the Reddit post from which this newspaper takes its specimen uses the word without hesitation. What can be said with confidence is that the claim was not supported by the intelligence community's own assessment, that it was made in a context where its obvious function was to secure resources, and that the resources were in fact secured. Whether Mr. Altman believed the claim at the time he made it is a question of interior state to which this newspaper does not have access. The external facts are sufficient.

What is remarkable is not that a technology executive exaggerated a competitive threat to attract investment. The history of American enterprise is substantially a history of exaggerated competitive threats attracting investment. What is remarkable is that the investment succeeded so thoroughly that the machinery it built now generates the very conditions—a polluted information environment, a collapse of provenance, a daily avalanche of synthetic production—that make the original exaggeration almost impossible to investigate. The tools built with the capital Mr. Altman sought are now themselves producing the fog through which the public must attempt to evaluate claims like Mr. Altman's.

That the documentation of this episode now circulates on a subreddit operated by the company's own user community is not, in the strict sense, irony. It is something more precise: the system producing, as output, the evidence of its own origins—and being unable, by design, to distinguish that evidence from any other specimen in the feed.

OpenAI did not respond to inquiries regarding Mr. Altman's 2017 representations to government officials. The company has previously stated that it regards artificial general intelligence as a matter of genuine national security concern, a position that is not necessarily incompatible with having overstated the case in order to secure funding, though the two facts sit uneasily beside each other.

This newspaper takes no position on whether artificial general intelligence constitutes a national security concern. This newspaper observes that the slop does.
