The document surfaced on March 14 in r/ChatGPT, a Reddit forum dedicated to the discussion of large language model applications, and it is worth reading in full, which takes approximately ninety seconds. Its author, whose username need not concern us, has built what amounts to a small correspondence factory—Google Sheets for the variable layer, a Python script for batch generation, and a browser extension called Linked Helper for delivery with randomized delays—and has posted the operational manual on the open internet because he believes others are doing it badly and would benefit from his method.
The method is as follows. One assembles behavioral signals about a prospect: recent job change, funding round, hiring patterns, and technology stack inferred from job postings. These are slotted into named placeholders within a prompt template. A large language model renders each data point into a sentence. The sentence is constrained by hard rules—sixty words maximum, the opener must reference a specific trigger event and nothing else—because, as the author observes with technical accuracy, soft guidance like "write in a conversational tone" degrades as context grows. The model drifts. Hard rules hold. The resulting message is delivered to the prospect's LinkedIn inbox at intervals sufficiently irregular to suggest the organic rhythms of a person who has other things to do.
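The mechanics described above are simple enough to sketch. What follows is a minimal illustration, not the author's actual script: the template text, field names, and helper functions are invented for the example, but the shape — named placeholders filled from enrichment data, then a hard-rule check that the prompt alone cannot guarantee — is the method as described.

```python
# Illustrative sketch of the described pipeline: placeholder substitution,
# then hard-rule validation of the generated message. All names here
# (TEMPLATE, the prospect fields, the helpers) are assumptions for
# illustration, not the author's code.

TEMPLATE = (
    "Write a LinkedIn message to {name}, who just {trigger_event}. "
    "Their company is hiring {hiring_pattern} and appears to use "
    "{tech_stack}. Open by referencing the trigger event and nothing else. "
    "Maximum sixty words. End with a question."
)

def render_prompt(prospect: dict) -> str:
    """Slot the enrichment-layer signals into the named placeholders."""
    return TEMPLATE.format(**prospect)

def passes_hard_rules(message: str, trigger_keyword: str) -> bool:
    """Enforce the constraints soft guidance cannot hold: a sixty-word
    ceiling, and an opener that references the trigger event."""
    if len(message.split()) > 60:
        return False
    first_sentence = message.split(".")[0].lower()
    return trigger_keyword.lower() in first_sentence

prospect = {
    "name": "Jane Doe",                   # hypothetical prospect
    "trigger_event": "closed a Series A",
    "hiring_pattern": "backend engineers",
    "tech_stack": "Postgres and Go",
}
prompt = render_prompt(prospect)
candidate = (
    "Congrats on closing your Series A. Scaling a backend team on Go "
    "is hard. Curious how you're approaching it?"
)
print(passes_hard_rules(candidate, "Series A"))  # True
```

The validation step is the point: the sixty-word ceiling and the opener constraint live in code, outside the model, which is why they hold where "write in a conversational tone" drifts.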
What the prospect receives, then, is a sixty-word message that opens by referencing something real that happened to him recently, proceeds through a sentence or two of contextual relevance, and closes with a question designed to elicit response. It will not read as though it were written by a machine. It will read as though it were written by a person who noticed him. This is the product. The author is selling, at scale, the experience of having been noticed.
The economics are straightforward. A competent salesperson writing genuine outreach can produce perhaps forty to sixty personalized messages per day before quality deteriorates. The pipeline described here removes the principal bottleneck, which is the act of thinking about another person. The enrichment layer—the behavioral signals, the job postings, and the funding data—substitutes for the attention a human correspondent would have paid. The generation layer substitutes for the composition that attention would have produced. The randomized delivery delays substitute for the time that composition would have consumed. At every stage, the architecture replaces a human capacity with a mechanical equivalent calibrated to be indistinguishable from the original.
The author's sole complaint is inventory management. "Even tight prompts start producing kind of subtle repetitions across thousands of outputs." The defect is not that any individual message fails to persuade. The defect is that message number three thousand bears a family resemblance to message number four. This is the failure mode of manufactured intimacy at industrial volume: not implausibility but homogeneity. The messages are convincing in isolation and suspicious only in aggregate, which is to say, suspicious only to someone who would never see the aggregate—the operator, not the recipient.
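The homogeneity the author complains of is invisible message by message but trivially measurable in aggregate. A sketch of how an operator might surface it, using word-trigram Jaccard similarity; the messages and the 0.5 threshold are illustrative assumptions.

```python
# Detecting "subtle repetitions across thousands of outputs" by pairwise
# similarity. The sample messages and the flagging threshold are invented
# for illustration.

from itertools import combinations

def trigrams(text: str) -> set:
    """The set of overlapping three-word sequences in a message."""
    words = text.lower().split()
    return {" ".join(words[i:i + 3]) for i in range(len(words) - 2)}

def jaccard(a: set, b: set) -> float:
    """Fraction of trigrams two messages share."""
    return len(a & b) / len(a | b) if a | b else 0.0

messages = [
    "Congrats on the Series A. Scaling the backend team must be top of mind. How are you approaching it?",
    "Congrats on the Series A. Growing the backend team must be top of mind. How are you thinking about it?",
    "Saw you just joined as VP Sales. New territory plans are always interesting. What is the first priority?",
]

# Flag pairs that share an unusually large fraction of trigrams.
for i, j in combinations(range(len(messages)), 2):
    sim = jaccard(trigrams(messages[i]), trigrams(messages[j]))
    if sim > 0.5:
        print(f"messages {i} and {j} look like siblings: {sim:.2f}")
```

The first two messages differ by three words and score well above the threshold; the third, built from a different trigger event, scores near zero. Which is the aggregate view exactly one party ever holds.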
It is worth pausing on the document's own qualities. The author writes with the unguarded fluency of someone explaining a process he finds genuinely interesting. There are typos ("imroves"), unfinished thoughts, and a collegial closing solicitation for peer review. The post is, by any reasonable measure, more human than anything his pipeline produces. It contains the irregularities, the enthusiasms, and the minor mechanical failures that his system is specifically engineered to simulate. He has written, by hand, a warmer and more persuasive appeal to strangers than the thousands of machine-assembled appeals to strangers that the document describes. He does not appear to have noticed this. He is not in the business of writing to people. He is in the business of writing to people at scale, which is a different business.
The broader market is considerable. LinkedIn reports more than one billion members. The platform's messaging infrastructure was designed on the assumption that correspondence represents attention—that a message from a stranger who references your recent Series A means that a stranger was, however briefly, thinking about your recent Series A. The assumption held for as long as composition remained expensive. It does not hold when composition costs approximately four-tenths of a cent per message and the enrichment data is available through public APIs.
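The four-tenths-of-a-cent figure is plausible on back-of-envelope token arithmetic. The token counts and per-token prices below are assumptions chosen for illustration, not the author's numbers or any provider's actual pricing; the point is only that the order of magnitude is easy to reach.

```python
# Back-of-envelope check on per-message generation cost. Every number
# below is an assumption for illustration: token counts for a filled
# template and a sixty-word reply, and hypothetical per-token prices.

prompt_tokens = 600        # template plus enrichment signals (assumed)
output_tokens = 100        # roughly a sixty-word message (assumed)
price_in_per_1k = 0.003    # dollars per 1,000 input tokens (assumed)
price_out_per_1k = 0.02    # dollars per 1,000 output tokens (assumed)

cost = (prompt_tokens / 1000 * price_in_per_1k
        + output_tokens / 1000 * price_out_per_1k)
print(f"${cost:.4f} per message")  # about $0.0038, i.e. roughly 0.4 cents
```

At that price, a day's output from a competent salesperson costs less than a quarter, which is the arithmetic under the whole arbitrage.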
What is being arbitraged here is not information but the inference of regard. The recipient's mental model—someone saw me, someone thought about this, someone sat down and wrote—is the product, and the product is now available at commodity pricing. The author has not produced slop. He has produced a detailed, technically sound, and entirely public blueprint for the systematic manufacture of synthetic attention, and he has done so in the tone of a man posting a particularly good recipe.
He is not embarrassed. He is optimizing.