THE specimen before us is not itself machine-generated, which makes it more useful than if it were. It is a field report from the demand side of the automated correspondence economy, posted to the r/ChatGPT forum on Reddit by a practitioner who has, with admirable if accidental precision, documented a rate of decay that ought to interest anyone tracking the commercial viability of generative text in professional networking.
The facts, as the author presents them, are these: Three months ago, he began using GPT to compose LinkedIn connection requests and introductory messages. For two weeks, the messages produced responses. Then, using what he describes as the same templates and targeting, they did not. The silence has persisted. He cannot determine whether LinkedIn's algorithm has begun filtering machine-produced text or whether human recipients have simply learned to recognize it. He asks for diagnosis in a forum composed almost entirely of people who use the same tool for the same purpose and have, presumably, encountered the same silence, which they are equally unable to explain.
What the author has measured, without intending to measure anything, is the effective lifespan of a novel channel for automated professional solicitation. Fourteen days. This figure will not surprise anyone who has watched the adoption curve of email marketing, robocalling, or search engine optimization, each of which followed an entirely predictable trajectory: anomalous efficacy produced by the recipient's unfamiliarity with the form, followed by rapid habituation, followed by active or passive filtering, followed by an arms race between sender and gatekeeper that drives the cost of each successful contact upward until it exceeds the value of the contact itself. The author has arrived at the fourth stage. He does not appear to know this.
His parenthetical observation is revealing. He notes that he continues to see "million dollar advice posts written by ai" in his LinkedIn feed, and he perceives an inequity in this—his own machine output is suppressed while others' persists. But the inequity is explicable without recourse to conspiracy. A post that appears in a feed and is scrolled past has achieved visibility at zero marginal cost to the platform; it fills space, generates the impression of activity, is the wallpaper of the professional network. A direct message that consumes a recipient's attention and produces no value is a cost center. LinkedIn has every commercial reason to suppress the latter and no commercial reason to suppress the former. The author has confused two entirely different products—ambient material and directed solicitation—because both were produced by the same tool.
The question he poses, whether the filtering is algorithmic or human, is in practice unanswerable and in theory irrelevant. If LinkedIn has deployed detection, the response rate drops. If recipients have learned the cadence—and the cadence is learnable; it carries the unmistakable fluency of a system that has been asked to sound professional without being told what profession—the response rate drops. If both have occurred simultaneously, as is likely, the response rate drops faster. In all three cases, he is sending messages that no one reads.
What makes the specimen valuable as an economic indicator is the author's proposed remedy. He does not consider writing his own messages. He does not consider whether the problem lies in the substitution of generated text for personal address in a medium whose entire commercial premise is the personal address. He asks, instead, whether anyone has "tested this properly with actual data," by which he means: has anyone found a way to make the machine text work again. He is not questioning the instrument. He is recalibrating it.
This is the demand-side logic of the automated correspondence market in miniature. The tool is not evaluated on whether it produces genuine professional connection—a metric that would require defining what genuine professional connection is, which would in turn require the author to know what he wants from the people he is writing to, beyond the fact of their response. The tool is evaluated on response rate. When the rate was positive, the tool worked. Now the rate is zero, and the tool is broken. The author would like someone to fix it.
No one in the forum will fix it, because no one there can see past the same instrument. They are a market of sellers attempting to diagnose a demand collapse by polling other sellers. The buyers are not in the room. The buyers are on LinkedIn, not responding.
The fourteen-day figure, if it holds across practitioners, suggests that the market for machine-produced professional solicitation on LinkedIn reached saturation roughly eleven weeks ago—three months of use, minus the two weeks in which the messages still worked. What remains is the long tail of users who have not yet noticed, and the shorter tail of users who have noticed and are asking each other why.
The author belongs to the second group. He has the data he wants. He is the data.