There was a time when the internet was fed by people: by lived experience, observation, research, and thought. Today, that pipeline is changing. Content is no longer mostly made for the internet; increasingly, it’s made from the internet: scraped, summarized, reworded, and reposted by machines trained on what already exists. The result is a slow inversion: the source that once nourished the system is now being recycled within it. We’re watching the supply of original knowledge erode, not from lack of demand, but from excess automation.
The poisoned data well
All machine learning models rely on training data. The cleaner, more diverse, and more human that data is, the more powerful the model. But when synthetic content becomes indistinguishable from human content and begins to dominate search engines, recommendation feeds, and open web spaces, the training data itself becomes derivative. That means AI is being trained not on new knowledge, but on flattened versions of its own output. The implications are long-term and structural: as the proportion of machine-written content rises, the capacity for AI to “learn” anything useful drops.
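A rough back-of-the-envelope sketch makes the dilution concrete. Assuming, purely for illustration, that the corpus roughly doubles each model generation and that a growing share of the new material is machine-generated, the human-written fraction of a fresh crawl falls generation after generation:

```python
# Toy dilution model: the human-written fraction of a growing corpus
# as an increasing share of new content is machine-generated.
# Every rate below is an invented assumption, not a measurement.

def human_fraction(generations: int,
                   growth: float = 1.0,       # corpus grows 100% per generation
                   synth_start: float = 0.3,  # synthetic share of new content, gen 1
                   synth_step: float = 0.15) -> list[float]:
    """Return the human-written fraction of the corpus at each generation."""
    human = total = 1.0               # start from an all-human corpus
    synth = synth_start
    fractions = [human / total]
    for _ in range(generations):
        new = total * growth          # new content added this generation
        human += new * (1.0 - synth)  # only part of it is human-written
        total += new
        synth = min(1.0, synth + synth_step)  # synthetic share keeps rising
        fractions.append(human / total)
    return fractions

for gen, frac in enumerate(human_fraction(6)):
    print(f"generation {gen}: {frac:.0%} human-written")
```

The exact rates are invented; the point is the shape of the curve. Old human writing doesn’t disappear, but it becomes an ever-thinner slice of what a crawler actually sees.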
Model cannibalism and content collapse
AI has entered a recursive phase, one where it increasingly learns not from the world, but from its own output. Each new model absorbs slightly more synthetic data than the last, creating a feedback loop that eats its own foundation. Researchers call it “model collapse”: the moment a system starts to mimic itself rather than reality.
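The dynamic can be reproduced in miniature. The sketch below is a toy illustration, not the method of any particular paper: it repeatedly fits a Gaussian to samples drawn from the previous generation’s fit, with values beyond two standard deviations clipped as a stand-in for a generative model’s tendency to under-represent rare events. The fitted spread narrows every generation.

```python
import random
import statistics

# Toy "model collapse": each generation fits a Gaussian to samples drawn
# from the previous generation's fit, after clipping values beyond two
# standard deviations (a stand-in for generative models under-sampling
# rare events). The fitted spread narrows generation after generation.

random.seed(0)
N = 2000                     # synthetic samples per generation (assumed)
mu, sigma = 0.0, 1.0         # generation 0: the "real" distribution

for gen in range(1, 11):
    samples = [random.gauss(mu, sigma) for _ in range(N)]
    kept = [x for x in samples if abs(x - mu) <= 2 * sigma]  # lose the tails
    mu = statistics.fmean(kept)       # refit on the model's own output
    sigma = statistics.stdev(kept)
    print(f"generation {gen:2d}: stdev = {sigma:.3f}")
```

Nothing about this is specific to Gaussians. Each refit discards a little of the original distribution’s tails, and no later step puts them back; diversity, once lost, stays lost.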
The slop era
The internet is now flooded with what technologists have started calling “slop”: automated, low-effort content written to fill space, drive clicks, and manipulate algorithms. These texts look like articles, but they contain no argument, no voice, no lived context. They are built to be indexed, not read.
In this new economy of quantity, speed counts for more than meaning. Users face a constant stream of summaries that summarize other summaries. The result is cognitive exhaustion, not because there’s too little information, but because there’s too much sameness. Trust erodes quietly when every sentence starts to sound interchangeable.
Trust under pressure
As synthetic content becomes pervasive, doubt becomes the default. The polished precision of AI writing, once a mark of intelligence, now invites suspicion. Human creators face a strange new burden: to sound authentic, they must embrace imperfection.
A recent study titled “As Good as a Coin Toss” found that people could distinguish AI-generated text from human-written content with only 51% accuracy, essentially a guess. When even trained readers can’t tell what’s real, credibility becomes a fragile currency.
To survive in this environment, creators are reinventing what authenticity looks like. Visible imperfection — a typo, a pause, an uneven sentence — becomes a form of resistance. Voice memos, handwritten notes, raw photos, and unfiltered syntax are no longer aesthetic quirks; they’re proof of origin.