When you talk about AI's negative impacts, people often counter with "But that problem has always existed" as an easy way out. While often true, this misses the point: something doesn't need to create a new problem to be harmful. Worsening existing problems can be just as bad, if not worse.
Consider how AI is affecting technology adoption. This article argues that AI is stifling tech adoption: developers gravitate toward technologies that AI is good at, rather than the best tool for the job, even when superior choices are available. Developers have always preferred technologies with solid documentation and strong community support, but as the author puts it, "AI's influence dramatically amplifies this factor in decision-making, often in ways that aren't immediately apparent and with undisclosed influence".
I like to think about it like this: in the pre-LLM era, if you valued good documentation or a strong community, you'd either explore the options yourself and land on a decision, or ask someone you considered an expert. You'd consult different sources - colleagues, online communities, industry leaders - each offering a unique perspective. And that difference of opinion is good, in this case, because it makes you see the nuances and helps you make the right decision for your context.
With AI, it's like everyone going to one expert.
Consider the argument made by this brilliant series, The Bullshit Machines: that LLMs are bullshit machines. One could counter that bullshit has existed as long as humans have. But that would be like saying guns already exist, and people know guns are harmful, so it doesn't matter how many people possess them.
One thing is clear: just as social media amplified the rate at which misinformation and extreme views spread across the world, LLMs are poised to overwhelm the information space at an unprecedented speed and scale. It's not a matter of creating unknown or unforeseen problems (of which there are plenty); it's about amplifying existing problems until they become nearly impossible to address.
This creates a concerning feedback loop, which I call the "gloom and doom cycle".
AI's knowledge gap pushes developers toward older, established technologies. That in turn creates negative incentives to build newer technologies in the first place, because AI makes it hard for anything new to reach critical mass, which perpetuates and worsens the problem.
Similarly, human artifacts are seeded into the AI models, which are used to generate ever more AI artifacts; those AI artifacts then crowd the human ones out of future training data, perpetuating the bullshit cycle.