This post is an elaboration on a comment I made on Hacker News recently, on a blog post that showed an increase in volume and decline in quality among the “Show HN” submissions.
I don't actually mind AI-aided development; a tool is a tool and should be used if you find it useful. But I think the vibe-coded Show HN projects are overall pretty boring. They generally don't have a lot of work put into them, which means the author (pilot?) hasn't thought much about the problem space, and so there isn't really much of a discussion to be had.

The cool part about pre-AI Show HN was that you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective.
I feel like this is what AI has done to the programming discussion. It draws in boring people with boring projects who don’t have anything interesting to say about programming.
This isn’t something that is limited to Show HN or even Hacker News, it’s something you see everywhere.
While part of this phenomenon is likely just an upswing of people who don’t usually do programming getting swept up in the fun of building a product, I want to make the argument that it’s much worse than that.
AI makes people boring.
AI models are extremely bad at original thinking, so any thinking that is offloaded to an LLM is usually not very original, even if the model is very good at treating your inputs to the discussion as amazing, genius-level insights.
This may be a feature if you are exploring a topic you are unfamiliar with, but it’s a fatal flaw if you are writing a blog post or designing a product or trying to do some other form of original work.
Some will argue that this is why you need a human in the loop to steer the work and do the high-level thinking. That premise is fundamentally flawed: original ideas are the result of the very work you’re offloading onto LLMs. Having humans in the loop doesn’t make the AI think more like people; it makes the human thought more like AI output.
The way human beings tend to have original ideas is by immersing themselves in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.
Ideas are then further refined when you try to articulate them. This is why we make students write essays. It’s also why we make professors teach undergraduates.
Prompting an AI model is not articulating an idea. You get the output, but in terms of ideation the output is discardable. It’s the work that matters.
You don’t build muscle by using an excavator to lift weights. You don’t produce interesting thoughts by using a GPU to think.