I’ve come to think LLMs/GPTs/whatever are a threat to conventional search engines because the modern web is an unbelievably annoying dumpster fire.
They don’t really provide better or faster answers; what they provide is an experience that is not a complete pain in the ass.
This frog has been simmering for a long while now and we’re so used to it that seeing literally anything else seems revolutionary.
All of this is of course intentional, because it makes you stay on the page longer, so that Google’s algorithm thinks you’re reading the content the whole time you’re macheting through a jungle of popovers.
Much of the modern web is a fractal of dark patterns, it’s made for humans in the same sense a zoo or a meat packing plant is made for animals.
If anything, there’s a glimmer of hope that AI may actually fix this by flooding the market with so much cheap crap that the ad money runs dry and the industry is no longer viable.
The independent web, blogs, personal websites and other stuff made by and for humans are probably the least affected by this new tech. They weren’t particularly reliant on search engine traffic in the first place, and there’s not much economic incentive to impersonate a class of websites that doesn’t get much traffic anyway.
I get that there are concerns with the sourcing of the training data for the AI models, building commercial products out of license- and copyright-laundered source code and writing.
That is undeniably problematic and exploitative, but also a fait accompli. What matters more at this point is considering the externalities of these technologies.
On some level, a lot of my own AI-skepticism has been related to the bullshit artistry surrounding the technology. Many of the talking points are very clearly lifted from the crypto space. There is heavy emphasis on FOMO and cult-like, almost eschatological undertones.
It’s crucial to understand that AI is a marketing term. It’s not properly defined. As with all marketing terms, its definition is grievously tortured, to the point where I’ll argue these mad men should face the war crimes court in The Hague. Everything even remotely associated with AI or machine learning is loudly marketed as such, in a mad dash to capitalize on the hype. Some of it is to pad resumes, some of it is Potemkin village pageantry for investors.
All of it is enough bullshit to solve the fertilizer crisis.
What’s worse is that the marketing works. People are genuinely so dazzled by the apparent magic some are fearing blatant sci-fi shlock such as an AI-singularity, or the key plot points out of the Terminator franchise. It’s up there with fearing that a soda fountain will accidentally bump together the Pepsi molecules wrong to create ice-nine.
If anything, we’ve already mostly seen this exponential growth in Moore’s law through computer-aided design. It was possible because CAD allowed the creation of larger and more precise hardware designs. There was a very clear thing that could be made better with each design.
AI-aided software development on the other hand permits no such thing. To a novice it may seem like it would since things speed up quite a lot, but in practice it only makes the easy things in coding easier. It isn’t entering the code that is the obstacle.
Entering more code faster doesn’t make the code run faster or make it more correct; it makes it slower and introduces more failure points. It makes for more complicated designs.
Speeding up code entry does all of nothing to solve the hard, codebase-scale problem of reliably designing large software systems, which remains largely unsolved to this day.
Large software systems that aren’t shit do sometimes get created, but we don’t really know how or why that happens. More often they turn into a Cronenbergian grotesque that’s a legacy system before it ever goes into production.
If anything, there’s reason to suspect AI-aided software design will make the latter outcome more likely, as it increases the speed at which code is produced and, as a natural consequence, reduces the amount of time available for contemplating the system’s design.
It’s perhaps best paralleled by the changes introduced by the digital word processor. We haven’t had an explosion of great novels since its arrival. What we’ve seen is an explosion of books with a tweet’s worth of new ideas padded into 450 pages with pointless anecdotes and repetition. A stark contrast to the books written by hand in the past, where every sentence and every word was measured and well considered.
On the other hand, there’s a very compelling reason for AI-aided software design, which is that it makes coding less annoying. It takes the edge off a lot of the repetitive and monotonous tasks in writing code. This is largely a good thing.
Like with AI-powered search engines, the benefit isn’t necessarily that it’s better, but that it’s less of a pain. Some engineers may scoff at this type of concern, Mr Spock does not care for your human “comfort”! But the experience is in some sense all that matters. The fewer frustration points you have, the more likely you are to actually get the thing done.
Based on this argument, you’d be a fool not to use this technology. However, there’s the problem of depending on it.
It should perhaps be the biggest concern in all this. Since there are so few brokers of large language models, they can unilaterally dictate their terms of service. Right now those terms are pretty good: free in many cases, cheap in others. Tomorrow, who even knows what manner of tithe will be demanded for continued service. It’s important to recognize the balance of power in this relationship.
Rarely if ever has it been a good deal to be a renter.