The Small Website Discoverability Crisis

Posted: 2021-09-08

There are a lot of small websites on the Internet: interesting websites, beautiful websites, unique websites.

Unfortunately, they are incredibly hard to find. You cannot find them on Google or Reddit, and while you can stumble onto them with my search engine, the stumbling is not especially directed.

It is an unfortunate state of affairs. Even if you do not particularly care about becoming the next big thing, it’s still discouraging to put work into a website and get next to no traffic beyond the usual bots.

You get a dead-sea effect: traffic evaporates, small websites die, and their dying brings even fewer visitors to those that remain. Rinse and repeat.

Blogs limp along through RSS and Atom, but relying on feeds shapes everything you write into a blog entry. It’s stifling, homogenizing. The blogosphere, what remains of it, is incredibly samey.

I feel there ought to be a solution to this, a better way of doing things that can help. Perhaps the Internet as a whole is an irredeemable mess that will never mend, but maybe we can (somehow) make it easier for those who are actually looking to find what they seek.

Maybe there are lessons to be drawn from what works on Gemini, and what doesn’t work on HTTP, that can be synthesized into the sketch of a solution.

Gemini seems to be discovering automatic link feeds (e.g. Antenna), and at Gemini scale they work pretty well. But I’m just going to state it plainly: automatic link feeds do not seem to work on HTTP any more. You end up with a flood of astroturfing, vapid click-bait and blogspam (i.e. Reddit). Stemming that flood demands a ton of moderation, and the results are still dismal.

As a whole, I think centralized and algorithmic approaches are extremely exposed to manipulation when applied to the Internet at large.

Web rings are cute, but I think they are a bit too random to help. Likewise, curated link directories were a thing back when the Internet was in its infancy, but maintaining such a directory is a full-time job.

You could go for some sort of web-of-trust model that only grants trusted submitters access to an automatic link feed, but that practice is exclusionary and creates yet more walled gardens, which impairs the very discoverability I’m trying to improve.

Instead, perhaps there is a much simpler solution.

Simple federated bookmarking

A proposal, dear reader: Create a list of bookmarks linking to websites you find interesting, and publish it for the world to see. You decide what constitutes “interesting”.
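To make the proposal concrete, here is a minimal sketch of what publishing such a list can amount to, assuming nothing fancier than a static HTML page. The filename and the example entries below are hypothetical placeholders of mine; any format you can serve over HTTP works just as well.

    # A minimal sketch: render a list of (title, url) pairs into a
    # static bookmarks page. The entries and the output filename are
    # hypothetical; substitute your own.
    import html

    # You decide what constitutes "interesting".
    bookmarks = [
        ("Example personal site", "https://example.com/"),
        ("Another small website", "https://example.org/"),
    ]

    items = "\n".join(
        f'  <li><a href="{html.escape(url, quote=True)}">{html.escape(title)}</a></li>'
        for title, url in bookmarks
    )

    page = f"""<!DOCTYPE html>
    <html>
    <head><title>Bookmarks</title></head>
    <body>
    <h1>Bookmarks</h1>
    <ul>
    {items}
    </ul>
    </body>
    </html>
    """

    with open("bookmarks.html", "w", encoding="utf-8") as f:
        f.write(page)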

The model is as recursive as it is simple. There is nothing preventing a list of bookmarks from linking to another list of bookmarks.
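To illustrate the recursion, here is a hedged sketch of how a reader (or a simple tool) could follow bookmark lists that themselves link to other bookmark lists, using only the Python standard library. The assumption that lists live at paths like bookmarks.html is mine alone; it is a heuristic for the sketch, not a standard.

    # A sketch of traversing the recursive bookmark model: fetch a
    # bookmark page, collect its links, and recurse into links that
    # look like further bookmark lists.
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collects the href of every anchor tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    def crawl_bookmarks(url, found=None, depth=2):
        """Follow bookmark lists that link to other bookmark lists."""
        if found is None:
            found = set()
        if depth < 0 or url in found:
            return found
        found.add(url)
        try:
            with urlopen(url, timeout=10) as response:
                parser = LinkExtractor()
                parser.feed(response.read().decode("utf-8", errors="replace"))
        except OSError:
            return found  # unreachable hosts are simply skipped
        for link in parser.links:
            absolute = urljoin(url, link)
            # Heuristic (an assumption of this sketch): pages named like
            # bookmark lists are further lists to recurse into; everything
            # else is a leaf discovery.
            if urlparse(absolute).path.endswith(("bookmarks.html", "links.html")):
                crawl_bookmarks(absolute, found, depth - 1)
            else:
                found.add(absolute)
        return found

    if __name__ == "__main__":
        # Hypothetical starting point; substitute any published list.
        for site in sorted(crawl_bookmarks("https://example.com/bookmarks.html")):
            print(site)

The depth limit and the visited set keep the traversal from looping forever, since nothing stops two bookmark lists from linking to each other.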

Creating a bookmark list is a surprisingly fun project; it has some of the appeal of scrapbooking, and the end result is appealing to browse through as well.

It’s a bit strange that almost nobody seems to be doing this. Looking through a sample of personal websites, very few of them have links to other personal websites. A hyperlink isn’t a marriage proposal. It is enough to find some redeeming quality in a website to link to it. It costs nothing, and it helps bring traffic to pages that you yourself think deserve it.

If we actually want these small websites to flourish as a healthy community, we need to promote each other much more than we do. It is advertisement, yes, but in earnest. I like it when other people link to my stuff. What sort of hypocrite would I then be if I only ever linked to my own websites?

Leading by example, I set up my own list of bookmarks:
