It seems that, from talking to a few people via email, basic feed discovery with rel=alternate links is ineffective. Simple redirects work better for organic discovery.
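For context, the rel=alternate hint is a single line in the page head; a minimal sketch (the /index.xml feed path and the title are assumptions, matching Hugo’s defaults):

```
<link rel="alternate" type="application/rss+xml"
      title="Posts" href="/index.xml">
```

The redirect approach skips the hint and instead answers the paths people and feed readers tend to guess (/feed, /rss, and the like) with a 301 to the real feed.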
The plant disease identification/management handbook by Balaji Aglave is excellent for popular plants. A lot of modern handbooks pad the information with fluff (maybe that’s what’s popular) but this one gets straight to the point. I was very lucky to come across this book a while back.
One aspect of blogging that I don’t like is the unpredictability of an audience’s impressionability. Many people out there read/watch jokes/spam/falsehoods/uncertainties and take them at face value with utmost confidence; I’ve seen it countless times.
Presenting info in a way that prioritizes “critical thinking/reasoning” over an “oracle of truth” is actually hard. I’m almost out of my 20s, and seeing reactions to stuff online makes it feel like I’m still in high school, more so now than ever before. The money gets made somehow :-)
This AI stuff is kinda exciting.. in a “watching danger from afar” kind of way. What kind of feedback loop does it have? Does it finish off the Internet, content-wise? The data has to be huge and ultra fuzzy, so does someone have to add semantics/structure, or is that automatic? The Internet is already gamified to an extent… but can it completely auto-generate videos? What kinds of exploits will be used against the input? So many questions. It’s like the ultimate Frankenstein Pandora’s box thought experiment with the most bizarre outcomes :-)
I’ve got a few repositories on Codeberg and following their blog is pretty fun. The recent post on scaling tickles my risk-averse sensibilities. It’s relatively easy to make/stand up anything, but scaling is mostly uncharted territory. The scale at which the biggest companies operate essentially guarantees HUGE, unique, interconnected systems that are mind-bogglingly convoluted and complex.
The threat of AI to search says more about search than it does about AI.
Bots are ≈80% of traffic, i.e. noise. Kinda funny that, in general, analytics aren’t really needed anymore: 80% noise and 20% signal. 80:20. It’s safe to turn off the computer and head outside; you’re not missing much when not on the Internet (it’s smaller than you’d think, signal-wise).
I’ve since realized that Hugo’s architecture provides a variety of template optimization strategies. Hugo builds pages concurrently, so the difference might be hard to see on a modern device, but even before partialCached or module-mount trickery there’s the implicit complexity of the output/lookup model.
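For the caching side, a minimal sketch; partialCached is Hugo’s built-in, the partial names here are made up:

```
{{/* rendered once per build, result reused everywhere */}}
{{ partialCached "sidebar.html" . }}

{{/* extra arguments form the cache key: one copy per section */}}
{{ partialCached "related.html" . .Section }}
```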
Generally the complexity cost of the default page kinds is: page > term > taxonomy > section > home. Keeping expensive calls inside a section and/or home template is usually optimal. With lots of pages, build time and maybe memory should be the only problems.
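A sketch of why placement matters, assuming a stock layouts/ setup: the home template renders once per build, while a single-page template renders once per content page, so the same query costs one evaluation versus N.

```
{{/* layouts/index.html: evaluated once per build */}}
{{ $recent := first 10 site.RegularPages }}
{{ range $recent }}
  <a href="{{ .RelPermalink }}">{{ .Title }}</a>
{{ end }}

{{/* the same lookup in layouts/_default/single.html would
     run for every page: N evaluations for N pages */}}
```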
The blog linked in a previous post is a gem. Too bad the current site doesn’t appear to have all the archived posts; you need strong search-fu to find them on archive.org.
Reverse pagination is a counter-intuitive strategy for making links immutable/cacheable and bookmark-friendly across older pages. I searched for a visual explanation (it’s difficult to explain concisely) and eventually arrived at an old article on paging. Reverse pagination has its gotchas, but then again pagination itself is one big gotcha.. :-) Well, it depends on the use case really.
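A toy sketch of the idea (my own numbering scheme, not from the article): count pages from the oldest post up, so older page URLs never change and only the newest page mutates.

```go
package main

import "fmt"

// pageOf maps a post's chronological index (0 = oldest) to a
// 1-based page number. Counting from the oldest post freezes
// page 1 forever; appending a new post only ever touches the
// highest-numbered page.
func pageOf(postIndex, pageSize int) int {
	return postIndex/pageSize + 1
}

func main() {
	const pageSize = 10
	// with 25 posts: posts 0-9 -> page 1, 10-19 -> page 2,
	// 20-24 -> page 3 (the only page that still changes)
	for _, i := range []int{0, 9, 10, 24} {
		fmt.Printf("post %d -> page %d\n", i, pageOf(i, pageSize))
	}
}
```

One gotcha is visible immediately: the newest page is usually short, and “page 1” stops meaning “latest”.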