Pulled in everything I could from my Twitter and hackers.town archives. Rather than a page for every tweet or toot, I put each of those entries in an Activity Log section of my journal notes. Some links are busted. Some posts are missing images in the archive. I’ll likely redo the whole structure. Thinking about putting the whole thing in Org files or something.
Because I forgot the “setting it up as an app” part that I could’ve just read off of brew info:
osascript -e 'tell application "Finder" to make alias file to posix file "/opt/homebrew/opt/emacs-plus@30/Emacs.app" at posix file "/Applications" with properties {name:"Emacs.app"}'
Also: $PATH needs to be injected into Emacs.app/Contents/Info.plist so GUI-launched Emacs sees the same environment as the shell.
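A minimal sketch of one way to do that injection, assuming plutil’s -insert with an -xml value and the brew-installed app path; editing Info.plist may also call for a re-sign or a Launch Services refresh before it takes:

plutil -insert LSEnvironment \
  -xml "<dict><key>PATH</key><string>$PATH</string></dict>" \
  /opt/homebrew/opt/emacs-plus@30/Emacs.app/Contents/Info.plist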
For emacsclient: brew services start d12frosted/emacs-plus/emacs-plus@30
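With the daemon running, a quick sanity check that it’s reachable (standard emacsclient flags, nothing emacs-plus specific):

emacsclient -c -a ''

-c opens a new GUI frame; -a '' falls back to starting a fresh daemon if none is running.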
Don’t forget about yank-media (built into Emacs 29+) instead of a download / drag-and-drop library.
Follow now lists my current subscriptions. I cheated a bit and pasted the direct HTML export from Fraidyc.at into the Follow page.
Time constraints are a hassle. It works for now.
[2025-09-18 Thu 21:48] Found a Twitter archive
Now is not the time, but I found my Twitter archive from 2020 and I’m thinking about feeding all 1.3GB into my notes, maybe via sqlite_utils or something.
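If it ever is the time, a minimal sketch of the sqlite-utils route, assuming the archive keeps its JSON in data/tweets.js behind Twitter’s window.YTD assignment wrapper (file names have shifted between archive vintages):

sed 's/^window\.YTD\.tweets\.part0 = //' data/tweets.js > tweets.json
sqlite-utils insert twitter.db tweets tweets.json

Simon Willison’s twitter-to-sqlite also has an import command aimed at exactly these archives, which would skip the wrapper-stripping step entirely.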
[Image: Obsidian graph showing 922 pages, with clusters forming as pages get grouped by common links]
Been doing this iteration of a second brain via Obsidian on Random Geekery since 2025-08-15. The steady addition of topic pages makes connections clearer. I split things into more distinct sections rather than the categories I relied on in the plain Markdown sources. Routinely distracted by brilliant ideas, but I keep coming back to wanting consistent structure throughout before veering off in a new direction.
Thought I’d update Neighborhood to reflect my current subscriptions, but Fraidyc.at has updated its export format in the years since I last touched it. I’ll have to poke at that after work. Maybe process the export and put the list in my notes rather than doing it via a Hugo shortcode every time.
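A sketch of that processing, assuming Fraidyc.at’s JSON export is an array of follows with title and url fields; the real schema may differ, and fraidycat-export.json is a placeholder name:

jq -r '.[] | "- [\(.title)](\(.url))"' fraidycat-export.json > follows.md

That spits out a Markdown link list ready to paste into a note.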
The efficiency gains are staggering. SpikingBrain was trained with roughly 150 billion tokens (sort of like “words”) of data, compared to the trillions normally required for comparable LLMs, which means the energy and cost savings are immense. Depending on the comparison, it needs only about 2% of the data to train (by that ratio, the baseline would be roughly 7.5 trillion tokens). The model also runs not just on NVIDIA hardware but on other, less expensive platforms, potentially even down to CPUs.
Woke up with some terrible ideas. Maybe a static site generator in Odin. Definitely some processing that requires a template engine. Should I write my own template engine? Again? Probably not.
The “open web” that Google cares about is just sites that bring them ad revenue. I don’t care about those. This is interesting though:
Google representatives have repeatedly trotted out the claim that Google’s crawlers have seen a 45 percent increase in indexable content since 2023. […] We don’t know what kind of content is in this 45 percent, but given the timeframe cited, AI slop is a safe bet.
Went to sleep with just about enough to make it to payday. Woke up with a negative balance as a bill gets automatically applied. I miss jobs that paid enough to cover all the bills. Also, tired of looking up all my links for every mutual aid post.
Time for a Tip jar I can loudly point at in times like this—and quietly nod my head towards the rest of the time.