2025-09-16
[2025-09-16 Tue 05:35]
Spent my weekend puttering with some Logseq ideas. Meanwhile…
SpikingBrain Technical Report: Spiking Brain-inspired Large Models via Curtis Poe, who says:
The efficiency gains are staggering. SpikingBrain was trained with roughly 150 billion tokens (sort of like “words”) of data, compared to the trillions normally required for comparable LLMs, which means the energy and cost savings are immense. Depending on the comparison, it needs only about 2% of the data to train. The model also runs not just on NVIDIA hardware but on other, less expensive platforms, potentially even down to CPUs.
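As a rough sanity check of that “about 2%” figure, here is the back-of-envelope arithmetic; the baseline of roughly 7.5 trillion tokens is my own assumption for illustration, not a number from the report, and real baselines vary by model:

```python
# Back-of-envelope check of the "about 2%" training-data claim.
spikingbrain_tokens = 150e9   # ~150 billion tokens, per the report
typical_llm_tokens = 7.5e12   # assumed baseline for a comparable LLM (illustrative only)

fraction = spikingbrain_tokens / typical_llm_tokens
print(f"{fraction:.1%}")      # -> 2.0%
```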
Among my frustrations with current AI is the heavy reliance on subscription services, which themselves require catastrophically expensive hardware. If SpikingBrain's results hold up when others try to reproduce them, that could ease that dependence.