Linus's stream

Knowledge tooling is mostly reified notation, and everything else in service of that notation.

Spent a good half hour reading, rereading, and thinking about The Immortal by Jorge Luis Borges. As with other works of Borges I've read, I'm taken by his eloquence and depth, and by the layers of connections in the text. Absolutely one I'll be coming back to.

So much of the current blockchain scene seems to harbor both the promises and the risks of the early Web. Decentralization, accessibility, transparency... The Web, like crypto, promised all this and more. And it delivered! But then the world caught up, and as much as the world has been transformed by the Web, the fundamental power structures of society are only marginally improved today from where they were pre-Web. I suspect in the long term, say in 25 years, crypto will be in a similar place -- technologically capable of these high ideals, but practically limited by social structures and large players.

... which is not a bad thing! It's only natural that long-term progress in society occur in shorter cycles of optimism and realism, unbundling and bundling. But I think it's a good perspective to take as an observer of the next tech boom.

Today I learned that many (most?) ways of notating dates, especially those that inherit conventions from ISO 8601, use what's called a proleptic Gregorian calendar, which simply ignores the fact that different calendars were adopted at different times and projects current Gregorian dates infinitely into the past. It's what Oak's datetime library does, for example, but it also seems to be common in RDBMSs and other languages.

Once in a while, I come across a bit of short, readable, well-documented and well-explained source code that's delightful to read and teaches me something new. Today, I found one such example in Go's sync package, in sync.Once. Here's a snippet, though you should read the whole (very short) file:

func (o *Once) Do(f func()) {
	// Note: Here is an incorrect implementation of Do:
	//
	//	if atomic.CompareAndSwapUint32(&o.done, 0, 1) {
	//		f()
	//	}
	//
	// Do guarantees that when it returns, f has finished.
	// This implementation would not implement that guarantee:
	// given two simultaneous calls, the winner of the cas would
	// call f, and the second would return immediately, without
	// waiting for the first's call to f to complete.
	// This is why the slow path falls back to a mutex, and why
	// the atomic.StoreUint32 must be delayed until after f returns.

	if atomic.LoadUint32(&o.done) == 0 {
		// Outlined slow-path to allow inlining of the fast-path.
		o.doSlow(f)
	}
}

Quick lessons about the Go compiler's inlining rules, how to use atomic intrinsics, and all in clean, well-explained code. Lovely.

"Organizing your world's information" > "Organizing the world's information"

Whatever disrupts Google is going to offer not only a smarter way to find information, but a way to find your information, more contextually relevant, from your life, rather than from just the public sphere. Every search should be a personal search.

The two models of web browsers

It seems that there are two possible mental models for thinking about what a web browser is.

The first is where browsers are static places, and web pages come to them. It's a communication model of browsers. The second is where content and web pages/applications are static places in the metaverse of the Web, and browsers transport their users to these web page "places". This is a collaboration/co-habiting model of browsers.

I think the second is strictly superior. It allows browsers to be richer mediums that leverage humans' innate sense of place and space to help us navigate and collaborate more effectively, rather than forcing us to think of browsers as simple communication tools.

Over the weekend, I once again found myself going down a rabbit hole of dynamic language compiler design. I was reminded of two approaches that grabbed my interest from a pedagogical perspective, because they simplify and make approachable the typically revered task of compiler development for a high-level language.

First, A Nanopass Framework for Commercial Compiler Development by Keep and Dybvig presents a way to incrementally develop a compiler by dividing the task of compilation into very many small "nanopasses", each of which lowers one high-level abstraction into a lower one.

Second, An Incremental Approach to Compiler Construction by Ghuloum slices up the task of compiler development in the other direction, by starting with a minimal compiler that lowers numeric constants to assembly, then in a couple dozen incremental steps adds small abstractions like functions and stacks, closures, and GC on top.

Both of these works have implementations in Scheme, the first in nanopass-framework-scheme and the second in namin/inc. I'm hoping to write a compiler for Klisp, which is a Scheme in many ways, by following the latter paper soon. Should be fun.

Finally got around to reading Chris Voss's Never Split the Difference to the end this weekend, after having started it early this year and then forgotten about it. It's a very tactical, pedagogical book, so it'll appeal to those who resonate with that style of learning. It reminds me a lot of Never Eat Alone, in all the good ways. It teaches you to think about the aspects of human relationships and communication that are normally in the shadows, and chief among them is the main conceit of the book: effective communication is mostly thoughtful, curious listening.

I've been thinking more recently about the see-through-ness of tools. Tools are see-through when the abstractions they provide don't unnecessarily cover up what's going on under the hood, i.e. when it's not too "magical" to be understood.

Often, tools and abstractions (software libraries, algorithms) seem magical and easy to work with at first glance, but fall apart under their own complexity when you need to dig deeper into their inner workings to fix a bug or customize them to your specific usage. I think there are ways to design tools so that they're less susceptible to this failure mode, where it's possible to peer beneath the abstraction layers the tool provides without getting into a tangled mess you don't understand. I want to build more tools that empower users to peer inside, understand, and modify, and fewer tools that operate only within the safety of their own walled gardens.