Pseudocode as a tool of thought.
Its distinctive properties include:
- it's a programming notation explicitly designed for thinking and communicating
- a good balance between expressivity and unambiguity
- at every abstraction in a program, the writer can choose either natural language or programming notation, balancing expressivity against precision at a granular level
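As an illustration (a hypothetical routine, not from any particular codebase), a pseudocode sketch can keep the membership logic precise while leaving less important details in natural language:

```
function dedupe(items):
    seen ← empty set
    for each item in items:
        if item ∉ seen:       # precise notation where correctness matters
            add item to seen
            emit item         # "emit" left deliberately vague: print, yield, append...
```

Each line independently picks its spot on the expressivity–precision spectrum, which is exactly the granular balancing act described above.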
Posit — Self-driving as a skill requires natural language understanding as a sub-skill.
Natural language understanding is not just a skill in itself, but might also be a notation that adds to an intelligence's ability to abstract and generalize broadly about the world. This ability to abstract and generalize is key to many kinds of performed intelligence, and self-driving might be a complex enough activity such that it requires a level of generalization power that is a superset of natural-language understanding and reasoning.
A related question is, could a very, very smart but non-linguistic animal drive? I'm not so sure.
There's a subtle but firm distinction between augmenting human productivity and augmenting human intelligence. The first is an economic, capitalistic endeavor first (though productivity contributes indirectly to collective intelligence); the second is a more pure pursuit, I think.
Noteworthy features of Starlark
Starlark is Bazel's configuration language, and not designed to be general-purpose. Nonetheless, some of its features seem useful even for a dynamic general-purpose language.
- Single final assignment at the top level. Module-level functions and variables cannot be re-bound. This makes code easier to read and simplifies tooling.
- Deterministic iteration order for dictionaries, and in general determinism (a program run twice always produces the same outcome, modulo things like time). Determinism seems like a generally desirable property, for things like testing/reproducible builds.
- No mutation during iteration. Mutating a collection (like a list) while it's being iterated over panics the program, which prevents iterator-invalidation bugs.
- No [checked] exceptions. Panicking the program on any unanticipated error might seem problematic, but it "makes the language simpler and reduces the number of concepts." Exceptions also become API surface for the language, so omitting them helps the language evolve.
- Strings are not iterable. This avoids bugs from passing a string, rather than a length-1 list of strings, to APIs expecting a list.
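The last two rules guard against bug classes that are easy to reproduce in Python (whose syntax Starlark shares). A minimal sketch of both pitfalls, which Starlark's semantics rule out:

```python
# 1. Mutating a list while iterating over it: Python silently skips
#    elements as indices shift; Starlark panics instead.
xs = [2, 2, 3]
for x in xs:
    if x % 2 == 0:
        xs.remove(x)
# The first removal shifts the remaining elements left, so the loop's
# index skips over the second 2 and one even element survives:
print(xs)  # → [2, 3], not the intended [3]

# 2. Strings are iterable in Python, so passing "bob" where a list of
#    names was expected quietly iterates per-character; Starlark rejects it.
#    (shout is a hypothetical API standing in for "expects a list".)
def shout(names):
    return [n.upper() for n in names]

print(shout(["bob"]))  # → ['BOB']
print(shout("bob"))    # → ['B', 'O', 'B'] — per-character, almost certainly a bug
```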
Starlark, a configuration language for the Bazel build tool, has a tree-walking interpreter implemented in pure Go at google/starlark-go.
It seems like the canonical implementation of Starlark is the Java implementation in the Bazel source tree, but the Go version is used "in production" in web playgrounds, debuggers, and so on.
starlark-go is notable because it's one of a vanishingly small number of production language implementations that are tree-walking interpreters rather than bytecode VMs. The implementation guide says:
The evaluator uses a simple recursive tree walk, returning a value or an error for each expression. We have experimented with just-in-time compilation of syntax trees to bytecode, but two limitations in the current Go compiler prevent this strategy from outperforming the tree-walking evaluator.
The details of why exactly that's the case are interesting, and documented further at the link, but they seem inherent to Go's current compiler design and philosophy. It also validates Oak's current (tree-walking) evaluator design, which is nice.
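For concreteness, here's what "a simple recursive tree walk, returning a value for each expression" means in miniature. This is not starlark-go's code, just a toy sketch of the strategy, in Python, for a tiny arithmetic-expression language:

```python
from dataclasses import dataclass

@dataclass
class Num:
    value: float

@dataclass
class BinOp:
    op: str
    left: object   # Num or BinOp
    right: object  # Num or BinOp

def eval_node(node):
    # Each call evaluates one syntax-tree node and returns its value;
    # compound nodes recurse into their children first.
    if isinstance(node, Num):
        return node.value
    if isinstance(node, BinOp):
        left = eval_node(node.left)
        right = eval_node(node.right)
        if node.op == "+":
            return left + right
        if node.op == "*":
            return left * right
    raise ValueError(f"unknown node: {node!r}")

# (1 + 2) * 4
tree = BinOp("*", BinOp("+", Num(1), Num(2)), Num(4))
print(eval_node(tree))  # → 12
```

The appeal is exactly what the quote suggests: the evaluator's structure mirrors the syntax tree one-to-one, with no compilation step or bytecode dispatch loop in between.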
Typing on a typewriter for a while and then going back to a shallow laptop keyboard is a trippy experience.
How happy is the blameless vestal’s lot!
The world forgetting, by the world forgot.
Eternal sunshine of the spotless mind!
Each pray’r accepted, and each wish resign’d.
— Alexander Pope
Now that I have much more flexibility over my work/life cadence working solo (and with a healthier sleep schedule), it seems like I'm settling into a pretty consistent schedule of:
- Early morning / before meetings on deep work, building, writing, thinking
- Meetings in the early afternoon
- After lunch / evening on reading, research, gathering raw materials and letting them simmer
and it feels good. It lets me "end the day" whenever I feel tired or ready to stop, without having to rush to complete something, and my mind seems clearest and most ready to work right after I wake up. I'm trying to stick with it.
Future (desktop) operating systems — a collection of inspirations
- Artifacts, which is interesting for its document-focused design, the way it implements transclusion ("Links with context"), and a horizontal arrangement of documents as a way to organize workflows.
- Mercury OS, which has a striking visual design and showcases what a workflow and information-focused design for productivity could look like.
- Desktop Neo, which has a nice implementation of horizontally scrolling panes that connect apps together into workflows.
- Alexander Obenauer's Itemized OS, which reimagines personal computers to be more focused on information and open metadata, so that apps can extend each other around the information that matters rather than organizing into walled gardens.
What would this look like in 500 years?
Whenever I'm thinking about the long future of some technology or problem, like the future of AI and the nature of computation, there's a thought-experiment format I like to use to think more openly. It goes like this:
What would X look like 500/1000 years into the future of humanity?
In the case of that blog, the question was, "What would computing hardware for AI look like in year 2500?"
I think this kind of thinking helps avoid the status-quo bias that's so easy to fall into when we try to imagine the future projecting out from current technology.
- If we project out from the current computing landscape, it's easy to think that binary logic gates on silicon wafers are the right computing substrate for nearly everything we'd want to do. But would humanity still be using silicon wafers to run intelligent, conversational, omnipresent computers five centuries from now?
- It's easy to get sucked into the allure of gene editing over the next few decades and the coming century, but what capabilities over biological processes will humanity have in a thousand years? Surely, we won't just be adding and splicing DNA. Surely, the long future of dominion over the biological source code is something much more expansive, perhaps the equivalent of "software engineering" for biological processes, perhaps a world where there are more artificial life forms in the universe than organically evolved ones.