Linus's stream

A fun couple-day hack to take a break from working on a text editor — burds.vercel.app

West Coast Earthquake Twitter > East Coast Snowstorm Twitter.

It's a pseudoscientific myth that humans only use 3% of the brain, or whatever, but it's probably true that at any given moment we only think with about 3% of the things we know. Getting that up to somewhere near 90% — or even 50% — will probably have similarly powerful effects.

Earlier today, I spent quite some time building a good implementation of rich text paste in Ligature3/Notation, my current place for notes and written thoughts. Before, I could only realistically paste in a couple of paragraphs at a time. Now, I can select a whole section of a document or an entire blog post, if I wish, and paste it into a new note in my app, and each paragraph will become its own little block, effortlessly, with sub-sections and lists split out into their own sub-lists properly.
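
At its core, the splitting logic amounts to something like the following simplified TypeScript sketch (not the actual Notation code; the Block shape and insertBlocks are stand-ins for illustration): read the text/html flavor off the clipboard, walk the top-level elements, and turn each paragraph into its own block, with list items split out into child blocks.

interface Block {
    text: string;
    children: Block[];
}

// Parse pasted HTML: each top-level element becomes a block, and list items
// become child blocks attached to the preceding block.
function blocksFromHtml(html: string): Block[] {
    const doc = new DOMParser().parseFromString(html, 'text/html');
    const blocks: Block[] = [];
    for (const el of Array.from(doc.body.children)) {
        if (el.tagName === 'UL' || el.tagName === 'OL') {
            let parent = blocks[blocks.length - 1];
            if (!parent) {
                parent = { text: '', children: [] };
                blocks.push(parent);
            }
            for (const li of Array.from(el.children)) {
                const text = li.textContent?.trim() ?? '';
                if (text) parent.children.push({ text, children: [] });
            }
        } else {
            const text = el.textContent?.trim() ?? '';
            if (text) blocks.push({ text, children: [] });
        }
    }
    return blocks;
}

// Stand-in for however the editor actually inserts blocks at the cursor.
function insertBlocks(blocks: Block[]) {
    console.log(blocks);
}

document.addEventListener('paste', (e: ClipboardEvent) => {
    const html = e.clipboardData?.getData('text/html');
    if (!html) return; // no rich text on the clipboard; let the default paste happen
    e.preventDefault();
    insertBlocks(blocksFromHtml(html));
});

Keying off the text/html clipboard flavor is what lets a whole blog post come in with its paragraph and list structure intact, while plain-text pastes fall through to the default behavior untouched.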

With this, I'm finding myself more compelled to "dump" information into my notes, with mass copy-paste acting as a sort of surrogate "import" feature. It's slightly changed my relationship with this particular app, from a pristinely manicured garden to a mixture of handwritten and copied notes.

In an ideal world, discovering new thoughts and ideas from your own notes would be as addictive/engaging as discovering new videos on YouTube or TikTok, or new people on Twitter.

One way to measure the progress of web search technology is to look at the set of knowledge the average person doesn't bother learning about until they need it. The better the commodity search engine, the less effort people will expend to "pre-learn" things before they really need to know them, because they can depend on the knowledge always being quickly accessible.

These days nobody bothers memorizing the population of a country or when the seasons start, or friends' addresses and phone numbers. But I still find myself wanting to learn more abstract, long-form topics because they can't simply be looked up "just in time" ... yet.

My Mac keeps having increasingly frequent issues where the corespotlightd process starts consuming all the memory on the system, logging me out of iCloud and freezing up the entire machine to the point where I can't even reboot.

Since I don't want to debug a macOS built-in process, and I can't turn it off in settings, it seems like the only reasonable solution is to continually monitor the resident memory usage of the process and kill it if it starts consuming too much, so I wrote up a little bash script to do just that.

  1. We need to ensure there's only one copy of this script running on the system at any given time, so I use a lockfile in /tmp.
  2. We filter ps aux to get the PID of corespotlightd and, if it's running, get its resident set size (its real memory usage, more or less).
  3. If it's using more memory than $MAXRSS (1GB for now), kill -9 it, and repeat every 30 seconds.
#!/bin/bash

LOCKFILE=/tmp/limit_corespotlightd.lock
if [ -f "$LOCKFILE" ]; then
    exit
else
    touch "$LOCKFILE"
fi

MAXRSS=1000000 # 1GB

while true; do
    PID=$(ps aux | grep '/corespotlightd$' | awk '{ print $2 }')

    if [ -n "$PID" ]; then
        RSS=$(ps -p "$PID" -o rss | grep '[0-9]') # resident set size in KB

        if [ "$RSS" -gt "$MAXRSS" ]; then
            echo 'corespotlightd is using' "$RSS" 'kB; killing pid' "$PID"
            kill -9 "$PID"
        fi
    fi

    sleep 30
done

People who were bullish on flowcharts in the '60s and people who are bullish on "no-code" tools in the 2010s seem mistaken in the same kind of way.

Pseudocode as a tool of thought.

Its unique properties include

  • it's a programming notation explicitly designed for thinking and communicating
  • a good balance between expressivity and non-ambiguity
  • for every abstraction, either natural language or programming notation can be chosen, trading off expressivity against precision at a granular level

Posit — Self-driving as a skill requires natural language understanding as a sub-skill.

Natural language understanding is not just a skill in itself, but might also be a notation that adds to an intelligence's ability to abstract and generalize broadly about the world. This ability to abstract and generalize is key to many kinds of intelligent performance, and self-driving might be a complex enough activity that it requires a level of generalization power that subsumes natural-language understanding and reasoning.

A related question is, could a very, very smart but non-linguistic animal drive? I'm not so sure.