Linus's stream

Zuck, in an internal memo about Messenger as a platform in 2013:

We're focused on quality because we think the issue is that the cost of using our platform is too high, when the bigger issue is that the value is too low compared to emerging alternatives. We do have real quality work to do, but we must also acknowledge that no amount of quality fixes will increase the potential value of using our platform.

It seems like knowledge tools are stuck in a similar delusion: we think the basic ideas (bidirectional links, tags, and search in an outliner) are right, but that the cost of using them is too high. So we polish and polish and polish, but really the marginal value they offer compared to pen and paper (or Apple Notes) may be simply too low. They replace problems of recall and speed with problems of information overload and tag fatigue. I don't think they enable fundamentally new ways of working with information. They mostly just feel like ever more beautiful facades in front of a growing mountain of complexity.

Computers are more than fast typewriters, because computers can understand language as more than a sequence of bytes. Computers should enable people to think completely new thoughts and solve previously intractable problems. Simply being more efficient is not enough — we need software that enables creation not possible without it.

A good text-to-image prompt:

fine-detail vibrant New Yorker style illustration of an inventor in his cyberpunk lab surrounded by his tools, ideas, notes, and robots

Rapid prototyping should feel like a fever dream.

Conversational interfaces are AI-human skeuomorphism.

Evocative food for thought:

We can now hold conversations and write stories alongside AI, but for creative knowledge work there's nothing quite like standing around a huge sheet of paper or whiteboard and sketching/brainstorming with another human by producing a trail of diagrams, words, and illustrations all tangled up in arrows and circles. How can we get closer to whiteboarding with an AI?

Evocative phrases for my work:

  • Information wants to be connected
  • Documents that think for themselves
  • The world as one big document

Just ran some basic analytics on YC Vibe Check from my Nginx server logs using some oak pipe and UNIX pipeline magic. The TL;DR from the first week —

  • Around 650 unique users
  • 3-4 searches per person on average (2100 total searches)
    • Top 3 users did 120 searches
    • Top 25 users did 450 searches
  • Notably popular companies include Stripe, Replit, OpenSea, and Substack
  • The most popular topics people searched for, starting with the most popular:
    1. logistics
    2. crypto
    3. crm
    4. carbon capture
    5. browser
    6. supply chain
    7. manufacturing
    8. health
    9. note taking
    10. API
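A minimal sketch of the kind of UNIX pipeline involved, assuming a standard Nginx combined log format and searches logged as `GET /search?q=...` requests (the real log path, endpoint, and query format here are guesses; a small sample log stands in for the real one):

```shell
# Build a tiny sample log in the Nginx "combined" format (stand-in for real logs)
cat > /tmp/sample_access.log <<'EOF'
1.2.3.4 - - [01/Jan/2022:10:00:00 +0000] "GET /search?q=crypto HTTP/1.1" 200 512 "-" "Mozilla"
1.2.3.4 - - [01/Jan/2022:10:01:00 +0000] "GET /search?q=crm HTTP/1.1" 200 512 "-" "Mozilla"
5.6.7.8 - - [01/Jan/2022:10:02:00 +0000] "GET /search?q=crypto HTTP/1.1" 200 512 "-" "Mozilla"
EOF

# Unique users, approximated by unique client IPs (field 1 of each line)
awk '{print $1}' /tmp/sample_access.log | sort -u | wc -l

# Total searches: count requests that hit the (assumed) search endpoint
grep -c 'GET /search' /tmp/sample_access.log

# Most popular queries: extract q=..., tally, and rank by frequency
grep -o 'q=[^ "&]*' /tmp/sample_access.log | sort | uniq -c | sort -rn
```

Counting unique IPs undercounts users behind shared NATs and overcounts users on changing networks, so numbers like "650 unique users" are rough by construction.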

As AI systems grow more capable (and comparatively less understandable), ever more leverage will rest on the interfaces through which humans work with those capabilities. It seems like a question at least as important to study deeply as the AGI question.

A mathematician without the interface of modern notation is powerless. A human without the right interfaces to superhuman intelligence will be similarly powerless. Interfaces and notations form the vocabulary humans and machines use to stay mutually aligned. Interface design, then, is an AI alignment problem.

Chat isn't it. Prompt engineering isn't it. Algorithmic feeds are definitely not it. The answer will be more multimodal and lean on a balance of human abilities in language and other senses.

At the base level, the fundamental question in this space is: what is the right representation for thought? For experience? For questions? What are the right boundary objects through which both AI systems and humans will be able to speak of the same ideas? What are their rules of physics in software interfaces?

What happens if we drag-to-select a thought? Can we pinch-to-zoom on questions? Double-click on answers? Can I drag-and-drop an idea between me and you? In the physical world, humans annotate their language by an elaborate organic dance of gestures, tone, pace, and glances. How, then, do we shrug at a computer or get excited at a chatbot? How might computers give us knowing glances about ideas they've stumbled upon in our work?

Natural language is the Schelling Point of intelligence. To try to bypass language (e.g. Neuralink) may be a misguided mission, because it overestimates the extent to which language is a communication channel, and underestimates the extent to which it's a world model.

I need to be more ambitious. There is no point taking risks if the stakes are small.