Evocative food for thought:
We can now hold conversations and write stories alongside AI, but for creative knowledge work there's nothing quite like standing around a huge sheet of paper or whiteboard and sketching/brainstorming with another human by producing a trail of diagrams, words, and illustrations all tangled up in arrows and circles. How can we get closer to whiteboarding with an AI?
Evocative phrases for my work:
- Information wants to be connected
- Documents that think for themselves
- The world as one big document
Just ran some basic analytics on YC Vibe Check from my Nginx server logs using some Oak piping and UNIX pipeline magic (a sketch of the kind of pipeline is below the list). The TL;DR from the first week —
- Around 650 unique users
- 3-4 searches per person on average (2100 total searches)
- Top 3 users did 120 searches
- Top 25 users did 450 searches
- Popular companies searched for include Stripe, Replit, OpenSea, and Substack
- The most popular search topics, in descending order:
- logistics
- crypto
- crm
- carbon capture
- browser
- supply chain
- manufacturing
- health
- note taking
- API
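For the curious, here's a minimal sketch of the kind of pipeline that produces numbers like these, assuming a standard Nginx combined log format and a hypothetical /search?q= endpoint (the real analysis also leaned on Oak):

# Unique users, approximated by distinct client IPs (the first field of the combined log)
awk '{ print $1 }' access.log | sort -u | wc -l

# Top searchers: requests to the (hypothetical) search endpoint, grouped by IP
grep '/search?q=' access.log | awk '{ print $1 }' | sort | uniq -c | sort -rn | head -25

# Most popular queries, naively URL-decoded
grep -o '/search?q=[^ "]*' access.log | sed 's|/search?q=||; s/+/ /g; s/%20/ /g' | sort | uniq -c | sort -rn | head -10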
As AI systems get more capable (and comparatively less understandable), ever greater leverage will be placed on the interfaces through which humans work with those capabilities. It seems like a question at least as important to study deeply as the AGI question.
A mathematician without the interface of modern notation is powerless. A human without the right interfaces to superhuman intelligence will be no better. Interfaces and notations form the vocabulary humans and machines use to stay mutually aligned. Interface design, then, is an AI alignment problem.
Chat isn't it. Prompt engineering isn't it. Algorithmic feeds are definitely not it. The answer will be more multimodal and lean on a balance of human abilities in language and other senses.
At the base level, the fundamental question in this space is: what is the right representation for thought? For experience? For questions? What are the right boundary objects through which both AI systems and humans will be able to speak of the same ideas? What are their rules of physics in software interfaces?
What happens if we drag-to-select a thought? Can we pinch-to-zoom on questions? Double-click on answers? Can I drag-and-drop an idea between me and you? In the physical world, humans annotate their language by an elaborate organic dance of gestures, tone, pace, and glances. How, then, do we shrug at a computer or get excited at a chatbot? How might computers give us knowing glances about ideas they've stumbled upon in our work?
Natural language is the Schelling Point of intelligence. To try to bypass language (e.g. Neuralink) may be a misguided mission, because it overestimates the extent to which language is a communication channel, and underestimates the extent to which it's a world model.
Yet another Oak-based CLI lifts off! This time, it's something that's probably useful to more than just me. It's called Rush, and it helps run a single command on many files according to a template string.
For example, I can batch-rename many image files in a short and readable one-liner:
rush mv *.jpeg '{{name}}.jpg'
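Under the hood this is just template expansion plus one subprocess per matching file. A rough shell equivalent of that one-liner, assuming {{name}} stands for the filename minus its extension as in the example above (Rush's real template syntax may differ):

for f in *.jpeg; do
  mv "$f" "${f%.jpeg}.jpg"  # "${f%.jpeg}" plays the role of {{name}}
done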
The pitfall of conversational UIs
A lot of tasks involve keeping track of state throughout, and conversations are terrible interfaces for keeping track of state.
Tasks that involve keeping track of state:
- Travel planning (what you've seen, which places/bookings you made)
- Project management (what have I done? what's on my plate?)
- Researching a topic (why do people keep all those tabs open if not to keep state?)
- Decision making (what choices do I have? which is better, and why?)
- Following instructions (what have I done? did I miss a step? how much is left?)
- Editing (podcasts, videos, papers)
- Understanding a complex system, like reading a map
Tasks that don't involve state, and are a good fit for conversational UIs (CUIs):
- Querying specific trivia (weather, calendar events, adding todos)
- Fire-and-forget tasks (Send X a message, play music)
- AI as conversational partner (e.g. brainstorming, but then you'd need to "keep state" in another place like meeting notes)
If the user has to keep track of state in a conversation, they either have to hold it in their working memory (hard for no reason) or keep asking the interlocutor ("what was step one again?").
So what's the solution? I think interactive, itemized workspaces. Instead of saying "book me a flight" and then going through a dozen back-and-forths to decide on all the details, present all the choices to the user and let them drag and drop the right flight onto their calendar. Much faster, more intuitive, and there's a clear, obvious visual analogue of what's happening.
Another solution may be documents you can talk to. Instead of holding a conversation with a bot, you and the bot collaborate on a document, building up a record of the salient points and ideas. Think GitHub Copilot for everything.
Just referenced a conversation I had with an AI in a real conversation with a human for the first time... feels like an uncanny inflection point?