Linus's stream

The pitfall of conversational UIs

A lot of tasks involve keeping track of state throughout, and conversations are terrible interfaces for keeping track of state.

Tasks that involve keeping track of state:

  • Travel planning (what you've seen, what you've booked)
  • Project management (what have I done? what's on my plate?)
  • Researching a topic (why do people keep all those tabs open if not to keep state?)
  • Decision making (what choices do I have? how do they compare?)
  • Following instructions (what have I done? did I miss a step? how much is left?)
  • Editing [podcasts, videos, papers]
  • Understanding a complex system, like reading a map

Tasks that don't involve state, and are good for CUIs:

  • Querying specific trivia (weather, calendar events, adding todos)
  • Fire-and-forget tasks (Send X a message, play music)
  • AI as conversational partner (e.g. brainstorming, but then you'd need to "keep state" in another place like meeting notes)

If the user has to keep track of state in a conversation, they have to either hold it in their working memory (hard for no reason) or keep asking the interlocutor ("what was step one again?").

So what's the solution? I think interactive/itemized workspaces. Instead of saying "book me a flight" and then going through a dozen back-and-forths to settle all the details, just present all the choices to the user and let them drag and drop the right flight onto their calendar. Much faster, more intuitive, and there's a clear, obvious visual analogue of what's happening.

Another solution may be documents you can talk to. Instead of holding a conversation with a bot, you and the bot collaborate together to write a document and build up a record of the salient points and ideas. Think GitHub Copilot for everything.

Just referenced a conversation I had with an AI in a real conversation with a human for the first time... feels like an uncanny inflection point?

It's very satisfying and remarkable that Monocle has almost completely eliminated the "I saw this somewhere, but I can't remember where, and Google can't find it" problem from my life. Almost completely. Online eidetic memory for the web.

I've been working on a conversational bot that I can brainstorm with and ask for advice. I've been tinkering with it for a few days, but today, I finally had a conversation that felt helpful enough to add real value to my working process. Here's a raw transcript of the conversation (minus the prompt), edited only to fix some of my typos. The conversation was generated using GPT-J-6B running under my Calamity deployment.

Here's an excerpt:

Me: What if we present browser history as a map to explore instead? how do we lay out a map?

AI: One option would be to put the first node of the map near the top of the map, and have a node for each event. And then the second map layer might be a map to explore the details for a particular event in the first map. And so on.

Me: How do you think this compares to a geographic map? Can we draw some analogies to this "online history map"?

AI: There's a good analogy here that is actually very similar to what a geographic map is. A geographic map is basically a map that is made by plotting points of latitude and longitude on a flat plane, so they have a two dimensional layout. You can then put different data on top of that map, like cities, roads, rivers, mountains, etc.

Me: How does that analogy extend to showing more abstract browsing histories on a map?

AI: I think you'd have to start with the idea of mapping history to points of latitude and longitude. Then the second layer of information could be a set of different things that go along with that point. You could map history to a collection of objects, such as locations, objects, and/or actions.

Me: I like that. It's like there's a pre-existing "map of the internet" with different topics in different neighborhoods, and browsing the web is like taking a stroll through those neighborhoods. Your paths then show up on the map as paths you took.

AI: That's a good analogy.

It's still... not perfect (I ask it how to be productive, and it tells me I can sit on the bed and watch TV). But baby steps!
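If you want to build something similar, the generation loop itself is nothing exotic. Here's a rough sketch using the public Hugging Face transformers API (to be clear, this is not my Calamity setup, and the prompt format and sampling parameters are illustrative placeholders, not the ones I actually use):

# Sketch of a chat-style loop around GPT-J-6B with Hugging Face transformers.
# The prompt format and sampling settings below are illustrative guesses,
# not the configuration from my Calamity deployment.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('EleutherAI/gpt-j-6B')
model = AutoModelForCausalLM.from_pretrained('EleutherAI/gpt-j-6B')

# The conversation so far, rendered as alternating "Me:"/"AI:" turns
history = 'Me: What if we present browser history as a map to explore instead?\nAI:'

inputs = tokenizer(history, return_tensors='pt')
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,                        # sample rather than greedy-decode
    temperature=0.9,                       # illustrative; tune to taste
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)

# Keep only the newly generated tokens, and cut at the next "Me:" turn
completion = tokenizer.decode(outputs[0][inputs['input_ids'].shape[1]:])
print(completion.split('Me:')[0].strip())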

I figured out a much more succinct way to communicate what I was trying to say in my last stream update about nonlinear reading:

Linear reading is a depth-first search through the knowledge in a text. Nonlinear reading allows a breadth-first search through the same text, treating it as a densely connected graph rather than a sequence.
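To make the search analogy concrete: model a text as a graph whose nodes are sentences, with "next sentence" edges for the linear order plus cross-links between related ideas. Linear reading is a depth-first walk over that graph; nonlinear reading is a breadth-first one. A toy sketch in Python (the graph here is hand-built; in a real prototype the cross-links would come from semantic similarity):

from collections import deque

# Toy model of a text: each sentence points to the next one, plus a
# hand-picked cross-link ('intro' previews 'conclusion'). In a real
# prototype, cross-links would come from embedding similarity.
graph = {
    'intro': ['claim', 'conclusion'],
    'claim': ['evidence'],
    'evidence': ['detail'],
    'detail': ['conclusion'],
    'conclusion': [],
}

def dfs(start):
    # Depth-first: follow one thread to its end before backtracking
    seen, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        stack.extend(reversed(graph[node]))
    return order

def bfs(start):
    # Breadth-first: survey everything one hop away before going deeper
    seen, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

print(dfs('intro'))  # ['intro', 'claim', 'evidence', 'detail', 'conclusion']
print(bfs('intro'))  # ['intro', 'claim', 'conclusion', 'evidence', 'detail']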

Nonlinear reading

Free the written word from the tyranny of linearity!

One interesting consequence of nonlinear reading (using the heatmap prototype I'm building) is that the reward curve for reading becomes much smoother.

With traditional prose writing, you may need to read a substantial portion of a text to get "into the meat" and begin to reap any value from it, but with nonlinear reading, you start learning and picking up new information immediately, and the closer you read, the more you pick up, in an almost linear correspondence.

This makes me want to give almost everything I come across at least some level of scrutiny, and makes me at least skim-read texts that I otherwise wouldn't even have clicked on, because even 3-5 seconds of reading can teach me something new, or tell me whether a closer read is worth my time.

Writer's context, reader's context

I've been toying with two things for the last week: (1) a thing that uses semantic distance/similarity to build a short outline from prose, and (2) a thing that just regurgitates the original text, but with a "heatmap" highlighting key/summary sentences. I've been using them to read almost everything I can. My takeaways so far (a sketch of the underlying machinery follows the list):

  • The outline is rarely what I want by itself, because I often want to ask "where did this sentence come from?" and see its surrounding in-document context. Taking sentences out of context makes them hard to read, except for a certain class of writing, like news reports, where each sentence is a fairly standalone statement of fact.
  • The heatmap is distinctly easier to read, because my eyes can scan for highlighted sentences and adjust in real time how much detail I want to take in about a section, using the highlights as cues. If I read a highlighted sentence and feel like I'm missing context, it's also trivial to rewind visually to the start of the paragraph.
  • I also made a thing where I can click on a sentence in the heatmap and see the sentences semantically closest to it, which usually surfaces "supporting reasoning" or "what else the author said about this". That's surprisingly useful, but I have to scroll around to see all of them, which is annoying. I'll keep iterating on the design of this bit.
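For the curious, the machinery underneath both tools is just sentence embeddings plus cosine similarity. A minimal sketch with the open-source sentence-transformers library (the model name and the centrality-style scoring here are illustrative assumptions, not my exact setup):

# Sketch of the heatmap idea: embed every sentence, then score each one by
# its mean cosine similarity to the rest, so sentences "central" to the text
# get highlighted. Model choice and scoring are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

sentences = [
    'Conversational UIs struggle with stateful tasks.',
    'Travel planning requires tracking what you have seen and booked.',
    'A workspace that shows every option at once keeps state visible.',
    'The weather today is sunny.',  # off-topic, so it should score low
]

model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(sentences, convert_to_tensor=True)
sims = util.cos_sim(embeddings, embeddings)  # pairwise similarity matrix

for i, sentence in enumerate(sentences):
    # Mean similarity to every *other* sentence = a crude centrality score
    score = float((sims[i].sum() - sims[i][i]) / (len(sentences) - 1))
    print(f'{score:.2f}  {sentence}')

# The click-to-see-related view is the same matrix read a different way:
# for sentence i, sort the other sentences by sims[i] in descending order.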

My high-level conclusion so far is that there are two kinds of "context" when you're exploring any text or collection of texts, and both matter equally. There's the original/situated context, which comes from the source document, and the semantic/reader's context, which is everything that sentence or note relates to in the reader's personal universe of ideas.

Whether I'm reading or taking notes, I want to quickly get the gist of key ideas and see them placed in both kinds of context. So I think the interface challenge boils down to: can we have a reading/writing interface or notation that makes it easy to see excerpts from a large collection of texts/ideas in both the original/writer's context and the reader's/semantic context?

Science fiction/short story ideas

  • A budding human settlement on Mars, quickly maturing through shipments of industrial/manufacturing/agricultural equipment from Earth, gets cut off from the last few shipments critical to a self-sustained future by a world war on the home world. The Martian settlement must now secure a sustainable future by either inventing and building the lost equipment themselves (semiconductor fabricators, radio communications equipment, Earth-based fertilizers, rocket fuel), or navigating the complex task of acquiring what they need from a world mired in conflict and fog of war.
  • Humans experience reality and consciousness in a continuous way (at least, it seems that way to me). If the current trajectory of development for powerful AI continues, future AI agents and artificially conscious beings may instead experience reality as a discrete sequence of computations/inferences. How might a story narratively explore the difference between these two kinds of conscious entities and how they experience time and life?

Very excited for a time when products built on large language/image models become something more than thin boring UI wrappers around raw inference APIs. Why are you showing end users dials for top-k and temperature? They don't care!

I found a startlingly simple little algorithm to approximate π today, on a web demo for Tabby. It was so interesting that I tried it myself. Here's the script:

n := 1000000

// sum 1/k² over the first n odd numbers: 1/1 + 1/9 + 1/25 + ...
k := 1
x := 0
with std.loop(n) fn {
	x <- x + 1 / (k * k)
	k <- k + 2
}

// the running sum converges to π²/8, so √(8x) approaches π
fmt.printf('Almost π: {{0}}\t(at n = {{1}})', math.sqrt(x * 8), n)

Looks deceptively simple, right? Just a few additions and multiplications. For various values of n, here's the output:

Almost π: 2.8284271247461903  (at n = 1)
Almost π: 3.1096254579886478  (at n = 10)
Almost π: 3.1384079670670912  (at n = 100)
Almost π: 3.14127432760274    (at n = 1000)
Almost π: 3.1415608224399487  (at n = 10000)
Almost π: 3.141589470489344   (at n = 100000)
Almost π: 3.1415923352799697  (at n = 1000000)
Almost π: 3.1415926217577352  (at n = 10000000)

It turns out the formula is the odd-term subseries of Euler's Basel-problem sum: adding up 1/k² over odd k converges to π²/8, so √(8x) approaches π. It's written iteratively/imperatively rather than as a sum per se, and it comes out really clean. I like it.
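If you want to sanity-check the convergence yourself, here's the same loop as a quick Python sketch (standard library only):

# Same series in Python: sum 1/k² over the first n odd k.
# The sum converges to π²/8, so sqrt(8x) approaches π.
import math

def almost_pi(n):
    x = sum(1 / (k * k) for k in range(1, 2 * n, 2))  # k = 1, 3, ..., 2n-1
    return math.sqrt(8 * x)

for n in [1, 10, 100, 1000]:
    print(f'Almost π: {almost_pi(n)}\t(at n = {n})')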