Updates on 2022/11/16

People need to be more thoughtful about building products on top of LLMs. The fact that they generate text is not the point. LLMs are cheap, infinitely scalable, predictably consistent black-box interfaces to soft, human-like reasoning. That's the headline! The text I/O mode is just the API to this reasoning genie, a side effect of the training paradigm (self-supervision on Internet-scale text data).

A vanishingly small slice of knowledge work has the shape of text-in, text-out (copywriting, Jasper). The real alpha is not in generating text but in harnessing this new capability and wrapping it into jobs that have other shapes. Text generation in the best LLM products will be an implementation detail, much as backend APIs are for current SaaS.
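As a minimal sketch of what "text generation as an implementation detail" might look like: here the LLM does soft reasoning over free text, but the product surface is a structured record, never the generated text itself. The names `llm_complete` and `extract_meeting` are hypothetical, and the stub stands in for a real LLM API call.

```python
import json

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    # Returns canned JSON here so the sketch is self-contained.
    return '{"title": "Quarterly review", "date": "2022-11-18", "duration_min": 30}'

def extract_meeting(email_body: str) -> dict:
    # The LLM reads messy human text; the caller only ever sees a
    # validated structured record. Text generation never reaches the user.
    prompt = (
        "Extract the meeting from this email as JSON with keys "
        "title, date, duration_min.\n"
        f"Email:\n{email_body}"
    )
    raw = llm_complete(prompt)
    event = json.loads(raw)
    # Validate the shape before it touches the rest of the system.
    if not {"title", "date", "duration_min"} <= event.keys():
        raise ValueError("LLM output missing required fields")
    return event
```

The job here has the shape "email in, calendar event out"; the text-generation step is buried behind a typed function, exactly as a backend API would be.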

Text, like code, is a liability, not an asset. An organization should strive to own as little text as necessary to express its information and accomplish its tasks. If you don't heed this warning, you end up with a Notion that has 10 copies of every piece of information, 4 of which contradict each other and only 2 of which reliably surface in search. Spraying the GPT-3 next-token-prediction powder willy-nilly on your tool or product is a recipe for disaster outside of narrow workflows where text is the asset being produced. In all other cases, don't ship the API to the user. Text generation is not the product.

Notion's "AI" product is an affront to Doug Engelbart's name. Is there nobody left at the company who's thinking creatively about AI x knowledge work?