Linus's stream

An insight about recommender systems from someone whose name I regretfully can't recall at the moment:

Many recommender systems/algorithms model interest as a precise, static region in representation space, such that the algorithm becomes about zooming in forever and ever to higher-resolution patches of this space to find exactly what the user wants. In reality, interests shift, and the recommender algorithm may influence the user's shifting interests in addition to being informed by them. So it makes more sense to model a recommendation algorithm as something that traverses a linked list or a branching tree evolving over time, rather than as a "zooming in forever" onto some perfectly interest-aligned patch of the topic space.
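
A minimal sketch of the contrast, purely illustrative (the embeddings, drift rate, and function names are all made up, not any real system's API): a static model keeps re-ranking items around one fixed interest point, while an evolving model nudges its estimate with every interaction and so traces a path through the space over a session.

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(1000, 16))  # hypothetical item embeddings

def recommend_static(center, k=5):
    """'Zooming in forever': interest is one fixed point, and every
    recommendation just re-ranks items by distance to that same spot."""
    dists = np.linalg.norm(items - center, axis=1)
    return np.argsort(dists)[:k]

def recommend_drifting(center, clicked_item, drift=0.2, k=5):
    """Interest as a trajectory: each interaction nudges the estimate, so
    over a session the recommender walks a path through the space (a linked
    list of states) instead of shrinking around one patch."""
    center = (1 - drift) * center + drift * clicked_item
    dists = np.linalg.norm(items - center, axis=1)
    return center, np.argsort(dists)[:k]

# Simulated session: the user's taste wanders, and the second model follows it.
center = rng.normal(size=16)
for _ in range(10):
    clicked = items[rng.integers(len(items))]
    center, recs = recommend_drifting(center, clicked)
```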

Otherworlds

I noted once that the most interesting potential for virtual/mixed reality wasn't to put yourself in a virtual office or on the ocean floor; it was that you could experience entirely different worlds with different physics, where time flows differently, where acoustics mutate as sound waves fly through the air. In VR, you could move through scales of experience, from nanometers to miles, as easily as you move a few feet through space in the real world.

I feel a similar sense of loss, of underexploration, about large generative models for images and text. We can use these models to render anything at all, tell any story at all, invent any language, create any soundscape ... and we use these dream machines mostly to render simulacra of reality with the details swapped around.

Endowed with the magic to immerse ourselves in worlds of our own making and languages of our own creation, we are so eager to rebuild worlds that already constrain us, speaking languages just as familiar as our own. We are given the power to imagine anything, and we imagine the here and now. Why?

All around us, there are other worlds blooming, if only we look a bit closer.

There is a waterfall of mini blog posts coming soon. Please stand by.

The Browser Company's recruiting capabilities are getting too OP.

In the year 2023 may we all live truer to ourselves. HNY!

Ink & Switch and Anthropic produce far and away the best-written and best-produced research reports in computer science-related fields (as far as I can tell), and everyone else should strive to reach for the same level of presentation, accessibility, clarity, and depth.

Underrated fact about training in the very large regime: you don't have to worry about overfitting/early stopping, because single-epoch training is the default, and it turns out it's No Big Deal at all if you do a single-digit number of epochs on these huge AF overparameterized models!

Academic benchmark datasets on the order of tens of thousands of samples are annoying in this way.
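
A toy sketch of the difference (everything here, the ToyModel, the hyperparameters, the loop structure, is hypothetical): with a small dataset you loop over the same examples many times and lean on validation loss to stop before memorization; in the single-epoch regime every batch is data the model has never seen, so that whole apparatus mostly drops away.

```python
import numpy as np

class ToyModel:
    """Stand-in model: one linear layer trained by SGD on (x, y) batches."""
    def __init__(self, dim=8, lr=0.01):
        self.w = np.zeros(dim)
        self.lr = lr
    def step(self, batch):
        x, y = batch
        grad = x.T @ (x @ self.w - y) / len(y)
        self.w -= self.lr * grad
    def evaluate(self, val):
        x, y = val
        return float(np.mean((x @ self.w - y) ** 2))

def train_small(model, dataset, val_set, max_epochs=100, patience=3):
    # Small benchmark: many passes over the same list of batches, watching
    # validation loss and stopping before the model memorizes the data.
    best, stale = float("inf"), 0
    for _ in range(max_epochs):
        for batch in dataset:
            model.step(batch)
        val = model.evaluate(val_set)
        best, stale = (val, 0) if val < best else (best, stale + 1)
        if stale >= patience:
            break

def train_large(model, stream):
    # Web-scale regime: a single pass; every batch is fresh, unseen data,
    # so classic overfitting / early stopping barely comes up.
    for batch in stream:
        model.step(batch)
```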

Interface design for generative AI systems: It's like we finally performed alchemy, but haven't invented chemistry yet.

How OpenAI bets, per Reid Hoffman's interview with Sam Altman:

Invest in and capitalize on the next thing when it's right in front of you (and you can be confident in scaling laws), and with the rest of your efforts, execute novelty search to find the next seed of the big thing.

The best thing about this website is that nobody can dunk on these posts like on the bird app.