Reading through "A meta-layer for notes" by Julian Lehr, one idea that really stood out to me was this passage about sticky notes and notes in the context of their referent:
Apart from helping you find important passages in a book later on, sticky note bookmarks also allow you to add additional context to the section you highlighted (e.g. why you bookmarked a particular section or thoughts you had about it).
You could write down notes like this in a separate notebook, but then you’d lose the connection to the source they are based on. What makes post-it notes so interesting is the spatial relationship between the notes and their respective context.
It’s this spatial relationship that also makes post-it notes great reminders.
We really don't have anything equivalent to the sticky note's versatility: its ability to be placed in the context of the ideas or objects it annotates. It almost seems like the way software is built these days makes most of it impervious to this kind of rich annotation. I wonder how we can break this constraint.
"View source" for the full stack
While browsing Hacker News today (as one does), I came across this PHP snippet, highlighted in this comment.
<?php
// If the request includes a ?source query parameter, print this file's
// own source, syntax-highlighted, and stop running the rest of the script.
if (isset($_GET['source'])) {
    highlight_file(__FILE__);
    exit;
}
When placed in a PHP script, it lets you pass a ?source query parameter in the URL to see the complete, syntax-highlighted PHP program that's currently running. See https://lucb1e.com/randomprojects/php/funnip.php?source for an example.
This strikes me as the simplest implementation of a "View source" for the backend that I've seen, and I wonder if this kind of direct exposure of the full-stack source of a web app could be possible even for more complex applications. I remember Vercel (back when they were called Zeit) used to expose the full source of open-source JavaScript applications under a special URI path (I think /_src/, but I'm not certain).
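For fun, here's a rough sketch of the same trick in Python, assuming a tiny Flask app; the ?source convention is just carried over from the PHP version, and the app itself is a made-up toy.

from flask import Flask, request, Response

app = Flask(__name__)

@app.route("/")
def index():
    if "source" in request.args:
        # Serve this very file back to the client, like PHP's highlight_file
        # (minus the syntax highlighting).
        with open(__file__) as f:
            return Response(f.read(), mimetype="text/plain")
    return "Hello! Add ?source to the URL to read this app's source."

if __name__ == "__main__":
    app.run()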
Building as a reflection of beliefs
I started building side projects as a high school sophomore. Back then, I was mostly learning. Learning how to build, how to stay motivated, how to get other people to care about what I had to say. And then I started building for utility, building tools that I wanted to exist so I could use them in my own life, to improve my day-to-day.
This year, I found myself and my public work in a strange place: I'm still building for the same reasons, but also as a way of acting on and representing my beliefs about a different kind of relationship humans can have with software and computers than what most of us live with. Indeed, this is a lot of what we discussed on a recent podcast I did with the Muse team. As much as I'm building things like a personal search engine for my own use, I'm also building them and talking about them to let others know that they can build these things too, and that we can rethink the power dynamics and relationships we have with our software tools and ecosystems.
I'm not the only one doing this, of course -- other similar "building as a representation of belief" projects include Rasmus Andersson's Playbit and Hundred Rabbits. But nonetheless, I think it's an interesting place to be for a person who makes things: not only to make them for the sake of the end products, but as a form of speech about the very act of building.
TBH I made this website so I could shitpost away from the prying eyes of Twitter, and it's turned into something even more formal than Twitter. So this update is to give me some space to shitpost directly to the Stream. Nowhere is safe, my friends.
Podcast conversation about self-made tools, side projects, and the relationships we have with our digital tools, with the Muse team - https://museapp.com/podcast/42-self-made-tools/
While watching Richard Feldman's talk on the Roc programming language, I came across an interesting bit of syntax sugar for chaining effects that he calls "backpassing". Backpassing is sugar for passing a "callback"-style function to another function. In Oak syntax, it lets us rewrite a program like
with fs.readFile('/etc/hosts') fn(file) {
    std.println(file)
}
... into this
// Roc uses `<-`, but since `<-` means something
// in Oak, we use `:= await` here instead, where
// "await" is a new language keyword.
file := await fs.readFile('/etc/hosts')
std.println(file)
It's pure sugar -- it desugars at the parser stage, which is clever and different from other implementations of await that depend on some Promise or Future type in the core language. Its advantage is that it relieves the indentation hell of passing callbacks at a purely syntactic level. Its disadvantage is that there's no visible function literal in the program anymore, and I imagine the compile errors it generates won't be pretty. This is a bit of invisible syntax magic, which generally goes against my (and hence Oak's) philosophy that simplicity promotes understandability.
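To make "desugars at the parser stage" concrete, here's a toy sketch in Python -- not Oak's or Roc's actual parser, just hypothetical node types -- of the general rewrite rule: an await-style binding plus everything after it becomes a callback handed to the effectful function.

from dataclasses import dataclass

@dataclass
class Call:            # e.g. fs.readFile('/etc/hosts')
    fn: str
    args: list

@dataclass
class Await:           # file := await fs.readFile('/etc/hosts')
    name: str
    call: Call

@dataclass
class WithCallback:    # with fs.readFile('/etc/hosts') fn(file) { ... }
    call: Call
    param: str
    body: list

def desugar(stmts):
    # Rewrite `x := await f(...)` plus everything after it into
    # `with f(...) fn(x) { ...everything after it }`, recursively.
    for i, stmt in enumerate(stmts):
        if isinstance(stmt, Await):
            return stmts[:i] + [WithCallback(stmt.call, stmt.name, desugar(stmts[i + 1:]))]
    return list(stmts)

program = [
    Await("file", Call("fs.readFile", ["/etc/hosts"])),
    Call("std.println", ["file"]),
]
print(desugar(program))
# [WithCallback(call=Call(fn='fs.readFile', args=['/etc/hosts']), param='file',
#               body=[Call(fn='std.println', args=['file'])])]

Because the whole rewrite happens on syntax-tree nodes before anything runs, the evaluator never needs to know the sugar existed.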
Nonetheless, I'll be mulling it over and lightly considering it for Oak.
As an extension to my note on media-native programming languages, I wanted to note: Oak's standard library contains built-in support for parsing and rendering not only JSON but also Markdown, via the json and md libraries. Because Markdown support is built into the language tooling, when I write Oak programs I feel like Oak "speaks Markdown" natively as an information format. There's no additional development effort for Oak programs to really support Markdown -- supporting it is about as easy as supporting plain text, at least for rendering, processing, and parsing. As a result, most tools I'm building with Oak where Markdown support makes sense, like this stream site, support Markdown.
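For contrast, here's roughly what that workflow looks like from Python, where JSON lives in the standard library but Markdown needs a third-party package (the markdown module below comes from PyPI, and the example document is made up) -- that extra dependency is exactly the friction Oak removes by shipping json and md with the language.

import json
import markdown  # third-party: pip install markdown; Oak ships this in its stdlib

doc = {"title": "Weekly notes", "body": "# Hello\n\nSome *Markdown* content."}

# Once both formats are just an import away, round-tripping structured data
# and rendering prose feel equally effortless.
serialized = json.dumps(doc)
rendered = markdown.markdown(json.loads(serialized)["body"])
print(rendered)  # <h1>Hello</h1>\n<p>Some <em>Markdown</em> content.</p>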
I think this demonstrates one successful case study of a language that built in support for unconventional data formats, and benefited in the kinds of software and tools built with it.
QR codes for software distribution
I love how lightweight and error-resistant QR codes are. You can stick them anywhere and display them nearly everywhere. What if we could pack small, contextually relevant pieces of software -- to-do apps, chat clients, little maps, restaurant reservation systems -- into big QR codes as a way to distribute them? Instead of giving you a download link, I just give you a QR code to scan, and your phone loads the program directly from the code and starts executing it.
I want to imagine a future where software isn't necessarily "installed" and "uninstalled", but ephemeral like web apps and situationally delivered to our devices. Little micro pieces of functionality, quickly loaded and unloaded, scooped up by our devices from little tags we can touch in the physical world.
It would be so cool to be able to share a game I made with a friend, by giving them a QR code they can scan whenever they want to play it. No install, no download. Just load it up from the code!
Everything new is also old -- this reminds me of loading up software from floppy disks! But I think it can be different. QR codes can be so much more versatile and ubiquitous, so much more error-resistant, infinitely cheaper, and if delivered on screens, forever changeable. It's also decentralized, in a way. If I give you a QR code, you don't need to rely on some app store being online to use that new program.
One way I can imagine this working is some lightweight "base" app or virtual machine installed on your phone that can load and execute very compact bytecode from a QR code.
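Here's a minimal sketch of both halves in Python, assuming a hypothetical "base app" that blindly trusts whatever it scans: the qrcode package is real, while the payload format and the run_payload helper are invented for illustration. A version-40 QR code holds just under 3 KB of binary data, so a tiny script or some compact bytecode fits.

import base64
import zlib

import qrcode  # third-party: pip install "qrcode[pil]"

program_source = """
print("hello from a QR code!")
"""

# Publisher side: compress the program and render it as a QR code image.
payload = base64.b85encode(zlib.compress(program_source.encode("utf-8"))).decode("ascii")
qrcode.make(payload).save("program.png")

# "Base app" side: after scanning the code, reverse the encoding and run it.
# (Camera capture and QR decoding are left out; a real client would also need
# to sandbox the program instead of exec-ing it blindly like this.)
def run_payload(scanned: str) -> None:
    source = zlib.decompress(base64.b85decode(scanned)).decode("utf-8")
    exec(source, {})  # no sandboxing: purely for illustration

run_payload(payload)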
Media-native programming languages
Modern programming languages are very good at handling strings. Not only do they have built-in representations of strings in the common string "type", they have built-in support (in the language or its standard libraries) for searching within strings, comparing them, slicing them, combining them, and various other useful operations. As a result, most software we use today expects us to enter text data. It speaks the language of "text".
By contrast, modern tools handle images and audio only reluctantly. Images and audio are the native I/O types of the human mind, if you will -- they're much higher-bandwidth, and much closer to "the organs" even if they're farther from "the metal" of the computer.
What if we could build into programming languages the same capabilities for working with rich media as we've built for strings? What if OCR and speech-to-text, or seeking and searching for objects or strings within video, photos, and audio, were all as easy as photo.findAll(:car) or audio.transcribe({ lang: 'en_us' }), built into your compiler? I think it would usher in a whole new age of software tools that let us interact with them in richer, more organic ways. If reading text from an image were as easy as reading text out of a binary buffer, how many more tools would let us take pictures to capture information?
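To make that concrete, here's a hedged sketch of what a media-native type might feel like, faked in Python on top of today's OCR tooling: Pillow and pytesseract are real packages, but the Photo type and its text() method are hypothetical, not any existing language's API.

from PIL import Image   # pip install pillow
import pytesseract      # pip install pytesseract (also needs the tesseract binary)

class Photo:
    """A hypothetical media-native type: images as first-class values."""

    def __init__(self, path: str):
        self.image = Image.open(path)

    def text(self) -> str:
        # The media-native dream: reading text out of a photo is as easy
        # as reading text out of a byte buffer.
        return pytesseract.image_to_string(self.image)

receipt = Photo("receipt.jpg")
print(receipt.text())

The point of the thought experiment is how much ceremony disappears when something like this sits in the standard library instead of behind a third-party OCR stack and an external binary.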
You might say, "this sounds like a huge amount of complexity, Linus! No sane PL would ever do this!" But we've done this for text, because the tradeoffs are worth it -- Go ships out of the box with rich support for full UTF-8 text. This wasn't always the case. C, for example, has no native string type -- C works with bytes and characters, in the same way that current programming languages work with pixels and audio file buffers.
I submit to you: it doesn't have to be this way! We can create a world where we can program with rich visual and sonic information with the same ease with which we work with text. That day can't come quickly enough.