I find this implementation of a dot product in Oak so satisfyingly elegant: sum of elementwise products.
fn dot(xs, ys) sum(zip(xs, ys, fn(x, y) x * y)...)
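For anyone who doesn't read Oak, here's a rough JavaScript analogue of the same idea. The `zip` and `sum` helpers are hand-rolled here (JavaScript has no built-in equivalents), but the shape is the same: elementwise products, then a sum.

```javascript
// zip applies f pairwise across two equal-length arrays.
const zip = (xs, ys, f) => xs.map((x, i) => f(x, ys[i]));
// sum folds an array of numbers down to their total.
const sum = (ns) => ns.reduce((a, b) => a + b, 0);
// dot: sum of elementwise products, mirroring the Oak one-liner.
const dot = (xs, ys) => sum(zip(xs, ys, (x, y) => x * y));

console.log(dot([1, 2, 3], [4, 5, 6])); // 32
```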
Standard library on oaklang.org
Syntax-highlighted source code for Oak's entire standard library is now available on oaklang.org for easy browsing. I think it turned out quite well!
I wrote a little pair of scripts today to download and archive my "Saved" media on Instagram. I first reached for an official API to do this, but it turns out there isn't one (at least, not that I could find in a few minutes). So I decided to just scrape via internal APIs. The full scripts are here on GitHub Gist, though they may stop working at any time, obviously.
My final system ended up being in two parts:
- a "frontend" JavaScript snippet that runs in the browser console on the instagram.com domain, using the browser's stored credentials, to ping Instagram's internal APIs and generate a list of all the image URLs
- a "backend" Oak snippet that runs on my computer, locally, and downloads each image from the list of URLs to a unique filename.
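The backend's job, sketched in JavaScript rather than Oak (the real scripts are in the Gist; `uniqueName` and the example URLs here are my own invention): derive a collision-free local filename for each URL, then fetch each one to disk.

```javascript
// Sketch of the "backend" step: each saved-media URL gets a stable,
// collision-free local filename. An index prefix guards against
// repeated basenames (Instagram CDN URLs often end in the same name).
const uniqueName = (url, i) => {
  const base = new URL(url).pathname.split("/").pop() || "media";
  return `${String(i).padStart(4, "0")}-${base}`;
};

const urls = [
  "https://example.com/p/abc/photo.jpg",
  "https://example.com/p/def/photo.jpg",
];
const names = urls.map(uniqueName);
console.log(names); // ["0000-photo.jpg", "0001-photo.jpg"]
// The real script would then fetch each URL and write it to its name.
```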
Some interesting notes:
- They don't have a rate limit on their internal API (or it's very high, such that my nonstop sequential requests for many minutes never hit it).
- They have an extra layer of request authentication beyond cookies and CSRF: headers like x-ig-www-claim (an HMAC digest?) and x-asbd-id. They don't seem to be message signatures, because I could vary the message without changing these IDs.
- Their primary GraphQL API is quite nice. Queries are referenced using build-time generated hashes, and responses support easy cursor-based pagination.
- Their internal API for media (by carousel, resolution, codec, etc.), just like Reddit's, is kind of a mess, with field names like image_versions2. I'm guessing lots of API churn?
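Cursor-based pagination of the kind I mean usually boils down to a loop like this. This is a generic sketch, not Instagram's actual API: `fetchPage`, the response shape, and the `next_cursor` field are all hypothetical stand-ins.

```javascript
// Generic cursor-pagination loop. fetchPage here is a fake in-memory
// stand-in for a real API call; the field names are made up.
const fakePages = {
  start: { items: [1, 2], next_cursor: "a" },
  a: { items: [3, 4], next_cursor: "b" },
  b: { items: [5], next_cursor: null },
};
const fetchPage = (cursor) => fakePages[cursor];

const fetchAll = () => {
  const all = [];
  let cursor = "start";
  while (cursor != null) {
    const page = fetchPage(cursor);
    all.push(...page.items);
    cursor = page.next_cursor; // null signals the last page
  }
  return all;
};

console.log(fetchAll()); // [1, 2, 3, 4, 5]
```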
After adding some remedial tests for poorly tested parts of the syntax and stdlib, Oak now has 1000 behavior tests written in Oak itself and tested on both native and web runtimes, covering the language itself and all of the standard library's API surface! Feels like a milestone worth celebrating.
A particularly satisfying patch to Oak today, ec3a188a. It simplifies the implementations of the standard library's str.startsWith? and str.endsWith?.
Before, these functions compared the two strings byte by byte, short-circuiting the loop on the first mismatch. This was theoretically more efficient than comparing a whole substring against the given prefix or suffix, because of the possibility of short-circuiting. But in practice, the overhead of the extra VM ops needed to evaluate that iteration negated any gains.
Now, these functions create substrings of the original string that should equal the given prefix or suffix, and do a single, simple string comparison delegated to the underlying runtime. As a bonus, these one-line implementations are very simple and easy on the eyes.
fn startsWith?(s, prefix) s |> slice(0, len(prefix)) = prefix
fn endsWith?(s, suffix) s |> slice(len(s) - len(suffix)) = suffix
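The same slice-and-compare trick, ported to JavaScript for anyone who doesn't read Oak. Since JS strings compare by value, a single === does the work that the underlying runtime does for Oak.

```javascript
// Slice-and-compare, as in the Oak patch: build the substring that
// should equal the prefix/suffix, then do one string comparison.
const startsWith = (s, prefix) => s.slice(0, prefix.length) === prefix;
const endsWith = (s, suffix) => s.slice(s.length - suffix.length) === suffix;

console.log(startsWith("oaklang", "oak")); // true
console.log(endsWith("oaklang", "lang")); // true
console.log(endsWith("oaklang", "oak")); // false
```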
Especially on long inputs, the efficiency gain is significant:
Benchmark 1: oak input.oak (old implementation)
Time (mean ± σ): 3.197 s ± 0.010 s [User: 3.792 s, System: 0.200 s]
Range (min … max): 3.179 s … 3.214 s 10 runs
Benchmark 2: ./oak input.oak (new implementation)
Time (mean ± σ): 2.141 s ± 0.024 s [User: 2.539 s, System: 0.144 s]
Range (min … max): 2.117 s … 2.187 s 10 runs
Summary
'./oak input.oak' ran
1.49 ± 0.02 times faster than 'oak input.oak'
A fun couple-day hack to take a break from working on a text editor — burds.vercel.app
It's a pseudoscientific myth that humans only use 3% of the brain, or whatever, but it's probably true that at any given moment we only think with about 3% of the things we know. Getting that up to somewhere near 90% — or even 50% — will probably have similarly powerful effects.
Earlier today, I spent quite some time building a good implementation of rich text paste in Ligature3/Notation, my current place for notes and written thoughts. Before, I could only realistically paste in a couple of paragraphs at a time. Now, I can select a whole section of a document or an entire blog post, if I wish, and paste it into a new note in my app, and each paragraph will become its own little block, effortlessly, with sub-sections and lists split out into their own sub-lists properly.
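The core of such a paste handler can be sketched as splitting the pasted text on blank lines so each paragraph becomes its own block. This is a heavy simplification of whatever Ligature3 actually does (which also handles sub-sections and lists), and `splitToBlocks` is my own name for it.

```javascript
// A much-simplified sketch of paragraph-splitting paste: each
// blank-line-separated chunk of the pasted text becomes its own block.
const splitToBlocks = (pasted) =>
  pasted
    .split(/\n\s*\n/) // blank lines delimit paragraphs
    .map((p) => p.trim())
    .filter((p) => p !== "");

const pasted = "First paragraph.\n\nSecond paragraph.\n\n\nThird.";
console.log(splitToBlocks(pasted));
// ["First paragraph.", "Second paragraph.", "Third."]
```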
With this, I'm finding myself more compelled to "dump" information into my notes, with mass copy-paste acting as a sort of surrogate "import" feature. It's slightly changed my relationship to this particular app, from a pristinely manicured garden to a mixture of handwritten and copied notes.