weathering software winter
This is a blog post based on a transcript of a talk Devine gave at Handmade Seattle on November 26th, 2022. Watch the video version (youtube/vimeo). The slideshow presentation was made using Adelie.
Thank you to Matt Mascarenhas for providing us with an auto-transcript, it would have taken us ages to put this text together without it, and thank you to Abner Coimbre for organizing this amazing event.
Weathering Software Winter
I work with a little studio called Hundred Rabbits, a small collective of two which operates from a solar-powered sailboat. All of our devices are donated, discarded devices. Our philosophy is that to make fast software, you need slow computers, and we've tried to espouse this as much as we could. We spend our time sailing around and doing experiments with resilience; that covers computers, but we also touch on food security, preservation, and the study of past technologies that could be used today in times of crisis.
We started sailing seven years ago, it took us all around the Pacific. We went through Mexico, French Polynesia, New Zealand, up through Fiji, Japan, and then followed the Russian Coast, brushing by Alaska before returning to Western Canada. Our thinking was that we could keep doing art, music, and video games as we sailed around, but it soon became obvious that all the technology that we took for granted, and that we had with us, was not designed to leave the Western world.
The moment we cast off from Mexico, all of our devices started breaking down. In his 2022 Handmade Seattle opening speech, Abner was making a point about the Roombas starting to fail. If your Roomba fails, you pick up a broom, but on a sailboat you depend on devices working to find your position, knowing the weather, etc. These things have a more direct impact on your survival, and all of that technology is built on the same stack as the Roomba.
Many of the tools that we thought we could rely on broke down, whether it is Apple products, or software that requires subscription services, DRM, etc. As an artist you spend time developing a skill, you become a Photoshop illustrator. When your connection to the internet fails and the software locks up, that skill you thought was yours turns out to be entirely owned by someone else, and can be taken away.
Even though we've been paying for this sort of software for years, the moment you lose the access needed to authenticate yourself, that skill is gone. We didn't expect this, and it scared us.
Nowadays, everything is built on the cloud. While sailing along the US coast in early 2016, we'd stop by conferences to see all the happenings in tech. We would speak with people at the booths, and they would try to sell us their product, but we'd have to stop them mid-sentence to ask, "Does it work offline?" 99% of the time the answer was, "ah, yeah no, sorry. It's on the cloud."
It seemed that no one was building things that we could use. This was a sad realization. I love programming, and for a time it seemed that it was utterly incompatible with our new way of life.
Before I go on any further, I will paint you a picture of what it looks like to try to use modern stacks when away from the shores of the Western world.
Imagine two people in a small sailboat in the tropics, somewhere like the Marquesas, or another island in the South Pacific Ocean. These islands are covered with beautiful lush forests; they require rain and sun, that's all. Then there's us sharing that same space, busy lifting a smartphone in a Ziploc bag up the mast to try and get one bar of signal to update Xcode, which at the time was 11 gigs. We had a stack of cards, each worth two gigabytes of mobile data, but with Xcode you can't resume the download if it fails. We could swap the codes for the cards, and if we did it within 10 seconds, before it detected a timeout, the download would continue. The problem is that if the download is not done by 16:00, the sun is setting, the solar panels aren't charging the batteries anymore, and our laptops are dying. The download is at 7 gigs with three more hours left to go; it won't finish, and we will have spent all that data for nothing.
This wasn't ideal. We tried to look for alternatives...
Note: In 2016, we were publishing games for iOS, we used Xcode, as well as iPhones, and MacBook Pros. Rek used Photoshop to draw, having never been exposed to tools like Krita and GIMP. We knew of Linux back then, but not well enough to consider it. As artists, we didn't think Linux could work for us (evidently, we were wrong). We'd always lived in cities, close to an internet connection. Updating Xcode on a fast and stable home connection was never a problem, we never looked at the size of any download because we didn't have to care. On land, we had seemingly infinite power, and we had bandwidth. When resources became scarce, we had to start focusing on details like download size, and how much sun the day would bring. We were so focused on sailing that first year that we didn't stop to think if it made sense to do iOS development while traveling. When we left Canada we were a bit naive, but we didn't know what we didn't know. Now, in 2023, Rek draws mainly by hand, and processes drawings with GIMP. Both of us use Linux (Manjaro), and we don't own a smartphone. See tools ecosystem for further details on our shaky beginnings.
We noticed that all the software that we had written in the past was gradually becoming unusable. We grew up in Montreal, and many of our friends worked in AAA studios like Ubisoft, making free-to-play games, building projects that had a lifespan of three years. The projects that we made in the past on the Apple stack, Electron, or Unity3D also had a lifespan of three to four years, but games like Super Mario, as well as others produced in that era, are still playable today. We are in a dark age, where game developers and artists spend years building games that are quickly lost to bitrot. We'll never be able to play Scott Pilgrim on the PlayStation again.
Looking online, it seemed that others shared these concerns.
In the above image are four philosophies of digital data preservation; they all have their flaws. Data preservation is a somewhat new field, we don't know what sort of data we'll be able to recover in a hundred years, but from the experiments that we've looked at, it doesn't look good. The BBC had a project where they were trying to mimic one of the longest-lasting books we have, the Domesday Book, written nearly a thousand years ago. The BBC's thinking was: we still have this book today, we can still read it, do we have the technology to make something like this? Is it possible to record our way of life today, so that people in a thousand years can see how we lived? They created the BBC Domesday, a disc that contained music, movies, and scientific papers. Then, about 10 years later, it was unreadable; people had forgotten how to decode and read the disc. It seemed that there was no real way to preserve data long-term, and so we decided to try our own little experiment.
As a disclaimer, all that I am writing now is very naive. I draw, and I make music; when I started doing research I didn't have the vocabulary to find what I was looking for. I didn't know what virtual machines were, I didn't know what compilers were either. I had a vague idea of what programming was. I had written Swift and Objective-C, but had no conception of how they actually tied to the processor. It seemed like I was learning a service, the same way I was learning "to Photoshop". It wasn't like learning a skill; you don't learn to draw when you use Photoshop, you learn how to operate within the confines of someone else's playground, and when that rug is pulled out from under you, there's nothing you can say or do. You never really understood how it worked in the first place, so you can't replicate it.
We eventually stumbled on the words virtual machine. With a Super Mario Bros. NES ROM, I knew I could put it on a computer and run it, I could run it on my phone, I could run it on an old computer, I could run it on a Super Nintendo, I could even run a Super Nintendo emulator running Mario on a Nintendo 64. This seemed a good way of preserving data. We decided not to give up entirely on software, and to see what could be done in that space using virtual machines.
Hardware is extremely cheap, and because it is, it's covering the world with e-waste, all of it there for the taking. Everyone has drawers full of old devices, Super Nintendos, Playstations, Dreamcasts, all of which are considered obsolete; people stopped developing games for them, but we thought we could give a second life to such devices. Looking into how this could be done led us to interesting places. We became interested in seeing how we could repurpose old electronics. There didn't seem to be any competition. If you do like everyone else, you're competing with everyone else. For example, if you're making iOS software today, the marketplace is completely saturated, but if you were to release an Atari game today, it's going to be huge. There's no one giving purpose to these devices anymore. We tried to do something completely left field. To start, our goal was to try and make games for the Nintendo DS.
So looking at a VM from the outside, without having a sort of academic background, the first thing you find is the JVM. I've never written any Java, but I would go around and tell my friends who are actual programmers, "I'm gonna make the JVM! It seems like this would solve our problem!" Their response was that the JVM was so fractalized, and that this wasn't what we were looking for. We found plenty of academic papers with no software written for them.
I dabbled in the Java ecosystem, and I couldn't make heads or tails of it.
I knew that we wanted to make playful little projects that could run on a variety of devices, on a sort of cross platform that was not Electron. Instead of always targeting the new modern thing, we thought we could make a VM, or something other, that could run increasingly further back into the past.
There is an incredible amount of fast computers out there. The first time I made an NES game, I was surprised at how much stuff I could draw on the screen at 60 fps. I had somehow been convinced that modern technology was better, faster, and that all the 8-bit stuff was a solved problem; I thought that we had explored this problem space entirely.
It didn't seem to be the case. There were a lot of ideas in the past that were forgotten, and I made it my mission to explore this. It was COVID, everyone was confined to their homes, it was the perfect time to figure out how computers work. We're going to look into the past to see how people did it, and what ideas were forgotten.
There was a time when computers were super playful, but now they feel cold, and have been weaponized against people. The sort of playfulness that you'd find in Microsoft Bob, I can't really find a parallel for today. I can imagine what Microsoft Bob would look like if made today, it would try to sell you all sorts of shit. I don't think we, as a society, have a system in place that would foster the creation of something as playful as this.
A realization that I had working on something like this was that customizing your hardware, and your software, makes you care.
If you buy something off the shelf, like an iPhone, which you can't change, you're less likely to care for it, and it's going to end up in a drawer. Back then you could go to Radio Shack and get parts for your computer, it was possible to customize it (for example, the Altair), and then you could know it in its entirety. Devices that were customized, or built from near-scratch, are still loved, but as for your old iPhone 6, you don't know where it is, and even if you did it's probably unusable. Devices like the iPhone 6 can't be repaired, they are designed to fail in ways that are inscrutable.
Instead of trying to make things as broadly accessible as possible, we thought we'd try to see what we could do if we designed it for just one person, a sort of personal computing that was not designed to scale.
I can see the appeal of languages like Rust, though I won't be learning it, but if you're convinced that you can use computers to solve problems, then obviously this will not be relevant to you. I think that if you are forced to use a computer, there is no way that it can be playful. There was a writer who said, "if you're forced to play, you can't play."
I don't have a smartphone. Walking in Seattle I saw the streets lined with Lime scooters; to even see the menu that lets you use one, you need to scan a QR code. There's a layer being forced onto reality that requires people to use this technology.
This is not what I'm talking about here. What we're trying to do is something that is closer to your personal computer, something that would be designed and tailored to help you play.
When we began looking at what VMs there were, we looked into the past, and that brought us to Smalltalk. I read this book called What the Dormouse Said, and it talked about the history of Stanford, and how people had this utopian idea of what computers could be. This is way before I even became interested in computers, learning what their vision was then made me optimistic about what could be done.
That was when I was first exposed to the idea of a virtual machine, or byte code. As I mentioned before, looking up information online without the vocabulary you need to find things is really difficult. There was no way I could go around learning about personal computing without spending some time learning Smalltalk.
Obviously, the moment that you start going down that route, you will have people telling you that you really have to try Lisp machines. I spent time learning about Symbolics and LVP. I learned about a sort of system where the whole thing could be inspected, a system that was trying to do computing in a very personal way.
Nowadays, it's akin to the browser, where you can go on a website that you like, and right click to inspect. It is less and less the case now, but for a while you could inspect websites to see how they were put together. This was empowering, emblematic of the era.
Oberon doesn't run on a VM at all, but it's relevant because Niklaus Wirth wrote an entire operating system that came with a book.
The Oberon book is really nice, especially if you come into it with a clear mind, without prior knowledge or expectation of what programming is. The book explains how to build an operating system from scratch. It uses a Pascal-like language, which is very easy to read, more beautiful than other ALGOL-family languages. One of the first things that it teaches you is how to draw pictures, which I thought was kind of interesting as someone who's very visually-inclined.
In the book Niklaus Wirth mentions the P-machine. I was beginning to form an idea of what virtual machines were, and he just briefly mentions it: "oh! By the way, I wrote this Pascal compiler targeting a virtual machine." I was like, "a language can target a virtual machine?! What?" And the reason it was so easy for him to port the compiler between platforms is that its opcodes were extremely simple, and very few in number.
I thought then, this is a way to do data preservation that is appealing to me.
I learned Pascal.
Pascal is a beautiful language. In the above image, I am running it in an emulator for the Macintosh. I loved that I could have a whole operating system running in a small window that wasn't Linux, that wasn't some big QEMU image. It was a little self-contained system, which I could destroy, or start over with really quickly. I could run it and do 3D at 60 fps by running the emulator at 128 times its speed, and thought that was cool, a testament to how fast computers are nowadays.
That led me to learn about the history of C.
I migrated my whole platform to Plan 9. Plan 9 has its own C compiler, and it's not quite C89, it's its own thing, a no-concessions system.
The people who built this weren't planning to make money with it, you can really tell that it's a product of love. That made me optimistic about what personal computing could be.
Plan 9 didn't run on a VM, but it inspired another system. The lessons that they learned during Plan 9 led to the building of something called Inferno, which runs on a VM.
I think I was starting to understand what VMs were. If this entire operating system runs on a VM, this is what I wanna be doing.
The scale of something like Inferno is quite big, but in an afternoon I could read through the whole C code for this virtual machine. The above image shows the opcodes that it had.
It was in part inspired by the actual hardware that ran the Apple Newton.
I could run it in an afternoon, and know exactly how it ran. I would look at the compiler that targeted the Dis virtual machine, and could see how they reduced the problem-space to a limited number of operations.
The people who worked really hard in the past to get away from time-sharing systems would laugh, or they would cry, at how we got tricked into falling back into that situation where computing is something done behind a wall. You have a terminal, but that's where your power ends.
I wanted a way of doing computers that nobody could take away from me.
I would read about people who would write entire operating systems in a weekend, but I couldn't even map this idea onto the things I was seeing; a bridge was missing.
The modern stack doesn't really work for us, it doesn't fit the limitations that we have on the boat. We have 180 watts of solar. We just spent the whole summer with two 6-volt batteries, which is very small. When you're going down that route, at every turn people are telling you to just put up more solar panels, or to buy more batteries. That is such a modern way of solving your problem. In reality, technology like this (especially high-tech) rarely solves problems. It creates a lot of other problems, which on a sailboat is very immediate. Putting up more solar would mean more windage, more chance of things flying off and cutting our limbs. More batteries would mean the boat would be heavier, and it would stop us from being able to run away from storms.
The limits of a sailboat give us a space for creativity, and these limitations became a sort of playground for us.
I am not the sort of programmer who could build Plan 9, or Oberon, or Lisp machines, but I knew I could write simple NES games and port them. I couldn't write an NES game on the NES itself though, and that was such a shame. I wanted something that was completely bootstrapped.
So I looked at the 6502. The Commodore 64 was extremely complex to emulate, more complex than I could grasp. It was at the limit of what I think a single person could understand. It seemed like a simple system, it was just a box, but writing an emulator for it was more than a weekend project. I was looking for something that I could nail in a single weekend.
6502 has a lot of mnemonics. When trying to implement the 6502 as a VM, it is possible to make a naive implementation in a week, or in a weekend if you don't do anything else. My knowledge of C is very limited, but luckily I had help. I wondered if this instruction set could be reduced further...
It got me thinking about complexity, and about what simplicity means. Kolmogorov, the mathematician, defined the complexity of a string as the length of the shortest program that can generate it. I really like that way of looking at complexity and simplicity, because nowadays it's sort of convoluted. If you don't really know about hardware, it seems that everyone's trying to make the programmer comfortable, and not focusing on making an actually fast product.
Most of the people I know who are hobbyist programmers use type systems, memory safety, all sorts of training wheels (I'm kidding here, if you're a large team, please do use type systems for god's sake). When I work on my own, I want to reduce the problem space to as little a thing as I can make it. I don't like programming that much, and in the future I'm gonna hate it probably even more, so I want my future self to be able to reimplement my entire system in at most a weekend.
So where does that bring me? 6502 is beyond the scope of what I was hoping to do, because having done it twice already, I don't think I'll have it in me to do it again.
Looking back at that same era with Bell Labs, this was a time when people would build paper computers to teach kids to understand what a program counter was, how things moved around in the program, and how to navigate byte code.
No one can take a paper computer away from you. As long as you have paper and pen you can still solve problems, albeit slowly, and more painfully, but this is a form of computing that can be easily ported.
When I was looking at the BBC's Domesday book, I noticed that Alan Kay (Smalltalk) also had a project that was in line with what I was trying to do. He thought, "Smalltalk is a VM, how small a VM could possibly run Smalltalk-72?"
He wrote a paper called The Cuneiform Tablets of 2015, and he had all these quotable lines about computers that went viral. He said things like, "Lisp was the Maxwell's equations of programming." As a minimalist geek I was glad to hear this, it's what I wanted, but then looking into Lisp, I thought that a language that creates garbage by design wasn't really what I was looking for. The Chifir VM from the paper was a bit more like it, but it was a register machine, and I didn't see how that mapped to Smalltalk.
So the Chifir VM was not exactly what I was looking for, but there are plenty of other ways of doing computing out there. I quickly fell onto one-instruction set computers, like SUBLEQ. I thought it was a disgusting tar pit, but I could implement it in 15 minutes, and it would be really kind to my future self to make a system that could run on something that I could implement in 15 minutes.
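To give a sense of how little there is to SUBLEQ, here is a rough sketch of an interpreter in C; the tiny example program, which just subtracts one cell from another and halts, is made up purely for illustration:

```c
#include <stdio.h>

/* SUBLEQ: the whole machine is one instruction, "subtract and branch
   if less than or equal to zero". Each instruction is three cells a, b, c. */
int main(void) {
    /* Hypothetical program at cells 0..2: a=3, b=4, c=-1.
       It does mem[4] -= mem[3]; the result is <= 0, so it branches to -1
       and halts. Cells 3 and 4 hold the data 9 and 5. */
    int mem[16] = {3, 4, -1, 9, 5};
    int pc = 0;
    while (pc >= 0) {
        int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];                 /* the one and only operation */
        pc = (mem[b] <= 0) ? c : pc + 3;  /* branch or fall through */
    }
    printf("mem[4] = %d\n", mem[4]);      /* prints -4 */
    return 0;
}
```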
The problem was that the toolchain required to write anything that could actually have any purpose for it was immense. There were people who wrote C compilers targeting SUBLEQ, but this shifts the whole complexity back to the compiler, and that doesn't solve my problem.
There were computing paradigms which I absolutely loved, so pure, so beautiful, but at the same time they mapped so poorly to silicon that they didn't really work for me.
Thue (one of my favorites) is an esoteric programming language that has a single operation: string replacement. You have an accumulator, which is a series of characters, and every rule, one per line, is a replacement of what's on the left with what's on the right. It was very slow, but extremely powerful, and could be implemented in 30 minutes.
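As a rough illustration of that single mechanic, here is a sketch in C of a string-rewriting loop in the same spirit; the rule and the starting string are made up, and this ignores Thue's actual syntax and its nondeterministic rule selection:

```c
#include <stdio.h>
#include <string.h>

/* One accumulator string, plus replacement rules applied until none match. */
int main(void) {
    char s[64] = "aaab";
    const char *lhs[] = {"aab"}, *rhs[] = {"b"};    /* rule: "aab" -> "b" */
    int rules = 1, changed = 1;
    while (changed) {
        changed = 0;
        for (int i = 0; i < rules; i++) {
            char *hit = strstr(s, lhs[i]);          /* find what's on the left */
            if (hit) {
                char tail[64];
                strcpy(tail, hit + strlen(lhs[i])); /* save what follows */
                strcpy(hit, rhs[i]);                /* write what's on the right */
                strcat(hit, tail);                  /* re-attach the tail */
                changed = 1;
                break;
            }
        }
    }
    printf("%s\n", s);  /* "aaab" rewrites to "ab" */
    return 0;
}
```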
FRACTRAN was another OISC system. It has one operation: multiply. Its primitives were not bytes, or shorts, but fractions. It used prime encoding, which is one of the most beautiful mathematical concepts that I know of.
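For a flavour of it, here is a sketch in C of the entire FRACTRAN evaluation loop, running the classic one-fraction adder; the program and the starting value are only an example:

```c
#include <stdio.h>

int main(void) {
    /* The classic adder: starting from n = 2^a * 3^b, the single
       fraction 3/2 leaves n = 3^(a+b) when the program halts. */
    long num[] = {3}, den[] = {2};
    int count = 1;
    long n = 2L * 2 * 2 * 3 * 3;                 /* a = 3, b = 2 */
    for (;;) {
        int applied = 0;
        for (int i = 0; i < count; i++) {
            if ((n * num[i]) % den[i] == 0) {    /* does n * f stay an integer? */
                n = n * num[i] / den[i];         /* the one operation: multiply */
                applied = 1;
                break;
            }
        }
        if (!applied) break;                     /* no fraction applies: halt */
    }
    printf("n = %ld\n", n);                      /* 243 = 3^5, so a + b = 5 */
    return 0;
}
```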
For a while I thought I had found my bedrock, I was going to use fractions to make computers. The only issue... is that it maps poorly to silicon.
SKI calculus was beautiful, it has this mathematical elegance to it. I went down that rabbit hole, and I barely emerged back alive.
I recommend everyone take a Lisp detour, and then learn SKI, read Smullyan, and read about others like him who don't care about making products, but really want to explore software in a way that is very creative and artistic. I almost made this my bedrock, except that at the VM layer garbage collection was a bad idea.
Brainfuck has 8 instructions; implementing it would take 45 minutes, maybe an hour.
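As a rough sketch of what that hour of work looks like, here is a small Brainfuck interpreter in C; the sample program, which just prints 'A', is only there to exercise it:

```c
#include <stdio.h>

int main(void) {
    /* Computes 8 * 8 + 1 = 65 in the second cell and prints it as 'A'. */
    const char *prog = "++++++++[>++++++++<-]>+.";
    unsigned char tape[30000] = {0};
    unsigned char *p = tape;
    for (const char *ip = prog; *ip; ip++) {
        switch (*ip) {
        case '>': p++;            break;
        case '<': p--;            break;
        case '+': (*p)++;         break;
        case '-': (*p)--;         break;
        case '.': putchar(*p);    break;
        case ',': *p = getchar(); break;
        case '[':                 /* cell is 0: skip forward to matching ']' */
            if (*p == 0)
                for (int depth = 1; depth; ) {
                    ip++;
                    if (*ip == '[') depth++;
                    else if (*ip == ']') depth--;
                }
            break;
        case ']':                 /* cell is non-zero: jump back to matching '[' */
            if (*p != 0)
                for (int depth = 1; depth; ) {
                    ip--;
                    if (*ip == ']') depth++;
                    else if (*ip == '[') depth--;
                }
            break;
        }
    }
    return 0;
}
```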
We are going up the abstraction chain here, and the pattern I was seeing was that the fewer mnemonics you reduced a system to, the slower it got. Brainfuck was surprisingly fast, you could render a Mandelbrot on a system that was quite small to implement, but that wasn't the sort of programming I wanted to do. It led me to another system...
Chuck Moore never came up in my books about Stanford, he must have been some sort of outsider. FPGAs are not something I was very familiar with, but people were building experimental computing systems with them. It didn't matter, it was goo, you could shape it in any way you wanted.
Someone wrote an FPGA system called the J1, a Forth CPU. It had very few opcodes, but had one of the most beautiful mappings of bytecode I had ever seen. The stack machine primitives were implemented so that you could combine them with each other; with weird combinations of bit toggles in a word you could get really interesting esoteric stack manipulators.
I was smitten.
It's a 16-bit Forth system; the biggest literal you can have in one of those words is around 32k. The arithmetic was really beautiful, but the maximum program the whole system could run was four kilobytes.
What I had in mind was slightly more than four kilobytes, but not much more. A fully addressable 64 kilobytes would be really nice. I didn't want the complexity of the 6502.
The CHIP-8 is an interesting system that has a rich, vibrant community of people who build things for it during game jams.
The spec is quite simple, it fits on maybe two pages of paper, side by side. It feels like the paper computer in ways that I like, but it was not designed to be a general-purpose VM. It is meant for building things on the scale of Pong. The keyboard is like a hex keypad with just numbers on it. My plan was to build something that could build itself from within, so I'd need a keyboard and a cursor; a weird keypad would not cut it for me.
Machine Forth was something I stumbled upon afterwards. Chuck Moore wrote Forth, and then he thought it too complicated. I thought, "what do you mean it's complicated? There's like no syntax, no nothing. It's a space-delimited language. Implementing a Forth system is really fast." Chuck Moore, being himself, thought that he could do without all this 'cruft'. He reduced it even further to Machine Forth, which has 32, or 24 opcodes.
I kept seeing Forth over and over again, and for a while couldn't really grasp it, or see the beauty in it. If you've just come from Lisp, you get whiplash when you fall into Forth. It has a similar beauty. Lisp would be prefix, Forth would be postfix. Following that mindset, you look at other languages and think: you could be anything, you could do anything, and you chose to be ALGOL, why?
Going back and forth between Forth and Lisp, I couldn't make up my mind, but I kept seeing people who would fall in love with Forth, but were really bad at selling it. With Lisp it's really hard to find any code that you can actually copy/paste into a project, because everyone's using weird macros and libraries, and nothing is portable.
Forth had a different problem: everyone I'd talk to would say Forth was the best, and that I should really use it, but then I'd ask them questions like, "how do I calculate the distance between two points?" I walked away from those interactions thinking, well, it seems like a lot of people like Forth, but don't write it. It was hard to find any code at all. I know Forth is not portable, but it was hard to find actual use-cases of Forth that weren't someone trying to learn Forth by implementing Forth and then moving back to Rust.
The way Forth works is really simple: you put values on a stack, and when the interpreter hits a word, it pops its arguments from the stack and pushes the result. It doesn't have precedence rules.
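As a rough illustration of the mechanic (not Forth itself, just postfix evaluation), here is a small evaluator sketched in C; the expression is made up:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Numbers are pushed; a word pops its arguments and pushes its result.
   No precedence, no parentheses. */
int stack[64], sp = 0;
void push(int v) { stack[sp++] = v; }
int  pop(void)   { return stack[--sp]; }

int main(void) {
    char src[] = "3 4 + 2 *";   /* (3 + 4) * 2 */
    for (char *tok = strtok(src, " "); tok; tok = strtok(NULL, " ")) {
        if (!strcmp(tok, "+"))      { int b = pop(), a = pop(); push(a + b); }
        else if (!strcmp(tok, "*")) { int b = pop(), a = pop(); push(a * b); }
        else                        push(atoi(tok));
    }
    printf("%d\n", pop());      /* 14 */
    return 0;
}
```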
It seemed like a really good bedrock for the things I wanted to do.
Machine Forth was going to be something close to the metal. Forth at its core is already pretty close to assembly, but I wanted something that mapped exactly to assembly, something where one token would be one byte, or one short.
So after a while I picked a bunch of opcodes, and I showed them to my friends, and they were like, "these are bad." They were people I trust. If I'd been on my own, I would have likely given up and not done it, or would still be writing JavaScript. They were extremely patient with me, they understood what I was trying to do. People like Sigrid, Alderwick, and Andrew had infinite patience for my stupid questions, they would tell me, "I wouldn't do it this way, but here's something that you could use to improve your system."
In the end, I ended up with a napkin definition of a system that I kind of liked. My goal throughout this whole process was to make a system that could fit on a t-shirt. I might lose my computer, especially on a sailboat, and I don't want to have to keep and preserve piles of documentation.
When I started sailing I was doing a bit of Ruby; when the time came to gather all the Ruby documentation that I would need to be self-sufficient, the amount was overwhelming.
Being at sea for 50 days straight (see busy doing nothing), we don't have access to Stack Overflow, and our questions remain unanswered; now imagine if the site was gone forever tomorrow. At sea, without internet, if I wanted to look up how to fill a polygon, I couldn't do it, but what if I had printed it? There are languages that make this more accessible, and others that make it really hard. Not having a background in C, I found it really difficult to go off sailing without my C documentation, but it is very extensive, and very clunky.
My thinking is that I'm probably going to lose the book at some point in the future, but if my future self had that t-shirt they'd probably be okay. I tried to reduce everything that I could do to fit onto a t-shirt; that's as much bandwidth as I afforded myself.
If you're familiar with the 6502, on the left (above image) it just looks like a vertical series of instructions. The little language I made is quite similar. You can map it one-to-one, and the resulting size of the bytecode is pretty much the same.
You might be familiar with this sort of assembly language, reading as a big vertical column of characters, but I wanted to use the beauty of Forth and make a language that would be verbose in a way that resembles writing a little bit.
Every morning when I wake up, and I open my programs, I don't hate myself for not leaving comments because the code itself is a bit more readable and self-documenting.
The first thing I tried to do was to make an assembler in that language. It was easy. The language is so simple that the assembler itself is also going to be simple. I generated an x86 build of the VM that could run its own assembler, written in that language.
The assembler is 3000 bytes.
It's nothing like a computer, but it will run bytecode. I felt like I had reduced the problems that I was solving with computers down to 32 things, and these 32 things I could probably implement in FRACTRAN or SUBLEQ, or other more minimal computing systems.
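To give a sense of the shape of such a machine, here is a sketch in C of the fetch-and-dispatch loop a tiny stack VM reduces to; the opcode names and numbering are hypothetical, not the actual instruction set:

```c
#include <stdint.h>
#include <stdio.h>

enum { OP_BRK, OP_LIT, OP_ADD, OP_MUL };  /* made-up subset of opcodes */

int main(void) {
    /* Bytecode for (3 + 4) * 2, assembled by hand. */
    uint8_t mem[256] = {OP_LIT, 3, OP_LIT, 4, OP_ADD, OP_LIT, 2, OP_MUL, OP_BRK};
    uint8_t stack[256], sp = 0, pc = 0;
    for (;;) {
        switch (mem[pc++]) {
        case OP_BRK: printf("%d\n", stack[sp - 1]); return 0;  /* 14 */
        case OP_LIT: stack[sp++] = mem[pc++];                  break;
        case OP_ADD: sp--; stack[sp - 1] += stack[sp];         break;
        case OP_MUL: sp--; stack[sp - 1] *= stack[sp];         break;
        }
    }
}
```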
It taught me that when you take the little functions you would use in JavaScript, like converting a string into an int, and write them in assembly yourself, it dispels the magic. Sometimes people say they love magic, but in truth they don't. I don't like magic at all, especially not when things start to break.
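As a small example of what dispelling that magic looks like, here is the string-to-integer routine written out by hand in C (a sketch of the idea, not the assembly version itself):

```c
#include <stdio.h>

/* Walk the digits, multiply the accumulator by ten, add the next digit. */
int str_to_int(const char *s) {
    int n = 0;
    while (*s >= '0' && *s <= '9')
        n = n * 10 + (*s++ - '0');
    return n;
}

int main(void) {
    printf("%d\n", str_to_int("1986"));  /* 1986 */
    return 0;
}
```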
Douglas Adams said that, "The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at and repair." This is the situation I was in with programming. I wanted to understand the whole thing, and to have it be inspectable.
From there I started to build drawing routines, etc. I started to make implementations of Basic, of games, and of other emulators.
I have been writing in this little language for two years.
In his sit-down with Abner Coimbre for the Handmade Seattle 2022 opening, Robert Nystrom was encouraging everyone to make their own programming language and their own system. Many of the conference attendees share this view, but in other circles it is generally frowned upon. The first time I mentioned it to anyone I was laughed out of the room. I liked to draw, and people were thinking, "what do you even know about computers?"
I would love to see everyone's programming language, to meet someone new and to ask them: "show me how many opcodes you have."
It is telling about a person's mind.
We built this little system that is tailored to host the games we make. It'll be alien to anyone else, and that's okay; we are not trying to sell you on this idea. What we want is for everyone to try and make their own personal computer, instead of piggybacking on someone else's idea that's cluttered with artifacts. I don't think we've even begun to scratch the surface of what can be done with this.
A lot of people will try to tell you that we've tried everything with 8-bit, but by the time the NES was out, most of the game genres that we have today didn't exist. There are immense amounts of space left to explore, but now everyone has moved on to VR, AR, or whatever comes next.
I made this entire talk in black and white, and I could have done a system in black and white too, but I made it in four colors. Imagine how much power you have with four colors.
One-bit graphics can be totally evocative; things don't have to be visually busy all the time. I think this maximalism of 'I need all these features' and 'I need to be doing this and that' is exhausting. Learning to live without floating point is actually kind of nice. There is beauty in really simple systems, and trying to always scale things to fit everyone's usage is foolish.
As a closing remark, I want to say that I don't think that my attempt is the best solution to data preservation, I'm not even sure if I'm gonna be able to use that system in five years, but at least it's one attempt at trying to preserve things.
Permacomputing is inspired by permaculture; its goal is to build resilience. The resilience of permaculture comes from trying different ideas, and seeing what sticks. If we all jump on the same language and the same ecosystem, it becomes really fragile: one individual can just buy the whole thing, and then you're left with a system that was never truly yours.
Thank you for reading.