The great works of human civilization can last for centuries—but software often decays in just a tiny fraction of that time. How much should this concern us in this increasingly-digital age? And as software creators, what can we do about it? Adam and Mark discuss the durability of papyrus vs CD-Rs vs the cloud; open-source Quake and remix culture; flat file formats; and digital preservation efforts like The Internet Archive and MAME. Plus: sometimes you just have to draw the rest of the owl.
00:00:00 - Speaker 1: Which, by the way, is something that’s a little bit unique to digital systems versus classic analog systems. You know, if your wrench is rusty, it doesn’t work as well, but it still basically works as a wrench, whereas if you have one bit off in your software, it just crashes. You know, you’re out of luck.
00:00:21 - Speaker 2: Hello and welcome to Metamuse. Muse is a tool for thought on iPad and Mac. But this podcast isn’t about Muse the product, it’s about Muse the company and the small team behind it. I’m Adam Wiggins here with my colleague Mark McGranaghan. Hey, Adam. Mark, you were giving us a very interesting little workshop at a team summit recently about the use of iPads in aviation.
00:00:44 - Speaker 1: Yeah, so aviation is one of the most interesting and powerful use cases I’ve come across for iPads in the wild.
It’s so powerful and important that folks are willing to spend $100, $200, $300 a year for high-end aviation-related iPad software. So there’s something right going on there, no pun intended.
And as I’ve been exploring that world, there’s a very interesting contrast and sort of technology shear between these super shiny iPads and this new software that’s being updated constantly, and the very old general aviation aircraft you tend to see out there.
Think of the Cessna from the 60s, which, by the way, is basically exactly the same as it was 60 or 70 years ago.
And then they’re being flown with these iPads from 1 to 2 years ago, and it’s very interesting to compare and contrast those worlds, and it led into this topic today actually because we were noticing that longevity of the aircraft versus the almost ephemerality of the iPads and the software and how much churn there seems to be in that world.
So we want to dig into it on the show.
00:01:51 - Speaker 2: So a Cessna, which is a kind of small private plane, is an extremely complex piece of technology, and also one that is used in very high-stakes situations, i.e. if it fails, you fall out of the sky and die. It has very complex controls as well, but those are all, I guess analog is the right name for it. Again, they look the same as they did in the 60s, even the new ones built today versus the ones built in the 60s. Essentially, you need to maintain them, replace parts, and upgrade them to comply with newer aviation regulations, but again, the underlying technology hasn’t changed much. And that’s so wildly different from the world of not just the iPad, but software and the internet in general, where change happens at an incredible pace, and in fact that’s probably desirable there. On that, what’s the piece of software that’s most commonly used by pilots?
00:02:44 - Speaker 1: ForeFlight is the most common one.
00:02:46 - Speaker 2: So that’s got maps, it’s got weather, it’s got flight routes, it’s got locations of other planes. All of this stuff is presumably being downloaded or even streamed in through APIs. It’s all very real-timey, current information, and you want that, in addition to just keeping up with all the new capabilities of the iPad. So maybe that separation is nice: you get the benefit of the really fast-moving software and internet world on this device that’s strapped to the pilot’s knee, but it’s completely decoupled from the safety- and reliability-oriented core instruments that are built into the plane.
00:03:24 - Speaker 1: Yeah, researching this reminded me a lot of navigation in cars. So my experience has always been that if you see a car that has built-in navigation, it’s always going to be bad, because it was designed 2 to 4 years ago and it wasn’t designed by a software company. Everyone just wants to use their iPhone to navigate, and they want to be able to plug it in and have their iPhone apps displayed in the car. That’s a good way of embracing the reality that there is some shear between those two layers in terms of how fast the technology tends to evolve.
00:03:54 - Speaker 2: So then our topic today is software longevity, and I think you can slice this two ways. One is the software itself and how long that lasts or how durable that is, and then you can also cut it the other way, which is, yeah, software is eating the world or is invading everything from, you know, toasters to cars, and how does software’s dynamism impact the longevity of everything else as it creeps its way into the rest of our world. But as always, I like to start at the very beginning. So I guess first I have to ask what it means to you to talk about something being long-lived or having longevity, whether it’s software or a plane or something else.
00:04:35 - Speaker 1: Yeah, I think it’s the software being able to serve its stated purpose, which sounds very straightforward but has several important constituent parts. First of all, you actually need the software; you know, sometimes we just lose it. It needs to run, it needs to run correctly, it needs to have access to the relevant data, it needs to have access to the relevant pieces of the outside world in terms of APIs, and it needs the appropriate substrate to run on and interact with. And there are probably other pieces that we could come up with. But there are a lot of moving pieces that go into the software actually serving its original end goal.
00:05:07 - Speaker 2: For me, thinking about that word longevity, I tend to think about what is long, I guess, what span of time counts as long, and of course it depends a lot on what you’re talking about.
So if you’re talking about all of society, culture, humanity, then perhaps you’re talking in the hundreds or thousands of years. So for example, “the long now” is a phrase that we used in the local-first paper, and we tend to use it internally on our team to refer to longer-term thinking.
This comes from the Long Now Foundation and the Long Now clock project, which is sort of this idea to build a clock that can keep time for, I think it’s 10,000 years. And it’s basically an art project, but it’s designed to inspire us to think longer term, and that can obviously connect to things about climate change or human culture, and since humans are naturally inclined to think probably pretty short term, actually really short term, we think about the day that’s ahead or the week that’s ahead of us, maybe at most, the year that’s ahead of us, we don’t tend to think in 10s or hundreds or thousands of years.
So that’s maybe a society-level thing. For individuals, thinking about software that impacts our lives as individuals, I think a human lifespan actually is a pretty good chunk of time to compare something to.
One interesting subreddit that I stumbled across years ago and still subscribe to is called Buy It For Life, and it’s essentially people posting photos or anecdotes of products they purchased. They’re often something like a cast iron pan handed down by their grandfather, or a pair of work gloves they bought 30 years ago that are still working just as well as the day they were purchased. And I think implicit in that is that there’s some kind of inherent beauty or virtue in something that has this long-lasting value, versus something that’s more flash in the pan.
And it’s interesting for me to try to tease apart the pragmatic aspect versus that inherent beauty. Again, it feels to me like a virtue, but I’m trying to dig a layer deeper and see if there’s something practical that drives that. Maybe there isn’t; maybe it just comes from a place of it seeming right to me that the lifespan of a product you purchase could be measured as a meaningful portion of a human life, that it’s not thrown away in a month or even a year.
00:07:32 - Speaker 1: Yeah, and I think we can come up with a few good reasons why longevity is valuable.
The zeroth reason would just be economic: depending on how long-lived a product is, it has different economics.
On the one extreme you have consumables like toothpaste: you use your toothpaste and it’s gone forever. On the other extreme, you might have extremely durable things like stone tablets and masonry construction that can last hundreds of years. And in the middle you have the classic capital goods and durable goods, things like really nice hand tools or a really well-maintained car that you expect to last at least several decades, potentially longer, like you were saying, a lifetime or more. And the economics of those things are all very different; this goes back a little bit to the pricing podcast that we had a while ago. Another thing is just continuity: for example, if you run a business on a piece of software, or you run your own creative process on a piece of software, there are real costs to churn in that. And another thing would be preserving history, having access to the past. I think this is especially important with software, because it’s very easy to lose that, both in the sense of the software itself and the data that it was manipulating.
00:08:36 - Speaker 2: And that side of it makes me think of this emergent field of digital preservation. Archive.org is probably one of the biggest players there; they’re doing incredible work to save copies of websites, but also product manuals and old video games, and indeed other kinds of software. And maybe that brings us, when you do leave the world of durable goods, to the world of information and information longevity.
00:09:02 - Speaker 1: Yeah, and one other point I want to add related to information is, I think there’s a dimension of longevity which is, and I’m going to make up a word here, roll-over-ability: the ability to buy or get a new version of something that rolls over your previous state. So for example, maybe you rebind a book, and in that way roll over the pages, or you can put a new engine and transmission in a truck and roll over the truck frame. But some stuff can’t be rolled over. Classically, black-box data in some opaque binary format: once that software is gone, you’re basically stuck. So I think that ability to roll over is an important dimension, even if it’s separate from a single instance having longevity. Hm.
00:09:42 - Speaker 2: Yeah, well, information has this particular property that it exists on a substrate or medium, something physical, atoms, but in the end the information is not the physical thing.
So with the book rebinding example, you can either rebind the same book, or in some cases just transcribe it completely, and as long as it’s a correct and accurate copy, the information survives.
Older books are incredible artifacts: maybe they have margin notes, and something about how the pages were made carries some history.
They are artifacts and objects in their own right, but ultimately the contents, the words on the page, are probably what we care about. And so on one hand, information technology has actually been getting worse in the sense of the durability of the underlying thing, right? Stone tablets were very durable, papyrus and later paper were less so, and then we go into the digital world and it’s actually vastly less so; everything from CD-Rs to cloud storage, the failure rate is pretty high. But because it is so easy and cheap to copy those bits from one place to another, to replicate them, you potentially have the ability to roll them forward as far as it’s worth your while to do so.
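What makes that roll-forward possible is that a digital copy can be verified bit-for-bit, which paper never could. As a rough illustration, not any particular archive’s tooling, copying data onto a new medium and proving the copy is faithful might look like this in Python:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Content hash of a file, used to prove a copy is bit-identical."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def roll_forward(src: Path, dst: Path) -> str:
    """Copy src to dst (e.g. failing old disk -> new disk) and verify.

    If the hashes match, the information survived the move exactly,
    regardless of what physical medium it now lives on."""
    shutil.copyfile(src, dst)
    src_hash, dst_hash = sha256_of(src), sha256_of(dst)
    if src_hash != dst_hash:
        raise IOError("copy corrupted in transit")
    return dst_hash
```

Repeating this every time a medium nears end of life is, in miniature, the preservation strategy: the bits outlive every individual disk.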
Right. Then if we come to software as a special class of information, and it is information, but it’s highly dynamic, I think that points to some of the particular challenges with it. I guess that would be the first category I mentioned at the beginning, which is the software itself and its ability to be long-lived.
And I think some of this is cultural, you know, the tech industry is dynamic, it thrives on change, that’s sort of the nature of it and many things that are good about software and the internet and the tech world come from that willingness to embrace the new and almost an endless seeking of novelty.
But the downside is, yeah, you get something that is not just an upgrade treadmill; it very much can be the case that the lifetime of a product, or of data, or of any particular piece of software, is measured in years, and only a small number of years, which is comparatively just a very, very short time span. But I guess one question is, why is that? Why should software default to being short-lived? Maybe it’s just that culturally, the tech world doesn’t do the buy-it-for-life, built-to-last thing, but I feel there’s more to it than that. I feel like there’s something inherent to the dynamic, information nature of software that makes it hard for it to be long-lived.
00:12:11 - Speaker 1: Yes, I think that’s true.
There are a lot of things that need to go right for software to work, and you can see it in fact as something that’s strictly harder than preserving simple textual information, for the following reason.
You have at least that problem to start, because you have the source code. Then you need to preserve information about the dependencies, and you potentially have the compiled artifact. And then you have the whole runtime around it, very broadly defined: the computer, the APIs, maybe even the data. It’s just a very broad, multi-dimensional, and in many cases ill-specified state space, and if you don’t get your thing exactly right in that state space, it doesn’t work at all. Which, by the way, is something that’s a little bit unique to digital systems versus classic analog systems: if your wrench is rusty, it doesn’t work as well, but it still basically works as a wrench, whereas if you have one bit off in your software, it just crashes; you’re out of luck. It’s very high-dimensional and it’s very sensitive to errors in preserving the environment.
00:13:15 - Speaker 2: I think it’s tempting to think of even a binary artifact as self-contained.
So here you’ve got a compiled executable, you can download it, you don’t need the dependencies, you don’t need the compiler chain, and I think there’s the feeling that shouldn’t that just kind of work forever, but of course it’s as you said, it sits within this context of the system.
If it’s accessing files on your file system, it’s calling out to APIs on the network, it’s accessing APIs in the operating system. Even something as simple as it making assumptions about what kind of hardware exists, keyboard and mouse being standard input devices, and then you go onto a touch device and those aren’t there. And so over time those fundamental assumptions and APIs and the system change around it. And I think that’s gotten dramatically more so with the internet. Take even something as simple as a static website that has a Google Maps embed: now a big portion of that website, this big pane, depends on that exact integration with the service, and on that service still being online. And so I think the one-off binary build that does keep working makes some sense for games, which don’t have as many hooks. Let’s say a single-player game, not some complicated cooperative thing: it tends to have very simple inputs and outputs. It displays things on the screen, maybe it needs to save a high score file to disk. As opposed to productivity software, with all those little integration points: drag and drop, how you open a thing in the browser, all the different APIs you use to be a good citizen on the system. And those change all the time. They should, with an operating system that’s growing and improving and evolving, but then that means the apps have to keep up, and if they don’t, they fall out of date and eventually either become somewhat irrelevant or, more commonly, just stop working.
00:15:05 - Speaker 1: Yeah, I think this idea of the complexity of the surface area and the APIs is very important.
You mentioned games. I think we actually had it much easier back in the good old days of games, where you had keyboard input, mouse input, you draw to a pixel buffer, and that’s it.
The issue with games now, in addition to stuff around networking and multiplayer, is that the graphics pipeline is incredibly complex. And unless you end up emulating earlier versions of graphics pipelines, I think there’s little chance that this stuff rolls forward, whereas you could plausibly roll forward writing into a pixel buffer. You can kind of go the other way: you can emulate a pixel buffer with our advanced graphics APIs today, but you have no chance of going the other direction. And I think this generalizes to API complexity broadly, and it’s one of the reasons why it’s gotten so hard for software to persist.
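To make that asymmetry concrete, here is a sketch of what the old pixel-buffer “API surface” amounts to. This is an illustrative model, not any real console or engine’s interface; the point is how little there is to spec, which is why a modern backend can emulate it:

```python
class PixelBuffer:
    """A minimal spec of the old-school drawing target: a width x height
    grid of RGB triples in a flat byte array. That one sentence is nearly
    the entire contract an emulator has to honor."""

    def __init__(self, width: int, height: int):
        self.width, self.height = width, height
        self.pixels = bytearray(width * height * 3)  # RGB, row-major

    def put_pixel(self, x: int, y: int, rgb: tuple) -> None:
        """The whole drawing API: write one pixel."""
        i = (y * self.width + x) * 3
        self.pixels[i:i + 3] = bytes(rgb)

    def get_pixel(self, x: int, y: int) -> tuple:
        i = (y * self.width + x) * 3
        return tuple(self.pixels[i:i + 3])
```

A modern backend would simply copy `pixels` into a GPU texture each frame; the game code neither knows nor cares, which is exactly why this interface rolls forward while a sprawling modern graphics pipeline does not.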
00:15:54 - Speaker 2: Games also make me think of, we were speaking briefly about the Internet Archive, and yeah, you mentioned emulators. It turns out that games seem to be something that people do want to preserve as cultural history, but in a way, the best way to do that is not to try to get the original hardware, but in fact just to have these emulators, like MAME, for example. And it’s really impressive: you can play almost every arcade cabinet video game from the 80s, just in your browser, with these kind of JavaScript emulators that load up the ROMs, which are basically the way they distributed binaries in those days, and play them. Not quite as they were intended, because again, the hardware is different, certainly the displays are different, but we do preserve some of that legacy. Now another one from games that maybe fits your roll-forward idea: I really like what id Software does, which is they take their classics, Doom and Quake and so on, and after they’re 20 or 30 years old, something like that, they open source them. And that creates this interesting effect where maybe it’s less that people are playing the original, although I’m sure that happens, but it becomes fun to just port it to weird places, right? Like you’ve seen Quake ported to be 100% ASCII. I think I saw a Twitter thread recently where someone broke open one of these digital pregnancy tests, realized that it had a reasonably good processor and display in it, and decided to get Doom running on it. Oh yeah.
00:17:24 - Speaker 1: Yeah, it’s a whole culture, you know, people running Doom on their refrigerators. Exactly, yeah. So in a way, it has been turned over to remix culture and the internet.
00:17:28 - Speaker 2: And so it gets rolled forward and becomes something that maybe people will experience in none of its original form necessarily, but that becomes woven into the cultural tapestry, let’s say, through these fun little projects.
00:17:46 - Speaker 1: I think this also speaks to the importance of that API environment we were talking about either being specced, or at least spec-able, for the software to be rolled forward or actively preserved, either one.
Like we were saying, old-timey games, that’s a pretty spec-able environment: you need to be able to compile C, and there’s a relatively straightforward input and output system. And even if the games weren’t designed specifically with a really clean interface there, you could basically back it out after the fact and then do the porting.
Books and paintings, by the way, are examples of analog things that are very spec-able: they map down to text streams and bitmaps, respectively. But some stuff, like our modern software, is much more difficult to spec, and so it becomes harder to do that sort of rolling forward or active preservation.
00:18:34 - Speaker 2: Now, some have argued that perhaps part of the challenge here is the rapid pace of change of that underlying system of those APIs.
I think it’s interesting here to contrast the Apple versus Microsoft approach. Microsoft, with DOS and later Windows, really focused on backwards compatibility, to quite an impressive level.
I’m not exactly sure when the cutoffs are, but I think you could run DOS stuff natively well into the Windows lineage, and much later versions of Windows could run stuff from much earlier versions. But that’s part of what also created, I think, the relative instability and unreliability of that line of operating systems: when you need to support every conceivable thing that’s ever existed in the past and put it all together in the same box, things get messy pretty fast. And Apple, at least with the iOS world, has gone very much the other route.
Which is they basically are releasing new stuff, a new OS update once a year, and if app developers don’t keep up to date, you pretty quickly just fall out of the App Store.
So I experienced that myself: I wrote a little puzzle game for the iPhone 10 or 12 years ago, I think it was on like iOS 3.
And yeah, my collaborator did some work to try to rebuild it with the new stuff and keep it in the App Store, but I think by iOS 5 or 6 it had just fallen out.
And it’s a shame: there’s basically no way to play this game now. It was a fun side project we spent some good effort on, and it’s just sort of lost to history, because you have this very demanding operating system that requires developers to be doing active maintenance, or else things tend to go away.
00:20:16 - Speaker 1: Yeah, and while I don’t doubt that needing to support all versions forever contributed some instability in the Windows situation,
I think what really gets you is when you need to do that for things that are really end-user facing, visual things in particular, because the strategies that you typically use for providing stability are not viable in that world. So if you look at things like Unix-style server operating systems, or many programming languages, these are things that can maintain backwards compatibility for a long time, certainly decades. But the way they do that is they just never, ever break any APIs, and if you want to do something new, it’s either (a) too bad, or (b) call it thing two and just keep thing one around forever.
00:21:00 - Speaker 2: Python 2 versus Python 3 comes to mind.
00:21:03 - Speaker 1: Oh yeah, well that one, that’s a whole thing.
There’s an element of that, of not wanting to break Python 2, but it applies even to individual methods. You know, when we write our own programs, we think, oh, we need to change this method, so just edit the source code.
Well, not so much if you have the entire universe relying on thing one: you’ve just got to keep thing one around forever and write a new thing two that people can call into. Anyways, you can’t really do that with, like, a windowing system, for example, or a user interface paradigm, or how things look on the screen, right? You’ve just got to pick something and do it, and the old thing needs to go. And so I think it does become very hard to have that culture of very long-lived durability that you see in some of these more systemsy programs.
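The “thing one and thing two” discipline can be sketched like this; the function names and the lowercasing quirk are hypothetical, purely for illustration:

```python
def parse_config(text: str) -> dict:
    """Thing one: shipped years ago. Its quirk (keys are lowercased)
    is now load-bearing for existing callers, so this function must
    never change, only accumulate callers."""
    result = {}
    for line in text.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            result[key.strip().lower()] = value.strip()
    return result

def parse_config_v2(text: str) -> dict:
    """Thing two: the improved behavior (case-preserving keys) gets a
    new name. Old callers keep calling thing one forever."""
    result = {}
    for line in text.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            result[key.strip()] = value.strip()
    return result
```

The cost is obvious: the codebase accumulates old things forever. That trade is workable for a library API, but, as Mark says, there is no equivalent move for a windowing system’s look and feel.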
00:21:43 - Speaker 2: Yeah, maybe that points to the fundamental trade-off there, which is durability in many cases means stability, which means less change.
But change is how we get things that are new and better, and in the world of technology and computers and the internet, we’re still so near the very beginning.
There’s so much unexplored space, and it would be a shame if we just said, OK, well, we’ve been working on making these computers for a few decades now, pretty much how they are is probably it, it’s probably good for the next several hundred years, let’s just hold it there. That’s not what we want to do. We want to keep changing and improving. So yeah, there just is a trade-off between those two.
00:22:18 - Speaker 1: Yeah, I think there’s a very real tension, and I agree we want to keep exploring. I do think that in adopting this attitude of exploring and taking risks, we are going to find some things that in retrospect were not good choices.
The intuition here is, I think we’re projecting the benefits of these very dynamic systems onto the lifetimes that we have historically associated with durable goods. So we get the shiny new computer-enabled, software-enabled thing, and it works great the first year, and we think: things like this, before we had computers, worked for 20 or 30 years, therefore this thing is going to work for 20 or 30 years, and it’s going to be awesome.
And we’ve kind of projected that all forward, and I think in many cases that’s not gonna be true, and so we’re going to have to confront the downside of that trade-off.
00:23:04 - Speaker 2: And do you think the downside there is mainly economic? You buy the smart refrigerator, it seems great for the first year, then some cloud service goes offline, suddenly your refrigerator doesn’t work anymore, and you’ve got to throw it out and get a new one. Or do you think there’s something beyond the economic side?
00:23:18 - Speaker 1: Unfortunately, I think there are several potential very serious downsides.
And just to give you a couple: one that we’re already seeing is this issue of repairability, or even modifiability. Again, if you look at these durable goods categories, tractors are a very prominent example right now.
Traditionally people could service and maintain and change and modify and improve their own tractors, but as tractors increasingly become like iPhones, you know, these black-box computers, in many ways highly computerized hardware, potentially you can’t service them yourself, you can’t repair them yourself, you can’t modify them yourself, and that has all kinds of second-order effects. There’s also this whole thing about surveillance, basically, that I think we’re sleeping on a little bit, but we don’t need to go down that whole road today.
00:24:08 - Speaker 2: On the economic side, I think there’s also the incentive, which is that Apple and other phone manufacturers have done very, very well by always creating that shiny new thing that comes out every year or two, and you think, I want to upgrade. It’s even often built into cell phone contracts and things like that, that people get a new phone very frequently, and I think that’s paired with the operating system upgrades, or maybe the visual refreshes.
It’s almost more like fashion, right? The fashion industry found this very clever way.
To get around the fact that their IP isn’t really protectable, which is that they just make new fashion trends that totally change every couple of years. So then you need to go out and get a new wardrobe. It’s not that the clothes wore out necessarily; it’s that you want to be up to date, and you look behind the times when you’re strolling around in your bell bottoms and those have been out of fashion for 3 years or whatever it is.
Yeah. And so I think there’s an element of this kind of fashion desire to get the new thing that has driven a lot of revenue for these companies.
I don’t mean to imply that it’s some kind of cackling villain scheme; they’re using this opportunity to genuinely make their products better. You know, each version of the iPhone has a better camera, better battery life, a bigger, brighter screen. It is genuinely better. But it’s also that the companies are very incentivized toward a high pace of change that has you buying new products all the time, as opposed to something longer-term.
All right, so let’s say for the sake of argument that we’ve decided that longevity in our technology products is desirable, whether because we think it’s about preserving history, or because we think there’s inherent beauty in timelessness, or because we think there’s a good economic argument. What can we do as engineers and designers and product managers to make our software stand the test of time?
00:25:55 - Speaker 1: Yeah, I do have some techniques there. I do have to preface it by saying that I think there’s a large element of just draw the owl here. I don’t know if you’ve seen this meme before. But you got to sit down every week for 30 years and make sure the software doesn’t break. And that’s sort of a necessary but not sufficient condition, I think.
00:26:13 - Speaker 2: And the implication there is also just maintenance, right? And that maybe ties into some of our discussions about sort of software supported by subscriptions versus one-off payments, which is it really is an ongoing effort, an important ongoing maintenance effort rather than a one and done.
00:26:32 - Speaker 1: Now that said, I think there are things you can do to make this maintenance effort much more feasible.
One thing we’ve alluded to is this idea of narrow, well-defined APIs. Your software has to run in something, and the broader, the more complex, the more ill-specified that environment is, the harder it’s going to be to do the job of keeping the software running.
And this would include things like taking on few dependencies, minimizing weird binary dependencies that are hard to compile, limiting the APIs that you access, things like that.
Another thing that I’m very big on is really focusing on data.
Often, especially in this world of rolling forward over multiple generations, what you really want to keep is the data. This is probably more true for productivity-type stuff than games, for example, but data is very important, and if you want to roll forward data, you either need to adopt an existing open data format, or you need to do your own, which can be done, but it’s a lot of work.
I think that programs, or program lineages if you want to call it that, should embrace this idea of really focusing on the data, so that it can always be ejected and rolled forward into the next generation of software.
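One hedged sketch of what “focusing on the data” can mean in practice: a flat, self-describing file with an explicit version number, so the next generation of software can migrate old documents forward. The schema and field names here are made up for illustration:

```python
import json

CURRENT_VERSION = 2

def migrate(doc: dict) -> dict:
    """Upgrade a document from any older version to the current one,
    one version step at a time. Each released schema gets its own
    migration, and old files stay readable forever."""
    if doc.get("version", 1) == 1:
        # Hypothetical change: v1 stored one 'note' string,
        # v2 stores a list of notes.
        doc = {"version": 2, "notes": [doc.get("note", "")]}
    return doc

def load(raw: str) -> dict:
    return migrate(json.loads(raw))

def save(doc: dict) -> str:
    # Always stamp the version so a future reader knows what it has.
    return json.dumps({**doc, "version": CURRENT_VERSION})
```

Because the file is plain JSON, the data can also be “ejected” into a competitor’s importer even if this particular program dies.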
00:27:47 - Speaker 2: And one good example of that from my personal life is that I’ve had the same calendar, in some Ship of Theseus sense, since, I think, I started using a product called 30 Boxes, I don’t know, 15 or 20 years ago. That was my first kind of digital online calendar, and I’ve migrated from one to the next as new stuff comes along. And helpfully there are some standard formats here, I think it’s CalDAV or something like that. Also, I think these companies are often incentivized to make importers, where you can kind of slurp down your existing calendar from the other company’s API or whatever; without that it would be much harder to move forward.
I probably could do without it, but I do put things on my calendar that are either annual recurring things, like, you know, pay the taxes, but also things that I really want to make sure I remember that are pretty far out. And I like having that history in the past; I like to know when a particular thing happened, so I can look it up occasionally.
So having that my calendar is something that for me feels distinct from the particular calendar software I happen to be using at the moment.
00:28:51 - Speaker 1: Yeah. I also think there’s a social slash community element here, where if the thing that you’re building is used and demanded by a lot of people, there are a lot of forces that are going to be pulling in your direction for that thing to be preserved. It’s kind of like this data quality issue that I think we’ve talked about before on the podcast, where the quality of data tends to be determined by how often and how carefully it’s read, not how often and how carefully it’s written. And I think similarly with software: if you have many people constantly trying to run the program in different environments, that will encourage it to persist.
And that, by the way, goes down recursively to your dependencies. One of my favorite examples is SQLite. If you use SQLite to store even your semi-custom data format, writing it into a SQLite database, you’re absolutely going to be able to read that in 30 years, even if you do nothing yourself. Whereas if you roll your own format on disk, there’s a very good chance that it will at some point become totally lost.
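To make that concrete, here’s a minimal sketch in Python of storing a semi-custom format inside SQLite rather than a bespoke on-disk format. The table and field names are hypothetical, purely for illustration, not anything from Muse or Heroku:

```python
import sqlite3

# Hypothetical app data stored in a SQLite file instead of a
# hand-rolled binary format. Any SQLite build, past or future,
# can open this file.
conn = sqlite3.connect("notes.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS notes (
        id INTEGER PRIMARY KEY,
        created_at TEXT NOT NULL,
        body TEXT NOT NULL
    )
""")
conn.execute(
    "INSERT INTO notes (created_at, body) VALUES (?, ?)",
    ("2024-01-01T00:00:00Z", "Remember to pay the taxes"),
)
conn.commit()

# Reading it back requires nothing but SQLite itself.
rows = conn.execute("SELECT created_at, body FROM notes").fetchall()
```

Even if the app that wrote this file disappears, the file stays legible with ubiquitous off-the-shelf tooling, which is the whole point of leaning on a widely read dependency.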
00:29:49 - Speaker 2: Yeah, SQLite I think is a favorite example for us in many cases of sort of well-made, simple software that does one thing and does it well. They have a great page, though, about their long-term support. I’ll link that in the show notes, where they list off some techniques like testing, making their database files cross-platform, disaster planning, and not embracing hot new technologies with too much gusto.
But actually, I think what that points to, which to me feels like maybe the most fundamental thing, is to just think about longevity in the first place, to actually care about it.
That’s part of the reason I thought this topic would be an interesting one.
I think, again, the tech industry skews young. I also had the same kind of gap in my thinking as a younger person: I just didn’t have the longer experience to be able to see what it’s like for software I’ve created to be in production for 3 years, 5 years, 10 years and longer, and what happens over time.
So part of what I like about the SQLite long-term support page is that they basically lead with a statement that their goal is to support this until the year 2050. Will they achieve that? Hard to know, but just by making that statement, calling it out as a value for themselves, and thinking about it actively, and maybe this comes back to the Long Now clock idea, they may or may not succeed, but certainly one good way to not succeed is to not think about it in the first place. And so this makes it a first-class concern.
00:31:20 - Speaker 1: Yeah, I think that’s a good point. You’ve got to think about longevity, for sure, and for me that circles back to this idea of data. When people design and implement programs, thinking again of, for example, productivity-type software, we tend to focus on the interface and the behavior, because that’s what users are demanding; that’s what the short-term success of our business depends on.
But the reality is that the interface, including its implementation, is almost certainly going to fully churn at least once, potentially several times, over the life of the software. What’s not going away is the data.
And so one thing that I think is helpful for the longevity of software is really focusing on the underlying data model, not only in the sense of how it’s stored, which is mostly what we’ve been talking about on the podcast so far in terms of text files or SQLite, but also what the data model itself is: if you were to draw out the SQL tables or the equivalent, what are the boxes and what are the labels on them? It sounds really basic, but it’s a step that I feel like people often skip over, and then it leads to all kinds of longevity headaches down the road.
00:32:21 - Speaker 2: Futureproofing, I think is a word we sometimes use in general for talking about helping things be longer lived, but certainly I think that’s most notable in the data model, precisely for the reason you said, it’s just much less changeable and harder to change, and heavier weight to change.
00:32:37 - Speaker 1: Folks, you want a practical tip? Here it is. Put an ID on everything, put a version on everything, and only write new data; don’t delete old data.
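A minimal sketch of that tip in Python, with hypothetical names throughout: every record carries a stable ID and a schema version, and edits append a new revision rather than deleting anything:

```python
import uuid

SCHEMA_VERSION = 1  # bump this when the record format changes

def make_record(body):
    return {
        "id": str(uuid.uuid4()),    # stable identity across revisions
        "version": SCHEMA_VERSION,  # lets future readers branch on format
        "body": body,
    }

log = []  # append-only store: old data is never destroyed

def write(record):
    log.append(record)

def latest(record_id):
    # The newest appended revision with a given ID wins.
    for record in reversed(log):
        if record["id"] == record_id:
            return record
    return None

# "Editing" a record appends a new revision; the draft survives in the log.
note = make_record("draft")
write(note)
write({**note, "body": "final"})
```

Because every record names its own version, a future reader can dispatch on that field instead of guessing, and because nothing is deleted, a botched migration can always be retried from the intact history.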
00:32:45 - Speaker 2: Yeah. That is a deceptively simple list, but I think it probably reflects some pretty deep, I’m guessing painful experiences in your career. Got any war stories for us?
00:32:58 - Speaker 1: Well, the version one, I remember well, cause you’re the one who taught me.
We were working on the Heroku runtime, where the runtime builds applications into binaries, and they are deployed and run.
And I was working on what at the time was the unlabeled version one of the system. And you pointed out: Mark, I’m sure at some point we’re going to have a different version of how we package up these files. So I’m going to give you a tip, which is that you should write version 1 next to all of these, and then when we go to do version 2, it’ll be a lot easier. And sure enough, that made a huge difference, and we did eventually get to version 2, and then 3, 4, 5, 6, I think, after that.
00:33:37 - Speaker 2: And that was something at that point, you know, you were earlier in your career and I had had a few of those bumps and difficult data migrations and had realized that this one simple trick can save you some headaches.
Now, of course, the future-proofing should be balanced against what’s usually called YAGNI, you aren’t gonna need it, which is, I guess, over-engineering for an unknown future.
So you’re not trying to create a system that has totally flexible properties and tries to take into account every feature you might ever possibly want to add; I think that tends to create abstractions that just get in your way. It’s more about these small, simple tricks that don’t get in the way, but create opportunities to more easily make the changes in the future that you can’t guess right now.
00:34:23 - Speaker 1: Right. On the data modeling side, the advice that I always give is: you’re just trying to accurately model the world as it really is, because the world as it really is isn’t going to change. It might expand as you add more features, right? But if you have incorrectly portrayed the world in your data model, and then you also go to model more of it, you know, now you have two problems, plus their interaction, whereas if you had correctly modeled the part of the world that you were working with, it becomes relatively straightforward to extend the model to the new features.
00:34:53 - Speaker 2: And to go just a little bit technical for a minute, the version thing is also making me think of, I think it’s the TIFF image file format I’m thinking of, but this approach has been used in a lot of different places, and I feel like you’re doing a version of this in the local-first sync infrastructure we’re working on right now, which is to have chunks that have kind of a type in front of them. And so you can add new chunks to a stream of data or to a file that older versions of the software can still load: they recognize that they don’t know this chunk, and they can kind of skip past it and just try to interpret the rest of it the best that they can. You know, maybe this is something that web browsers do pretty well: you load a new page in an older web browser, and if there are some unsupported features, it doesn’t completely break. It just does its best to render it.
Of course, the older the browser is and the more new features you’re using, the more and more sort of ugly and unusable the page is going to become, but it makes its best effort. It’s not that it just gives up the moment it sees something it doesn’t understand.
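A sketch of that chunk-skipping idea in Python; the tag names and the exact byte layout here are invented for illustration, not the real TIFF or PNG formats. Each chunk carries a type tag and a length, so an older reader can step over types it doesn’t recognize instead of failing:

```python
import struct

def write_chunk(buf, chunk_type, payload):
    # Each chunk: 4-byte ASCII type tag, 4-byte big-endian length, payload.
    buf += chunk_type.encode("ascii")
    buf += struct.pack(">I", len(payload))
    buf += payload
    return buf

def read_chunks(data, known_types):
    # An older reader keeps what it understands and skips the rest,
    # instead of giving up at the first unfamiliar chunk.
    out = []
    offset = 0
    while offset < len(data):
        chunk_type = data[offset:offset + 4].decode("ascii")
        (length,) = struct.unpack(">I", data[offset + 4:offset + 8])
        payload = data[offset + 8:offset + 8 + length]
        if chunk_type in known_types:
            out.append((chunk_type, payload))
        offset += 8 + length  # the length field lets us skip unknown chunks

    return out

# A "newer" file containing a chunk type the old reader has never seen:
data = write_chunk(bytearray(), "TEXT", b"hello")
data = write_chunk(data, "NEWF", b"\x01\x02")  # hypothetical future feature
parsed = read_chunks(bytes(data), known_types={"TEXT"})
```

The key design choice is that every chunk declares its own length up front, so a reader never needs to understand a chunk’s contents in order to find the next one.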
00:35:54 - Speaker 1: Yeah, and this gets into the peculiar challenges of longevity with distributed system software. We’ve been talking about longevity in terms of supporting the past; with distributed systems, you’ve got to support the future, because the future arrives unevenly. So that’s the sort of challenge you deal with when you’re working on distributed systems like the data synchronization layer for Muse.
00:36:14 - Speaker 2: Yeah. As usual, Ink & Switch has some good research on the subject, with the data lenses in the Cambria project, and one of the things there is about sort of data migrations, but it’s assuming you need to translate both ways.
You need to translate, I guess you say back in time to older versions and forward in time to newer versions, but in fact it’s actually more complex than that because there may be just a branching tree of systems that all understand the data in slightly different ways.
And the other small thing, it’s sort of obvious in some ways, but I think there’s kind of a Lindy effect to file formats, the Lindy effect here, of course, being the idea that something that’s been around a long time will probably be around a long time. So I’m always a big fan of those flat file formats, PNG, JPEG, PDF, plain text, because they’ve proven themselves to be durable over the long term. They don’t always do everything you need, but where you can, it’s really nice to just go down to that simple common format that’s understood by many applications, now and in the past, and potentially in the future. Bringing all these ideas together in a very practical and relevant area for us: how do we think about this for Muse? The product, the data aspect that you mentioned, but also the team and the company?
00:37:37 - Speaker 1: Well, I think there are layers. The first layer, in the sense of what’s likely to be longest-lived, is that basically from day one we’ve supported flat-file export. So you can export your board to a PDF or your images to a PNG. You can export your whole corpus to a zip of a bunch of flat files of these types, and that way you know you’ll always at least have your data in a format that everyone knows how to store essentially forever. And yes, it’s lower fidelity. Obviously you can’t easily edit it like you can in Muse, but that’s the tradeoff you make there: you have access to this very durable format for all that data.
00:38:17 - Speaker 2: And I often find, for archival purposes, I have put a lot of effort in the past into preserving things I’ve created, whether it’s an essay or, back when I used to make music, trying to preserve all those kinds of source files; video is a similar thing. And usually I find the kind of flattened, read-only artifact is really what I want in the long term, because I do go back to reference those things, or look at them for inspiration, or just to take a walk down memory lane, but I very rarely want to edit, even if I could. And so in that sense, it’s actually superior to flatten out to an image or a video or something.
00:38:55 - Speaker 1: Yeah, it’s a good point. In many ways, it’s a benefit, not a downside. I think especially when you have this habit of building up an archive that’s all in this handful of formats, everything is basically text, PNG, PDF, maybe video or MP3, you can browse it all together, and you can do it very quickly, basically in Finder or the equivalent. Whereas you can imagine if, for every picture you wanted to check out, you had to open up Adobe Photoshop or whatever and wait 2 minutes. So it’s nice just to have that very lightweight archival copy.
00:39:25 - Speaker 2: Now, for the app’s own data, we did toy early in the company with making the native format that Muse saves to be something more like a Dropbox folder, for example, and it just turns out that we couldn’t achieve this.
The things we wanted, whether it was being on iPad or things we want to do around sync, just weren’t compatible with that, sadly. Right.
But what we do try to do is really make the format that the app stores its data in on your device, on iPad or now on the Mac, a format that is something we will support over the long term. Obviously we’ve only been at this for 2.5 years, 3 years if you count the research time, but already we’ve had people who have been around from, you know, the early days of the beta, and we’ve had to carry their data forward, including our own personal corpuses; you know, I’ve got 20-some-odd gigs of stuff in Muse from my now years of doing all my thinking and strategizing in it.
And the engineering team spends a lot of effort on carrying data forward as we build major new things, whether it’s something like the flexible canvas, or, yeah, at one point we converted the ink from raster to vector, and we have another huge data migration here with the introduction of sync that the team’s working on right now. It’s a high-stakes but really important operation to bring that across.
So ideally you could have been, and in fact many people have been, using Muse that whole time, and you can go back and access your very earliest boards, and everything is pretty much just as you left it; you can still edit it. It’s all still right there.
00:40:56 - Speaker 1: Yeah. Now that data layer and rolling forward versions does become much trickier in a world where you have multiple writers and especially if those writers are potentially not programs under our control.
Certainly right now, and with Muse in the near future, the deal is going to be that you have Muse, the app, writing your data, like a traditional cloud app would.
But you can also imagine a world where, and we have in fact imagined a world where we have things like bots and end user scripting, and plug-ins and extensions.
And then everyone’s participating in a data model, so the stakes become much higher, and you can’t just roll forward the world when you deploy a new version.
So that’s why as we are working on sync right now with an eye towards this world of scripting and extensions and so forth, we’re thinking very carefully about the data model because you basically need to support that forever or go through a very troublesome migration process. So we’re trying to lay that foundation now even though it’ll be some time before all those end results come to fruition.
00:42:02 - Speaker 2: Now that’s the data, so how about the software, the app you download or the product more broadly over time? Is it important for that to be long-lived and how do we think about accomplishing that?
00:42:14 - Speaker 1: Well, I think it’s harder for that to be as long-lived as the data, because that’s less in our control. You know, you’re participating in the Apple ecosystem, and as we’ve talked about, this ecosystem has certain characteristics that lend themselves to a medium-term life in practice; you know, apps, or individual builds of apps, don’t last a decade, even if the app itself does.
So we do what we can there: we have minimal external code dependencies, and we have no critical dependencies on third-party network services, except the ones that Apple requires for any app to be able to, you know, be downloaded and paid for and so forth. But we are participating in that model, so you’ve got to keep the app sort of up to date on that time scale of quarters or years as the underlying runtime environment on iOS and Mac changes.
00:43:01 - Speaker 2: Yeah, exactly. I would think of the software, in any particular build, as being something that, at least I hope, would continue to work, you know, in some future where, I don’t know, someone’s running this in an emulator, for example, as long as the operating system APIs are kind of what’s expected. And again, we don’t tend to use heavy amounts of those, we tend to keep it pretty light, but assuming, you know, we can read pencil data, for example, through the API that’s expected, that should still work.
And a big part of that is the network side. So many apps on your phone, on your tablet, and increasingly on the computer, they expect some kind of service to be online and I tend to discover that because I really like working in offline environments. I like taking a long train ride or working on a plane, or just going to one of our team summits in some weird rural location with weak internet and actually that’s good.
To be more kind of focused on what’s in the moment, but then you really quickly get exposed to these weird timeouts and network errors and can’t-contact-server messages or whatever, and I’m thinking, why is this software even contacting the network? Those hooks are just so pervasive now that we hardly think about it, but we’re working for Muse to make it very much the case that the network is optional.
You get some nice benefits, but you don’t need it once you’re logged in.
And then, you know, that’s any individual build of the software, but on the product side, I would say it comes down more to a team and a viable economic model, things that we’ve talked about extensively on the podcast here. If we have a product vision we think is good, the members of the team are invested in it long term, and we’ve got a sustainable way to fund it and work on it over the long term, it even ties in with something like just running the team at a sustainable pace, right? Avoiding this drive to burn bright but burn out quickly that maybe does tend to come with the kind of hypergrowth world of startups. And instead be thinking a little bit more about, yeah, we want to be doing this for a while. We know great products take a long time to build, and that’s not just total person-hours to build; in fact, it’s wall-clock time for people to use it in the real world, give back feedback, and expand and improve upon it. So running the team in such a way, and having a vision that we’re all committed to in the longer term, again thinking in terms of human-life-scale spans, not months or just years, but maybe slightly longer than that. I think that’s how we get a product you can really depend on over the longer term.
00:45:31 - Speaker 1: Yeah, another way you could think about this is lining up the reality of the software environment, what you’re communicating to users with how the team and the company is run.
So it’s not even necessarily that you want all the stuff to be exactly the same forever. It’s more like you want these promises to line up. So, for example, as we’re developing new features, you’re really annealing them in. You’re not saying, as soon as you come up with an idea, oh, we’re going to support this forever. Initially you do a beta and you say, we’re going to support this for the beta; it’s going to be there for a few months, and longer if it works well, and not if it doesn’t. But then, when it kind of graduates to the next layer, it becomes something that you support longer term, and you have corresponding infrastructure around it in what the company does.
00:46:12 - Speaker 2: And from the perspective of customers, maybe some of that comes back to trusting the team and knowing the team, right? That’s why I think it’s important to communicate our values, our philosophies, and our motivations through writing, through podcasting, where someone can dip into that world a little bit, get to know us, and have some sense of what’s driving us, where we’re going, and what we value. And if you share some of those values and you see a similar vision, then you can maybe trust that the software will change and evolve over time, but it will change and evolve in directions that on net will hopefully be an improvement for you, or at least not a downside or a disruption. Whereas if, yeah, you listen to us talk for a few hours in this podcast and you think, hmm, these folks are, you know, thinking about things in a different way than I think about things, then maybe it’ll happen that the product does something useful for you right in this moment, but maybe over the longer term it won’t be evolving in directions that are as useful to you. So that’s the reason, in my own life, I try to use software as much as I can from teams I know, and it helps that, you know, I’m in the industry and reasonably well connected now thanks to Twitter and other sources. And of course, it’s really fun to use your friends’ software, you know, using the Arc browser, for example, or some of the Not Boring apps from our friend Andy Works, as a couple of examples. But I also like that because, yeah, it feels different when I know who’s behind it and what they’re trying to achieve.
It feels good both to support their work, but also the sense that, not quite like we’re a team, but we’re working together to reach a similar future, where they’re working hard to make this software that can serve my purposes, and I’m, you know, giving them feedback, using the software, and paying for it. That’s my contribution, and together, hopefully, we work towards a better future. Well, let’s wrap it there. Thanks everyone for listening. If you have feedback, write us on Twitter @museapphq, or by email, hello@museapp.com. And of course, we definitely appreciate it if you leave a review on Apple Podcasts. And Mark here is hoping that, in addition to our business venture being long-lived, this podcast will indeed be long-lived too.
00:48:30 - Speaker 1: Right on, Adam.