Thinking and creativity require privacy. In this data-intensive age, what does “privacy” mean for a tool for thought? Mark and Adam discuss product decisions in the context of digital privacy for the tech industry and society overall.
00:00:00 - Speaker 1: To us, it’s very important that we design this all holistically. There’s a lot of research, for example, on cryptography schemes that assume key management is solved, or on collaboration product designs that assume the server can just read all the data. And in order for this to work with Muse, all of the product design, the collaboration technology, the key management, the cryptography, and the mental model for how people think about documents need to line up.
00:00:31 - Speaker 2: Hello and welcome to Meta Muse. Muse is a tool for thought on iPad. But this podcast isn’t about Muse the product. It’s about Muse the company and the small team behind it. I’m Adam Wiggins. I’m here with my colleague Mark McGranaghan. Hey, Adam, and Mark, I like to listen to podcasts in the morning, but I understand that you have a slightly more unusual and in-depth source of audio.
00:00:55 - Speaker 1: Yeah, this morning I was actually listening to the real-time oral arguments in the US Supreme Court on their very important ACA slash Obamacare case.
This is obviously a very big case for the US and for many of us personally, and so I was keen to listen in and see what the judges were thinking.
And this is notable because I think until recently you couldn’t actually listen to these broadcasts in real time. There wasn’t the C-SPAN equivalent for the US Supreme Court until, I think, COVID hit and they started doing everything via audio, you know, Zoom or equivalent. And I think at one point, actually, they didn’t release the audio to the public until quite a long time after the arguments had concluded. I think they released it every term, which is six months or so, and then more recently they changed it to be every week, and now they release it in real time.
And of course, that’s interesting as an interested citizen, but also it kind of connects to our topic today of privacy, because one of the ideas that we’ll talk about is how visible or non-visible your work is while it’s in process.
00:01:51 - Speaker 2: Yeah, privacy is a huge topic and something on our minds right now because we’re making some product decisions for the sharing and collaboration features that will be forthcoming for Muse.
So in the process of working through this as a team, deciding where to make trade-offs necessarily results in a kind of zooming out. You can’t help but look at the larger context: what do we want to do for our product, what matters for our customers, what’s technically feasible, what do we value as a team.
Then you zoom out a little bit from there to OK, what’s going on in the technology industry.
Obviously this is a very, very big topic for the tech industry right now, and then you can zoom out even from there and talk about the society-wide impacts and you know, what does privacy even mean? What can we expect, what’s important or not important in terms of our lives as citizens, but also just as technology changes, we may need to adapt to what we can reasonably expect in terms of privacy.
Yeah, as you know, I always like to start at the beginning with the definition or something of that nature. So what does the word privacy bring to mind for you, Mark?
00:02:58 - Speaker 1: Hm. Well, I don’t have a super nice prepackaged definition, but what’s coming to mind is a sense of agency over who does and doesn’t have access to your work. And you might exercise that agency by saying only I can read my personal journal, for example, and so that’s private to me. Or it could also mean that we are working on a project together and so I want you and me, Adam, to be able to see some work product, but no one else. Or it could be that you want to share it very broadly, and that’s your choice as well. So some sense of controlling who does and doesn’t access your work.
00:03:31 - Speaker 2: Yeah, when I went looking for kind of what is actually the definition here versus my own vague sense of what counts or doesn’t count as privacy, which probably, by the way, has changed over the years, but the canonical one that’s often linked back to is a piece in the Harvard Law Review in 1890 called The Right to Privacy.
And they point out that some of these American values of right to life, right to pursue happiness, right to secure property originally maybe meant something more practical (property was your cattle, for example), but fast forward to when they were writing, now over 100 years ago, and they say, well, wait a minute, we’ve started to recognize more of a spiritual nature of man’s feelings, his intellect, and so maybe these rights we talk about broaden a little, and the term property may include intangible things like your notes, like your thoughts, for example. They actually use the phrase at one point, the right to be let alone; maybe in modern phrasing we would say the right to be left alone. The idea is, maybe the reason you don’t want everybody in the world to have your phone number is that you don’t necessarily want the equivalent of spam calls coming in. You want to give it to people you have a trusted relationship with, who you believe are going to call you because they’re someone you wish to speak to, you have an existing relationship, something like that. So that lens for privacy I found thought-provoking.
00:04:58 - Speaker 1: Yeah, that’s a very interesting definition because it gets at a problem that I’ve seen with a lot of the privacy discussion, which is that it’s very tempting to try to imagine or infer or argue for some very natural and fundamental right to privacy.
Obviously, if you like privacy, you’ll tend to do that, and I often see this in the form of privacy is a human right or privacy is a natural right. And I certainly think you can make arguments to that effect, but it kind of dodges, to my mind, the real fundamental question here of what are the trade-offs, what are the benefits, what are the costs, and what work are we willing to do as a society to bring about those benefits if we want them? Because unlike something like perhaps the right to life, which you can grant just by not ending other people’s lives, privacy is much more complex in that you need to build cryptography, you need different business models, right? It’s much harder to grapple with.
00:05:49 - Speaker 2: Yeah, another way to think about it, at least for these intangibles, for information privacy, which is chiefly what we’re concerned with in our business and in the technology industry generally, is that communication is often the crux. You know, Muse as it stands today keeps all your information on the device, and putting aside some threat vectors like someone stealing your device, for the most part that means there isn’t so much to worry about.
It’s once you go to share it with another person or share it digitally over the internet that things get trickier. And I liked your courtroom example, because another one I had kind of jotted down was the social contract, or the common legal protection, that’s given to what they usually call client confidentiality or patient confidentiality, so attorneys or doctors or therapists.
The idea is that you are going to have a private communication with them, and you can expect, both just from a manners perspective but also in some cases as a legal protection, that what you say there stays there. And that gets a little bit to what’s relevant to our business, which is that one of those communications is speaking to your therapist, for example, but another is sketching in your notebook.
If I need to think about, OK, everyone in the world can see this either now or in the future, maybe that is going to consume some part of my brain figuring out how comfortable I feel with that, whether I want to alter what I’m saying or writing a little bit. And there’s something freeing where I say I’m in this communication with one other person, or with my notebook only, so essentially myself or my future self, and I can reasonably expect that no one else is going to hear this or be privy to it. That frees me in a lot of ways to be creative or to really open myself up. And it seems to be a common human experience that it’s easier to truly open yourself up when you know exactly who’s on the receiving end of that.
00:07:40 - Speaker 1: Yeah, totally. If I think about the benefits of privacy, to me, that’s one of the three big legs is this idea that when you have control over who has access to your thoughts, your work, and your data, especially when that’s quite limited, it encourages and allows creativity.
And that might be creativity in terms of your personal journal, right? You’re much more inclined to write something to share it with yourself, if you will, if you know no one else is gonna see it.
But it could also mean you’re doing some private brainstorming, and that would be very different if it was just you and me versus if we were in a stadium with 50,000 people watching, right? That’s just kind of how humans are.
I think that’s a big piece.
And also, it connects to what I consider to be the second big leg of privacy benefits, which is it allows you to manage communications.
So it might be the case that you eventually want someone to know something, but while you’re working on it and you’re preparing the communication, you don’t want them to be processing all of the raw stuff. It’s something that I encountered a lot as an engineering manager. You know, if you’re working on an organizational change or something, you don’t want people to be reading all of the raw discussions and debates about how it’s going to happen. You want a clear and coherent and well-executed communication plan. Again, you need privacy for that.
And just to mention what is, in my mind, the third leg, and we can perhaps talk about it later: protection from governments and other large concentrated powers. And for me, that’s especially important with electronic data and communications.
To my mind, this stuff is so sharp because it’s so easy for it to get replicated, for it to get distributed, for it to be intercepted, for it to be eavesdropped on.
In a way, that’s just not the case with something like paper. You know, with paper, it’s actually quite hard to make a billion copies. It’s also very easy to reason about where the paper is going, because it’s in this physical world that we have a lot of familiarity with.
We don’t have the same intuitions or ability to reason about electronic data in terms of how long it could be persisted, how many people can see it, and all the ways that it can be processed. So I think overall, it makes this problem of concentrated data in the hands of large powers very sharp.
00:09:38 - Speaker 2: Yeah, and in the analog world, where you’re just thinking about who might be overhearing me in my office, or as I walk down the street having a private conversation with a friend, you can kind of scope that in time and impact. But when I put my photos, my notes, my whatever it is into electronic databases, it can be replicated potentially forever. I think of something like LiveJournal, which was this journaling slash blogging site 20 years back that was very popular. A lot of people, especially young people, poured their very private thoughts and things about their lives into it, under the reasonable expectation that it was only going to go to the few friends they’d scoped it to.
And in the meantime, it’s been sold several times to several different acquirers. All that stuff is still in there, what someone wrote 20 years ago, in a database that’s now in the hands of someone very different from who it was originally in the hands of. I think we just don’t quite yet have the capacity to actually reason about that.
00:10:33 - Speaker 1: Totally, and just to expand on this a little bit, this points to two other ways that electronic data privacy or non-privacy can be very sharp.
One is this time element where the data can persist and indeed accumulate and move around for a very long time. So we might say, oh, you know, with our current privacy practices, nothing that disastrous has happened. Well, we actually don’t know because the half-life of this data is probably 50 or 100 years. So we’ll know in, I don’t know, 200 years if this is actually a bad idea, but we can’t really say that it wasn’t until all the data has fully dissolved into the ocean or something. The other huge thing here is how humans are or aren’t part of the process.
So again, with electronic data, for collecting it, for storing it, you just need to convince basically a few people that it’s a good idea. So if the NSA wants to read everyone’s emails, they convince a few people at Google and Yahoo, and that’s basically it, and then they get billions and billions of emails. Whereas if you wanted to eavesdrop on someone in the physical world, you’ve got to pay someone, they’ve got to go out, you know, to the rooftop, and that’s expensive.
And if you want a ton of them, you have to actually convince all these people to do this every day, and you might actually have trouble convincing thousands and thousands of people to do this. So there’s this kind of human rate-limiter friction that you get if you want to do wide-scale data collection in the physical world, but you don’t have it in a digital world. This is another reason why I think the digital stuff is so sharp and potentially dangerous.
00:11:53 - Speaker 2: Feels like there’s a parallel there to spam, postal mail versus spam email, which is people sending you unsolicited advertisements in postal mail has always been a thing, it’s still a thing.
But it’s limited by sort of physics and the cost of actually getting that brochure or whatever into your mailbox, and digital is just so cheap and so fast and so easy to do in this kind of anonymous, unaccountable way that it goes from being maybe an advertisement in your postal mailbox, a minor annoyance, to being something that potentially overwhelms the systems of the internet; at one point it threatened to make email a completely unusable service. And I think basically every kind of communication technology that comes online and gets substantial traction has to deal with that same kind of spam and abuse problem for that same reason. It’s just so cheap to try to grab people’s attention through these automated and unaccountable means.
00:12:51 - Speaker 1: Yeah, spam is also notable because there was a very powerful technological problem, basically people sending out zillions of emails for essentially free. There’s also a very powerful technical solution, probability-based spam filtering. And so there was this battle for a long time. I think we could say that eventually the spam filtering won because they have access to all the leverage you get with electronic data that the attackers have. But yeah, that wasn’t a preordained conclusion, and I don’t necessarily think we should count on that being the case with privacy.
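The probability-based filtering mentioned here is, in spirit, a naive Bayes classifier over word frequencies. A minimal sketch in Python, with toy made-up training data rather than a real corpus, and ignoring class priors for simplicity, might look like this:

```python
import math
from collections import Counter

# Toy labeled messages (hypothetical), standing in for a real training corpus.
spam = ["buy cheap pills now", "cheap pills cheap offer"]
ham = ["lunch tomorrow?", "draft of the offer letter attached"]

def word_counts(messages):
    counts = Counter()
    for m in messages:
        counts.update(m.lower().split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts, total):
    # Laplace smoothing so unseen words don't zero out the probability.
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab)))
        for w in message.lower().split()
    )

def is_spam(message):
    spam_score = log_likelihood(message, spam_counts, sum(spam_counts.values()))
    ham_score = log_likelihood(message, ham_counts, sum(ham_counts.values()))
    return spam_score > ham_score

print(is_spam("cheap pills"))   # True: these words are frequent in the spam corpus
print(is_spam("lunch offer"))   # False: looks more like the ham corpus
```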
00:13:19 - Speaker 2: And looking at what’s going on in the technology industry, we have something like GDPR, which is a pretty big deal in Europe. That’s been in force for a few years, and I think they’re looking at doing more to strengthen it, sort of trying to give people more control over their personal data, more insight into what’s being captured and when it’s deleted, that sort of thing.
Another notable trend recently is privacy-focused products that, if not broken into the mainstream necessarily, have been pretty successful. There’s something like the Brave web browser that has built-in ad blocking and essentially makes a privacy-oriented pitch over using something like Chrome. And they just posted recently that they had 20 million users, which is a pretty good number.
DuckDuckGo is a search engine that in many ways you could say is worse than Google in terms of results, but it’s privacy-protecting, and that one selling point seems to be enough for quite a lot of people to use it. And there’s a long list of others of these: ProtonMail for email, Fathom, which we use for analytics on our website.
There’s this whole class of messaging apps like Signal and Telegram that have really got a lot of traction. And it’s interesting to me, almost all of these that I just named, they’re basically worse in every way than whatever they’re competing with.
Not always; I like Fathom better than Google Analytics, and I think Brave is nicely made, mostly because it’s just kind of a fork of Chrome. But in many cases they are about the same or worse. You know, using Telegram to communicate with someone versus WhatsApp or SMS, for example, is basically the same kind of experience, but that one benefit of some kind of privacy protection, or some kind of assurance that privacy is something the people who create the product care about, is enough to get a lot of people to use it.
I’m curious how you think about that. Are you motivated to use products that are privacy protecting versus trading that off against other things? Do you think that those kinds of products will always essentially just be a niche for the few people that care enough about it, or do you think there’s a future where that kind of focus would be something more mainstream?
00:15:26 - Speaker 1: Yeah, well, that’s not an easy question, Adam. So I’d say first that I definitely use some of these privacy-focused products. Two examples that I would give: one is Safari, which I use because it’s faster and I think it has better privacy capabilities than Chrome. Another is DuckDuckGo, which for a long time I’ve used almost exclusively instead of Google. I find that it works great and has a much better privacy story. So for me, on the margin, I’m definitely inclined to look at the privacy angle, and especially if things are comparable or if for some reason I care a lot about the privacy of that data, I will make the move.
And I’m glad to see that we have these offerings and people can make choices like that.
To my mind, the bigger deal though is the overall dynamics of the industry and what a lot of users end up choosing. And yes, it’s great if we have options for particularly privacy conscious or privacy sensitive people. And again, I’m very thankful for that.
But if you think about this third reason that I mentioned a while ago, data concentration in large powerful entities, that’s really determined by what most people do, right? So for that reason, I’m very interested in the overall dynamics of the industry and our governments and how those things interact.
And there I would say that it’s perhaps a more discouraging situation.
I think there are things we can do and there’s still a path out of this, but I think it’d be very easy to imagine a world where governments just have access to all our data. Which, by the way: you did a good survey there of some of the current privacy-focused products, but a huge deal is access to TLS. It’s something we take for granted that you can go to an HTTPS website, which basically all websites are now, and at least that data won’t be accessible to people online unless there’s some exceptional circumstance. We take that for granted, but in my opinion, that was not a given. At one point, it’s my understanding that this public key cryptography was not a generally accessible technology; it was somewhat controlled by the government. And with Netscape and commerce moving to the internet, and again I’m not sure exactly how the story played out, they eventually convinced the government to allow its use and to allow exporting it outside the country, and so on. But I could absolutely imagine a world where that was not the case. If you imagine, for example, that 9/11 had happened before that stuff got out widely in web browsers, the government might have just said, you know what, we can’t have people communicating securely at all, this is now banned. And then that would have been a very path-dependent thing where we might have stayed in that situation.
00:17:45 - Speaker 2: Yeah, I’m old enough that I lived through that process. I was in the computer world and in the industry for some of that.
The Clipper chip was actually a US government initiative at the time to create cryptography that had a built-in backdoor for government agencies to open. And there were also things like, yeah, up until the 90s, and I’ll get in trouble here recounting this all from memory 20 years on, someone’s going to fact-check me, hopefully you will, but at least the way I remember it, in the early 90s it was the case that cryptography technology was very restricted, so I think only up to 40-bit keys were allowed, but of course that’s low enough that you could reasonably, I think even back then, brute-force crack them, so it wouldn’t really be that viable for something like online commerce.
And then I think the, I don’t know what you want to call it, technology and cryptography folks, enthusiasts slash experts on the then-growing networked world, were really arguing for why this technology could be really enabling. And there’s good reason for governments, the US government in particular, to be cautious and treat it as a weapon, because in many ways you can point to the Allies winning World War II, and that was basically done through science. One piece of that was maybe the atom bomb and the Manhattan Project, but less dramatically there was the cryptography story, right? Alan Turing and the Enigma machine. And the fact that one side could read the messages and the other side couldn’t, that cryptographic asymmetry was essentially what allowed the Allied communications to stay secret. So thinking of that as effectively having won that war, the greatest, certainly most destructive conflict in human history, and then being really cautious about who has access to that, seemed quite reasonable even, I don’t know, 40 or 50 years on from said war. And yet, the things it potentially enabled were so great.
And of course now we live in that time where as you said, SSL and that little lock icon that you see in your web browser makes it possible to have this huge, I mean, what would the world be like without online commerce in a pandemic age, right? Just to name one thing. So, I’m glad we won that, or let’s say the people on the side of more access to encryption and privacy protecting technologies won that fight, but as you said, it definitely wasn’t guaranteed.
Now, coming to Muse specifically: historically, everything has stayed on your device. In fact, we even say this when you first fire up the app or first log in: we basically say everything is stored on your device locally, it’s private. That’s important to quite a lot of people, sometimes for very practical reasons. There is, for example, an attorney who has case notes in there, but in many cases it comes back to that sense that privacy allows you to be freer; if you feel like you can write stuff down without feeling like the NSA is looking over your shoulder, that’s just a better state to be in creatively. But now we’re starting to move into much-requested features that take us out of the iPad silo. Right now we already have a browser extension and an iPhone app, with very simple capture into the iPad, but eventually we would like to imagine that Muse spans all your devices, that it’s wherever you need it to be. So that’s the multi-device side of things; then it gets even more interesting with the multi-user side. Of course, we know that all these collaborative tools like Google Docs and GitHub and Notion and Figma have really supercharged our work in the modern world, and certainly for remote teams like ours. Now, Muse has a different use case, which is more about developing ideas than making these end work products. So what role exactly the collaboration and sharing side will play for us is not fully known yet; we’ll explore that as we build this stuff out. But here we are confronting this thing where we on the team value privacy. We know that many of our users and customers value that a lot. It makes you feel freer. But then we also know that being able to access your stuff from all your devices and share things with colleagues and friends is immensely powerful. So you’ve been leading the charge a little bit on thinking about particularly the technology side of that trade-off. Where are we at right now?
00:21:55 - Speaker 1: Well, let me start with the way this is done in almost all apps today, note taking productivity style apps.
There is a central server that’s run by the tool provider and that stores all of the users' data in a way that’s accessible to that company.
So you might have a table that’s like documents, and it has the data for all documents for all users in it. Then when you fire up your app on a device, it talks to the server and says, give me the latest data on this document, and it renders it on that device. And then if you share a document with another user, that just becomes metadata in the database that says, for this document row, this user ID can access it. So when that user’s device requests the document, they have authorization to get that data.
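A rough sketch of that conventional cloud model, where the server stores plaintext content and sharing is just an access-control row; the table and column names here are invented for illustration, not any product’s actual schema:

```python
import sqlite3

# In-memory sketch of the conventional cloud model: the server stores
# plaintext document content plus an access row per collaborator.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE documents (id INTEGER PRIMARY KEY, owner TEXT, content TEXT);
    CREATE TABLE document_access (document_id INTEGER, user_id TEXT);
""")
db.execute("INSERT INTO documents VALUES (1, 'mark', 'Q3 planning notes...')")
db.execute("INSERT INTO document_access VALUES (1, 'mark')")
db.execute("INSERT INTO document_access VALUES (1, 'adam')")  # "sharing" = adding a row

def fetch_document(user_id, document_id):
    # The server checks the access metadata, then returns plaintext
    # that the server itself can also read.
    row = db.execute(
        "SELECT content FROM documents d JOIN document_access a "
        "ON d.id = a.document_id WHERE d.id = ? AND a.user_id = ?",
        (document_id, user_id),
    ).fetchone()
    return row[0] if row else None

print(fetch_document("adam", 1))   # 'Q3 planning notes...'
print(fetch_document("eve", 1))    # None: no access row, no data
```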
00:22:41 - Speaker 2: Right, so when I make a new blank document in Notion or Google Docs and type in three characters, those three characters go into a record in a database somewhere on Google’s servers. And that, the cloud, as I believe they call it, is precisely what makes it so easy to share, because if I want to send this to someone else, I don’t need to take it off my device and put it on theirs. It’s already on Google’s servers; Google essentially has ownership of that, and I’m just accessing it through this client or front end. So giving one other person, or some number of other people, access through their client or front end is a relatively straightforward operation, right?
00:23:14 - Speaker 1: And notably with these modern cloud-based tools like Google Docs and Notion, you typically don’t even have the data locally.
So if you type in this document, save, exit, and later you’re off the internet and you want to open up your document, well, too bad, the data is not there. So that’s the standard approach, and we remain open to doing that for Muse. It has a lot of benefits in terms of relative ease of implementation, of course, providing all of the features that you want, as well as things like backups in the case that the user loses all their devices or something.
But we’re also very interested in exploring a second way where you get the benefits that we’ve associated with cloud-based collaboration, being able to access your data from any device, being able to add collaborators and collaborate in real time, all of those things without the tool provider in this case Muse being able to read your data.
So the way I pitch this is Signal meets Google Docs. You have the security model of Signal, where data is end-to-end encrypted and you’re talking to your collaborators and only you and they can read that data, but you have the rich documents, multimedia collaboration, and multi-device synchronization that you would associate with Google Docs. That’s quite a hard technology and product problem, but we are looking into it.
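To make the Signal-meets-Google-Docs idea concrete, here is a minimal sketch, not Muse’s actual design, using Python’s cryptography package: collaborators who already share a per-document key encrypt each edit on the client, so the sync server only ever stores and relays opaque blobs. How that key gets to the collaborators safely is the key management problem discussed below.

```python
from cryptography.fernet import Fernet

# Assumption: collaborators have already agreed on a per-document key
# out of band (that is exactly the key-management problem discussed later).
document_key = Fernet.generate_key()

server_storage = []  # what the sync server sees: opaque ciphertext only

def push_edit(edit_text: str) -> None:
    token = Fernet(document_key).encrypt(edit_text.encode())
    server_storage.append(token)          # the server can store and relay, but not read

def pull_edits() -> list:
    f = Fernet(document_key)
    return [f.decrypt(token).decode() for token in server_storage]

push_edit("Add card: interview notes")
push_edit("Move board 'Research' into 'Q3'")

print(server_storage[0][:20], "...")      # ciphertext, meaningless to the server
print(pull_edits())                       # collaborators with the key recover the edits
```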
00:24:26 - Speaker 2: You and I were both co-authors on a paper called Local First Software.
And this was much more researchy, forward-looking technology, sort of removing the cloud from the equation completely. That’s not quite what you were just describing there, but it does have some of the same elements of trying to make it less about what’s out on these servers and more about what’s on the individual users' devices. I think we touched on encryption briefly, but as we described in that paper, we don’t necessarily think that building a truly 100% local-first application is really in reach today, certainly for a small team like ours. So parts of that we do think are achievable, and for other parts we’re still waiting for the technology to get good enough.
00:25:13 - Speaker 1: Yeah, and I might say that I still think local-first, at least as I understand it, is possible for us.
The thing that is less valuable and interesting to me is pure peer-to-peer. So there are some apps or technologies where, if you and I are collaborating, we’re sending packets back and forth directly to each other, and there’s no server interposed, which has obvious potential security benefits, and it also gives you a certain resiliency against DDoS and other issues with a central server.
And for me, having servers on the internet is not necessarily that big of a deal. And in many cases, there’s just no way around it.
For example, you need to talk to a central server to get apps onto your iPad because Apple requires it. And for me, the bigger deals are that central server being able to read all your data and you not being able to read and write your data if you’re not connected to the central server.
So the world that I imagine is one where you potentially have a server or even servers, but the servers are more like symmetric nodes, you know, they behave more like any other node like your phone or your tablet, and it’s not so much of a special superpowered case.
00:26:19 - Speaker 2: And then on the encryption side, you referenced Signal, and I think one of the places they’ve been very influential is, I want to say they started as Open Whisper Systems, sort of a security consultancy or something like that, and they not only made this secure messaging app, but they wrote a lot of articles and sort of publicized their approach, and that was something that’s then been picked up by others, including WhatsApp and Telegram. And I think the hard part in this is usually the key management, right? Asymmetric key cryptography basically relies on a person having this block of data someplace and no one else having that data, but that’s tricky because that person has to keep track of it. Maybe it’s a bit like a physical key, except if you lose a physical key to your house, you can potentially go to a locksmith and they can crack it open for you; the way these digital keys work, if you lose it, that’s it, you just can’t get access to that data again.
00:27:15 - Speaker 1: Yeah, so there are a couple of things going on with an app like Signal. You might break it up into key management and cryptography. On the cryptography side, this is like, OK, assume magically for a second that everyone has keys and we know who has which keys; then you need an algorithm for using those keys to encrypt the data. And there’s been a lot of work on that in industry and in academia. It has been, I think, quite focused on the messaging use case.
And one of the things that we’re excited to do with Muse, if we do pursue this end-to-end encryption path, is making it more general. Like, let’s encrypt data structures and documents and not make something that’s very specific to messaging.
Also, in the case of Signal, I do think they make some specific trade-offs that are appropriate for the grade of security, if you will, that they’re looking for. Signal needs to be resistant to powerful nation-state actors, and in order to do that, you need to make some specific trade-offs that maybe wouldn’t be appropriate for a productivity tool.
But anyways, yes, that’s what happens on the crypto side. Then there’s the key management side.
That’s a whole big challenge, and many people will tell you that key management is actually the harder of those two issues, and there are different ways to do this.
There’s the fully distributed web-of-trust type model, where you build up a model of who has which keys based on a series of, ideally, in-person interactions. So, you know, you and I might meet in real life, we would exchange keys, and we might also exchange information about other people’s keys that we have, and we would sign that information, and then over time you kind of accrete this web of trust, which is where the name comes from. That’s the most distributed, but least practical, model.
The most practical, but least secure, model is you just ask the server who has what keys, and that’s very convenient. You get all the benefits of a centralized server telling you exactly the right answer. The downside is the server can just lie and say, this is Adam’s key, when in fact it’s the server’s own key and it’s reading all of Adam’s data.
And the thing that I’m most intrigued by, and that we’re exploring a little bit with Muse, is more of a middle ground, where you get some of the benefits of the centralized registry but you also get some of the benefits of direct or decentralized verification, especially where you need it. One example of this is in Signal: I think they have it set up where you can look up people by phone number and Signal will essentially send your data to their devices, but when you’re next to someone, you can also verify each other’s QR codes, and that lets you know that you have in fact verified this person; I think it gives you a stronger level of security. So I think there are more things we can do along that vein.
Another example is from the Zoom white paper; you know, Zoom is working on end-to-end encryption. They’ve said for a long time that they have it, but they didn’t really have it as we understand end-to-end encryption, because they’ve had this key management problem where everything was encrypted, like TLS is encrypted, but they control and administer all the keys, so they could just impersonate people if they wanted to. But they’re trying to move to a proper model where you can, in fact, verify people if you desire to. And one of the things that they’ll try is, on your Zoom video, you’ll have a little code, and when you’re on the call, you can say to the person on the other side, you know, the security code is ABC 123, and that’s very hard to impersonate in real time, obviously. So if you’re correctly verifying the code on each side, you know, OK, this is not being tampered with. And there are other techniques that they’re exploring too. But it’s this basic idea of mixing the benefits of a central registry for keys with more distributed, ad hoc verification where you need it.
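The out-of-band verification idea can be sketched as both parties deriving a short code from the public keys the server handed them, then comparing it over a channel the server can’t tamper with, like reading it aloud on a call or scanning a QR code in person. This is only in the spirit of Signal’s safety numbers or Zoom’s on-screen code, not their actual algorithms:

```python
import hashlib

def safety_code(my_public_key: bytes, their_public_key: bytes) -> str:
    # Sort the keys so both sides compute the same code, then hash and
    # truncate to something short enough to read aloud or show as a QR code.
    material = b"".join(sorted([my_public_key, their_public_key]))
    digest = hashlib.sha256(material).hexdigest()
    return "-".join(digest[i:i + 4] for i in range(0, 12, 4)).upper()

# Hypothetical key bytes, as each participant received them from the server.
adam_key_seen_by_mark = b"adam-public-key-bytes"
mark_key_seen_by_adam = b"mark-public-key-bytes"

print(safety_code(b"mark-public-key-bytes", adam_key_seen_by_mark))
print(safety_code(b"adam-public-key-bytes", mark_key_seen_by_adam))
# If both people read off the same code, the server didn't swap in its own key.
```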
00:30:14 - Speaker 2: I think we’ll be looking for the sweet spot there between trying to give some reasonable privacy protections and not having a very difficult, very technical and demanding key management system, for example. That’s probably exactly what Zoom is grappling with right now.
I think of the canonical example of inaccessibility as something like PGP. I’ve used GnuPG for a number of years; I’m pretty good with it, I’m handy at the command line, and even so, it’s just very easy to mess it up: get the wrong key, delete the key, encrypt the wrong thing. It’s just very, very unforgiving even for a technical user, and so way, way out of reach for more casual use.
And SSL, as you mentioned, HTTPS websites, is a great example where we did manage to find a middle ground: real, solid encryption that really does make a difference in terms of the things it enables, but it’s very accessible. It does not require some kind of deep technical cryptography key management thing in order to get the benefits of it.
00:31:24 - Speaker 1: The last thing I might say on key management and crypto and user interface trade-offs and stuff is that to us it’s very important that we design this all holistically.
There’s a lot of research, for example, on cryptography schemes that assume key management is solved, or on collaboration product designs that assume the server can just read all the data. In order for this to work with Muse, all of the product design, the collaboration technology, the key management, the cryptography, and the mental model of how people think about documents need to line up. And I think that’s why this has proved quite challenging; it’s not something that I think very many people have grappled with. But I’m optimistic that if we can get all those things to line up in the right way, it’ll be a very powerful combination.
00:32:06 - Speaker 2: Yeah, I’m excited about that potential, finding a new set of trade-offs. I feel like most tech industry products are essentially binary. You either have the totally local, old-school program that saves to your hard drive: great, no one else has it, but you just don’t have any of those sharing and collaboration features. Or you have the fully-in-the-cloud thing, which is just so incredibly useful, and yet you just know, OK, I’m giving up every single keystroke I type into this to Google engineers and Google machine learning algorithms and the NSA and anyone in the future who may acquire this data, for essentially an infinite amount of time. I’m just giving up and saying they have full access. It’s the trust-the-vendor model. And I’m excited that we can potentially find a different set of trade-offs, a different sweet spot, a different place to be that isn’t one of those two extremes.
Exactly, yeah. And normally when the topic of privacy comes up in the context of the tech industry generally, one of the key things people are talking about is my data, and I think we’ve almost entirely been talking about what I would call content, so maybe content privacy. I make a document, I write a note, I record a video: that’s my content, and I want to know that me and the people I have chosen to share it with are the only people who have access to it.
But the other category, and maybe an even more common one in these discussions, is what you might call analytics or telemetry data, and it’s a really interesting question when you frame it as data ownership. Take something like a motion sensor, for example, a smart home kind of motion sensor that is logging entry and exit to a location. If I put that in my home and I’m logging that into a computer I control, it feels pretty clear to me that that data about the comings and goings in my home is mine. But if it’s in another building, say a public building, and I walk in and my arrival is recorded, and of course with cameras your face is in the data, is that mine? Well, probably not, but then that is part of the discussion. It’s like, well, wait a minute, you’re kind of recording me or tracking me; I feel like that belongs to me somehow, or you’re invading my privacy. But then, you’re in that other person’s building, and the building is a public place or not something that belongs to you, so it’s a funny thing to discuss in a way.
00:34:25 - Speaker 1: Yeah, OK, so to unpack that, I think there’s actually 3 different things going on there.
One is the classic analytics data, and the example that I might give for that is, say you have a web app and it’s indicating how often this user uses certain features, like, do you use the export feature? Do you use the print a PDF feature, things like that. That’s what I would think of as analytics.
Then there’s the PII, the personally identifying information. This is information that ties some abstract user to you as a real person. So typically emails, names, addresses, phone numbers: these are things that take an abstract account, you know, a user ID, whatever, and tie it to Adam Wiggins. And then there’s this third issue of data in public places. I think that’s another huge challenge, and to my mind, those are three different, quite hard issues.
00:35:11 - Speaker 2: And notably, you know, I mentioned GDPR in passing before: cookie warnings are this huge thing in Europe, where basically almost any site you go to pops up this warning. It’s kind of regulatory things gone wrong, where they were trying, I think quite reasonably, to say that if a site is going to track you in some way, it should seek your consent.
But now, essentially, most websites just do a kind of blanket consent-seeking, because most websites set some kind of cookie. But the detail of it is that it actually only matters, exactly as you said, if it’s personally identifying in some way, if it’s sort of tracking you around.
So there are a lot of cookies that you might set that are more kind of anonymous or more kind of general telemetry, but are not about me specifically.
And so for example, the Muse website does have analytics, you can see that if you do view source, but it does not have a cookie warning, and that’s because the type of analytics that we use is essentially anonymous. It doesn’t track you around, it doesn’t connect to what you’re doing somewhere else, it essentially just counts who gets to our site, which is very useful to have. It’s nice to know, especially with referrers or whatever; if there are suddenly 1,000 new users popping in, wow, where did these folks come from? Oh, I see, we got linked on some high-profile site. We can see that in our analytics. It’s very useful to have for our business.
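One way such anonymous counting can work, and this is a common pattern among privacy-focused analytics tools rather than necessarily exactly what Fathom does, is to keep only aggregate counters and never store per-visitor identifiers:

```python
from collections import Counter
from typing import Optional
from urllib.parse import urlparse

# Aggregate counters only: no cookies, no user IDs, no raw visitor records stored.
pageviews = Counter()
referrer_domains = Counter()

def record_hit(path: str, referrer: Optional[str]) -> None:
    pageviews[path] += 1
    if referrer:
        referrer_domains[urlparse(referrer).netloc] += 1

record_hit("/", "https://news.ycombinator.com/item?id=12345")
record_hit("/", "https://news.ycombinator.com/item?id=12345")
record_hit("/memos/local-first", None)

print(pageviews.most_common())         # which pages are getting traffic
print(referrer_domains.most_common())  # "wow, where did these folks come from?"
```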
00:36:28 - Speaker 1: Yeah, and GDPR, by the way, is I think a good example of the importance of systems thinking. I think the failure of that legislation, and perhaps that sounds blunt, but I think that’s the correct assessment, was due to not thinking about it as a systems problem, where you have to deal with the realities of what people are actually going to do, what businesses are actually going to do, what the capabilities or non-capabilities of the government are, things like that.
I do think that the differentiation between analytics and PII is important and good.
To my mind, those are just very different beasts, as well as being different from content. This is something that has come up in discussions with users, and, you know, sometimes with a privacy fundamentalist who says, everyone has a right to privacy, no data should ever be transmitted over the network without my explicit permission. You know, maybe, but the reality is, analytics is hugely valuable information that for most people has a relatively low cost in terms of their privacy, if you will. So it’s not surprising that people end up making that exchange: it’s much easier, and therefore cheaper, to offer software if you have access to this analytics data, whereas it’s a relatively low cost to individuals in terms of their privacy. I do think the PII and content stuff is much more tricky, and PII is also slippery, because you can collect data that’s quote-unquote anonymous, but it’s actually very easy to deanonymize if you collect enough of it: screen size, OS version, browser version, yeah, the browser...
00:37:55 - Speaker 2: Fingerprinting stuff, yeah. For a little while I was following kind of the Tor browser world of things, which is another one of these.
Well, that’s even a step further on the privacy focused products, I think.
It’s very interesting stuff that that team is doing, trying to make truly anonymous web browsing possible. And one of the things they have had to face up against is browser fingerprinting, which is exactly what you said: knowing the exact resolution of the screen and what version of the operating system someone is on, and tying that together with some other data, some timestamps and things, you can work backwards from there to pinpoint someone, or at least tell that it’s the same person repeatedly, surprisingly well.
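A sketch of the fingerprinting idea: a handful of individually innocuous attributes, hashed together, can re-identify the same browser across visits even with no cookie set. The attribute values below are invented:

```python
import hashlib

def fingerprint(attributes: dict) -> str:
    # Each attribute narrows the anonymity set; combined, they are often
    # nearly unique, so the hash behaves like a persistent identifier.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

visit_monday = {
    "screen": "2560x1600",
    "os": "macOS 11.0.1",
    "browser": "Safari 14.0.1",
    "timezone": "Europe/Berlin",
    "fonts_hash": "a91f03",   # hypothetical derived value
}
visit_friday = dict(visit_monday)  # same machine, different day, no cookies

print(fingerprint(visit_monday) == fingerprint(visit_friday))  # True: recognized anyway
```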
00:38:34 - Speaker 1: Yeah, I also think that PII can be separated from contact information, and Muse does something interesting here.
We do require an email to use the app because we need to be able to communicate with you for various reasons.
But there’s no requirement that the email is, like, your only email or your canonical email, or that it matches any other emails. So a lot of people just put in their default personal email, but a lot of people will create an email that’s specific to Muse, kind of like a Muse-specific inbox, or they’ll just use a burner email that has no identifiable connection to their real identity. And again, this is an example of where you can tease these things apart: you can separate PII from the ability to contact someone, from analytics information, from content information.
00:39:16 - Speaker 2: And speaking of Muse on the analytics side of things, I mentioned our website, but the product itself, the iPad app, does report back analytics on usage, and that is for improving the product.
A huge one, for example, is crash reports. So when iPadOS 14 came out a little while back, we, embarrassed to say, had a very rocky patch for a couple of weeks of crashiness, and that was partially changes in the OS, partially problems on our side. We eventually sorted it all out, I’m happy to say, but we needed reliable crash reports to be able to see, first of all, that there is a problem, secondly, to try to home in on what that problem is, and then once we’ve fixed it and rolled out a new release, whether the rate of this particular kind of crash has gone down. That has a big impact on our ability not to be flying blind when trying to improve the product.
But it also includes things like feature usage. So a little while back, Leonard was redesigning the action bar, which is what comes on screen when you tap a blank space and you get the little couple of buttons down at the bottom, and we were really pondering whether the undo and redo buttons were worth including, because they took up a lot of space and most people use the gesture, or you can use the keyboard shortcut in the case where you have a keyboard. And so we thought, well, is it really worth the screen real estate this takes up? And we could actually go ask the question of what percentage of undos come from the buttons versus a gesture or keyboard shortcut. I forget exactly where it came out, I think it was like 10 or 15% of undos were from the action bar button, and we said, you know, that’s just enough, I think it’s worth keeping; we’ll maybe make them a little smaller to reflect that. So we can make product choices that way.
And of course there are other ways to answer that. You can go ask people, of course, and you should be doing that, and occasionally we get to observe people using the product and so on. But the ability to go and get real data about those kinds of questions really helps us improve the product, so it can become a better product faster.
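The underlying query for the undo/redo question is just counting events by how the action was triggered. A toy sketch with invented event names, not Muse’s real telemetry schema:

```python
from collections import Counter

# Hypothetical usage events reported by the app, each tagged with how
# the undo was triggered.
events = [
    {"action": "undo", "source": "gesture"},
    {"action": "undo", "source": "action_bar"},
    {"action": "undo", "source": "gesture"},
    {"action": "undo", "source": "keyboard_shortcut"},
    {"action": "undo", "source": "gesture"},
    {"action": "undo", "source": "action_bar"},
]

by_source = Counter(e["source"] for e in events if e["action"] == "undo")
total = sum(by_source.values())
for source, count in by_source.most_common():
    print(f"{source}: {count / total:.0%}")
# e.g. gesture: 50%, action_bar: 33%, keyboard_shortcut: 17%
```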
00:41:17 - Speaker 1: Yeah, totally. And again, this seems to me like an eminently reasonable and good trade-off for both sides involved. If we had to develop Muse without access to this information, it would be much, much harder, and it would be a worse product.
You know, maybe it would cost twice as much or be half as good. Is that worth this very marginal amount of privacy, in terms of analytics information? I don’t think so.
Now, I’ve discussed a lot of this in terms of favoring a sense of trade-offs and opt-in decisions over fundamentalism in any direction. I do think a huge issue with privacy, again in the electronic realm, is that it’s very hard to understand what’s going on. You know, with Muse, I like to think we try to be a good actor, we try to do reasonable things and nothing nefarious, but it’s basically very easy to do really bad stuff in terms of privacy, especially on non-web platforms. And to my mind, that’s actually a big technology gap: how do we empower users to actually make these trade-offs instead of just having to throw up their hands? There has been some movement on this. I think Apple is coming out soon with some additional required disclosures from developers about what information you collect and how you use it and so on, and that’s certainly a step, but I suspect there’s much, much more to do here.
00:42:30 - Speaker 2: In general, I think I and most people would point to Apple as one of the best actors in terms of moving the ball forward on what users can expect privacy-wise. And I think this stems from iOS being, from the beginning, a very securely designed operating system, much more so than the classic desktop and server operating systems that came before, and they’ve continued to build on that.
I think of something like the on-screen notifier when an app accesses your clipboard. When they rolled that out a little while back, suddenly you saw all these slightly shady things that many apps were doing, including, I think, TikTok pulling from the clipboard on every single keystroke, for example. And you need that: you need a clipboard that can move between apps, and apps that are going to do interesting things need to access the clipboard. But finding ways to surface that exposes the people who are not acting in good faith, not using the operating system’s capabilities for the benefit of users but for their own benefit, or at the very least being deceitful about what the user expectations are versus what they’re actually doing. So certainly got to give props to Apple on that. They’re not perfect, but they’re definitely one of the best players, I think, in our industry, certainly at that scale.
00:43:40 - Speaker 1: Oh yeah, I mean, lots of good things going on there and it helps when you have control over the platform because you can manage access to things like the camera roll and the microphone and so on.
I guess my point here is that I think there’s just a huge gap remaining, especially as you look at the meat of it: the data and the network connections.
You know, apps can basically talk to anywhere on the internet they want, and they can do whatever they want with data that you input into the app. So a good example of this is, perhaps you have an app for composing a message. The app can, and in fact many of them do, just send every keystroke you type, regardless of whether or when you hit send. So you might not realize it, but when you’re drafting a message, whoever is running this app has a copy of that draft forever, even if you later decide, oh, that’s actually not a good idea to send, and backspace it away. That’s really questionable, but it’s also really tough to guard against. Like, what would the interface be that prevents apps from doing that, or even alerts users that it’s happening? It’s hard.
00:44:32 - Speaker 2: Yeah, I have a tendency to type messages in progress into my local text editor if they’re anything more significant than, you know, a few words for a quick reply.
Very programmer type of thing to do, but first of all, I like the better editing capabilities, and there’s also the sense of knowing that it’s not a cloud-connected thing, that when I hit that close button, maybe it prompts me about whether I want to save, and that’s it. Compare that to the cloud, where anything you ever put into it is just always saved instantly, which by and large is a huge win for users. Unsaved documents, or things you forgot to save and then your computer crashes, have been a massive source of user frustration for a very long time, and this modern era of mobile apps and cloud that don’t really have a concept of needing to save things, where everything you type is just saved, is mostly a big user experience win. But for me personally, yeah, when I’m composing a message, I like to know that it’s in this ephemeral place, and that if I decide, ah, this isn’t quite right or whatever, I just delete it and I know it never went anywhere.
00:45:36 - Speaker 1: Yeah, fair, that makes a lot of sense.
Oh, and by the way, on mobile, another huge way that the mobile platforms achieve security is just banning huge categories of stuff, especially arbitrary code execution, plug-ins, extensions. These are capabilities that, on the one hand, are incredibly powerful.
You could argue that they’re basically required for a lot of the power-user workflows that you see on desktops, but they would be super gnarly to sandbox. And to be clear, I’m not saying that that’s a wrong decision, or that Apple or anyone else hasn’t done enough, or that they’re making bad calls here. It’s more to point out that I think it’s an incredibly important research problem: is there a way to get the benefits of third-party plug-ins and security at the same time? Right now it’s very much one or the other. In the same way, is it possible to get collaboration and end-to-end encryption at the same time? It’s an open research question to see if you can figure out how to do that.
00:46:25 - Speaker 2: A little earlier you mentioned the term privacy fundamentalism.
And I like this concept of trying to better understand how much privacy matters to people, how much it really matters. For us, you know, from a business perspective, we can sit here, and in fact we do, and talk about the big philosophical things we believe in regarding privacy and what the technology industry should do and the things our society is going to be grappling with around this. But we’re a small business; we need to do things that make sense practically for us. The places we invest effort, time, energy, money, you know, that is zero-sum. We could be building out other features, and if we’re looking into end-to-end encryption to make a Signal-style thing for creative professionals to share their little notebooks, you know, that comes at the expense of other things.
How important is it really to our potential customers? One of the pieces of prior work I was looking at when we were thinking about this episode was essentially a survey of people’s attitudes about privacy.
And in this case, I think it was like an internet of things kind of category, so I think this is in the context of smart home or something like that. But I like that they broke things out into three categories in terms of people’s attitudes.
There were the privacy fundamentalists, which you described, people who would trade off almost anything for the privacy aspect of things. And then you had another category, also a small group but still a significant one, which they called the privacy unconcerned. They just said, who cares? Nothing I do is that interesting. Google has all my data anyway, what difference does it make? I don’t care.
But then the biggest category by far is what they call the privacy pragmatists. That’s certainly the category I put myself in, which is: this is something I care about, I think it is important, it has impacted my life in direct ways in the past, and I do see it as a big and important topic for our industry and for our society going forward, but I’m not willing to trade off everything for it. There’s a bunch of other things I care about in terms of the utility of the products I’m using. And so finding that balance, I think, at least when I put it in this frame, I’m going to make a wager that a lot of our users, both current and future, are privacy pragmatists.
00:48:34 - Speaker 1: Yeah, I think that’s right, and I agree on privacy fundamentalism, I think that can actually mean two things. I think it can mean that one as an individual has very high standards for privacy, and I think that’s totally fine for an individual to say, and I think for some people, it’s absolutely the right decision.
For example, if you are acting as an informant, or you’re doing something that the government doesn’t like, so on and so forth, right? That’s the correct trade-off to make. The thing that I am not as sympathetic towards is the sense of privacy fundamentalism that says the entire system should be subject to it.
And this is where I come back to this idea of privacy as a fundamental human right. That sounds very appealing. But on the flip side, it’s saying that no, you should not have the ability to choose to be a privacy pragmatist. You shouldn’t be able to use software that takes a different set of trade-offs with respect to privacy. And that’s something that I’m not very sympathetic to.
And I would furthermore say that I also think it’s valid to want to live in a society where many or all people choose to be privacy fundamentalists, or choose to have very strong standards for privacy. But I think we need to recognize that’s an enormous amount of work. It will cost many, many millions of dollars to develop and operate that software, and it will require perhaps trade-offs in terms of our day-to-day experience with the software. And if you ignore that cost, you’re fooling yourself and being intellectually dishonest, and you’re going to end up not achieving it. One other possible angle on privacy fundamentalism, again going back to governments, is the possibility that a loss of privacy is a one-way, very destructive ratchet, and that for that reason you want to be very careful. This is the sense that once a government has access to data, it might be extremely reluctant to give up that grip, and over time it’s going to tend to get access to more and more through various means. And if you see the endgame as being bad, which some people do and some people don’t, but if you see that endgame as being bad, then you’ll want to resist, by basically any means possible, the progression of that ratchet.
00:50:37 - Speaker 2: Well, talking about governments makes me want to recount my personal journey into thinking about privacy in a broad way. I think for a lot of Americans, it was the case that the Edward Snowden incident, which revealed how much kind of digital surveillance the US government was doing on its citizens was a bit of a wake up call.
Now, for me, I think I’d always had the vague sense that, you know, this is something important and I know digital technology is going to change the game, but I don’t think I’d given it deep thought, and by just a coincidence of life, this was all unfolding right as I was moving to Germany.
The startup I was working for at the time in Berlin organized a little outing to see the documentary Citizenfour in the theater, which was a very interesting experience. They follow Edward Snowden around with a camera before the news had broken, essentially, and so you see all of that unfold, and you see the crazy lengths he goes to: the Tails Linux distribution, putting a blanket over his head, which is actually a very reasonable security precaution when he’s typing in his password, that sort of thing. A very interesting film.
But that in turn led to me having a lot of conversations with my colleagues there, many of whom grew up in Germany, and for them the East German Stasi was very much in the recent past. That was a secret police force that ran, at least as far as we know, the most extensive government monitoring of its citizens to date.
Some crazy thing like a third of the entire East German population was a Stasi informer. And when the Berlin Wall came down in 1989 and these records were revealed to the population, people realized how much had been tracked, largely using what were at the time new technologies: small microphones and wiretaps and, you know, recording things to tape and stuff like that. Just the extent of it shocked people. They had no idea that such a thing could be possible.
Actually, another great film worth checking out, fiction, but I think it captures this well, is called The Lives of Others, a German film that depicts a fictional Stasi officer and the monitoring they’re doing.
But yes, I spoke to these folks for whom this stuff is in living memory. They grew up when this was happening, right? It’s only 31 years ago now; at the time, 24 years ago. And they said, you know, we know what it looks like. Maybe I had almost a little bit of an innocence, you might say, insofar as being this American where I guess I basically just feel like, most of the time, government can be bureaucratic, it can sometimes be incompetent, but ultimately most of the people who work in government and the systems that are in place are basically there to serve the citizenry. And yeah, there are a lot of trade-offs about law enforcement getting access to wiretaps and stuff like that, but ultimately they want to do that to catch the bad guys, keep us safe, all that kind of stuff. And speaking to the German folks, they said, no, we’ve seen what it looks like to have a society where the government so heavily monitors its citizens in the name of protecting that society. Everyone who worked in this state surveillance apparatus believed they were doing something really good; they were keeping their home safe. And I’m not saying I necessarily converted to seeing the world that way, I do see it as a series of trade-offs, and yet that was a powerful experience for me, and I think it has strongly influenced how I see this topic as it unfolds in the technology world.
00:54:07 - Speaker 1: Yeah, absolutely, and I think it shows how dangerous it might be to just assume or hope that everything will go well if we concentrate an enormous amount of data in a very legible way in one or a few powerful entities.
00:54:23 - Speaker 2: Yeah, Silicon Valley has maybe already grappled with this a little bit, which is: you have a bunch of young, optimistic people building powerful new communications and other kinds of digital technology, and they tend to think about the positive case. I think people who get into tech tend to be optimists; they tend to think of technology as a force for good, and they’re not thinking about the ways that it can cause harm. Technology is neutral: it can be used as a weapon, it can be used for harm, it can decay the things that a society holds dear. I don’t think that’s a reason to fear it or to have a knee-jerk reaction, but I do think there should be a clear-eyed sense of, OK, as we open up brand new ways to do all kinds of things with our information lives thanks to these digital technologies, what’s that going to mean? Not to say that we can fully know or fully predict what the impact of this stuff is, but I think we should be mindful and have some caution as we go. That certainly goes for you and me, who work on new products where we’re trying to bring new capabilities into people’s lives: what are the risks, what are the downsides? And certainly the privacy product issues that we’re grappling with right now are very much in that category for me, for sure. Well, if any of our listeners out there have feedback, for example, we’d love to hear how you think about privacy and digital tools. Go ahead and reach out to us at @museapphq on Twitter, or you can email us at hello@museapp.com. We always like to hear your comments and ideas for future episodes. And Mark, I certainly hope you’ll keep us in the loop about how you’re thinking about these trade-offs on the technology side, on the product side, and on the philosophical side, as we continue to explore what we can do with the collaboration and multi-device capabilities of Muse.
00:56:11 - Speaker 1: Absolutely, I’m really looking forward to this work.
00:56:14 - Speaker 2: All right, have a good one.
00:56:15 - Speaker 1: You too, Adam.