← All episodes
Metamuse Episode 1 — March 24, 2020

Tool switching

Muse has a modeless interface with no onscreen toolbars. Mark and Adam talk about the long research journey that led us here.

Episode notes

Transcript

00:00:00 - Mark McGranaghan: So I imagine something like that, where you can pick, you know, your favorite black ink, your favorite highlighter, your favorite accent pen, put them in your little toolbox, and you have this small, very curated palette that you can swipe in and out when you’re actively working on a document, and you’re not confronted with the Photoshop-style 200 buttons, most of which you don’t know what they do.

00:00:33 - Adam Wiggins: Hello and welcome to Metamuse. Muse is the software for your iPad that helps you with ideation and problem solving. But this podcast isn’t about Muse the product, it’s about Muse the company and the small team behind it. My name is Adam Wiggins and I’m here today with my colleague, Mark McGranaghan. How are you doing, Mark?

00:00:52 - Mark McGranaghan: Doing all right. You know, it’s uh interesting times over here in Seattle with the virus, but otherwise doing pretty well.

00:00:56 - Adam Wiggins: This is a good moment to be on an all remote team, right?

00:00:59 - Mark McGranaghan: Indeed.

00:01:00 - Adam Wiggins: So the topic we wanted to talk a little bit about today is tool switching.

And so this is the idea that if you take your stylus, your Apple Pencil, and you touch it to the screen, what happens? Is it inking? Is it erasing? What color is the ink? Is it something else totally different, like a lasso or a scissors tool? And this is a deeper topic than it might seem, because it comes down to some values that Muse has, some principles, perhaps, that we try to fulfill in our design, including modelessness and on-screen chrome. But it also touches on our journey from being a prototype in a research lab through to an MVP and beta, and hopefully on our way to a publicly released commercial product that anyone can use.

00:01:51 - Mark McGranaghan: Yeah, it’s been a really challenging problem, much more so than I thought it would be coming in. One does not simply ink on the iPad, it seems.

00:02:01 - Adam Wiggins: Indeed, yeah. And there’s a whole set of technical challenges there; maybe one of these days we can get Julia on here to talk about those, which would be great. But yeah, maybe we can go back to the beginning. Can you frame up the problem for us a little bit? What were we trying to accomplish? Why not just have a toolbar at the top, where you tap on a tool like you would in Photoshop or Procreate or something, and then you’re off to the races?

00:02:24 - Mark McGranaghan: Yeah, so as a reminder, mechanically what we’re trying to do here is: when you touch the Apple Pencil to the screen, do you get ink, do you get eraser, what type of ink are you getting, things like that.

And the very standard way to do this in iPad apps is you have a persistent toolbar, often at the top of the screen, or some other palette, where if you want to erase, you tap the erase icon, and if you want red ink, you tap the red ink icon. If you want to highlight, tap the highlighter, and so on. And that’s a mode: it persists until you go back to the toolbar and tap it again.

So there are two main problems with the standard approach. One is that you have that toolbar in your face all the time, which is a pretty big deal on the iPad. It’s a relatively small device, and we want as much space as possible for your content and your work, and for you to not always be looking at chrome and toolbars and buttons and other stuff that isn’t what you’re actually trying to think about and do deep work on. So that’s the chromeless goal.

The other thing is modelessness. A mode is a property of an interface whereby, when you go to do some physical action, the result depends on some hidden state of the app. In this case, that state is which inking button you have pressed in your palette or toolbar. And the problem is that these toolbars tend to be off to the side of the device, away from where you’re working, so you basically have to keep your attention in two different places. You’re looking at and thinking about your work, the text you’re highlighting, for example, but then you constantly have to remember what tool you’re currently working with.

This is subtly different from, say, a physical highlighter. When you’re holding a physical highlighter, it’s thicker, it’s bright yellow, it’s very obvious what you’re doing, because your hand, your instrument, and your work are all in the same place. That’s not the case with a typical toolbar. And so we wanted to try to find an interface that wasn’t moded like this, and that also didn’t have all this chrome in your face all the time.

00:04:33 - Adam Wiggins: And a great articulation of this modes concept is in The Humane Interface by Jef Raskin. The really classic example he gives there is caps lock, which has confounded many, many generations of computer users: when caps lock is on, different things happen when you press keys; specifically, you get uppercase rather than lowercase. And of course this is really confounding in something like a password field, where you can’t even see the feedback immediately. But even in a case where you can see what you’re typing, you type a word or two and then realize everything’s uppercase, because whether the caps lock indicator is on or not, you either have to remember it or look down at an LED or some other indicator, and you tend not to do that, because your attention isn’t there; your attention is on what you’re writing, as it should be.

00:05:23 - Mark McGranaghan: Yep. This also points to a third issue with standard moded interfaces, which is that you have to physically move your hand away from your work to the toolbar and back again. And if you’re constantly switching between inking and erasing, or different types of inks, that actually becomes quite troublesome.

00:05:41 - Adam Wiggins: So let’s go back in the story a little bit and kind of work through the product and design problem.

So we started from this place of: let’s do the Raskin thing and try to be modeless, and also we don’t want a bunch of junk on the screen; we want as little on screen as possible, to be focused on the user’s content, to keep all the space for your stuff and not for the application’s administrative debris. And so back when we were working on this in a research context, which we should probably explain a little, we set out with this set of goals. How did we first approach that? What were some of the first things we tried, to see if we could fulfill these goals while still letting you, of course, do lots of things with the stylus?

00:06:23 - Mark McGranaghan: Yeah, well, I don’t think we fully knew what we were getting ourselves into.

Pretty early on we had these two goals: we don’t want to have any chrome, and we want it to be modeless. But if you do both of those things in any obvious way, you basically can only have one thing that the stylus does. So for a while our solution was: you can only do black ink, that’s it. Which actually got us surprisingly far. But then we needed to try some other things, and so we did a whole litany of experiments.

We did try some standard toolbars and palettes; we tried to make them as small and minimal and unobtrusive as possible. We also tried various quasi-modes, which is a term I want to introduce here. A standard mode is when you do an action to trigger the mode, then you go and do your work, and then you go back to the mode switcher to switch it again. A quasi-mode is when there’s something you’re holding down, like the shift key or the command key or some other control button, while you’re doing the action with your other hand.

00:07:28 - Adam Wiggins: Yeah, so going back to the caps lock example: caps lock and shift do the same thing. But the difference is that with shift, you are not likely to forget you’re in the mode, because you’re physically holding the button down. And if you ever get confused about how to leave the mode, you just release, stop doing things, and you go back to your default state.

00:07:47 - Mark McGranaghan: So we tried all kinds of quasi-modes. We didn’t necessarily have a keyboard, which is the obvious place to invoke a quasi-mode with some kind of control key, but I think we tried using a physical keyboard, which you can sometimes get with tablets. We tried pressing the volume button on tablets. We tried putting your thumb over the camera so that it registers a black image. We tried pressing on special places on the screen, like press in the bottom left corner if you’re a right-handed inker. So there were various experiments with quasi-modes.

00:08:21 - Adam Wiggins: Also worth noting there that in many cases, so Muse runs on the iPad, but In the context of the research lab, we were building for a number of different tablet stylus platforms, including the Microsoft Surface, Google’s ChromoS, and, um, I think we might have even done something with Android at one point.

And those platforms actually have different affordances, different hardware capabilities. Notably the Surface, for example, has this reversible stylus where the back is a quote-unquote eraser, which is actually really nice, because again, you have that physical reminder, just like the highlighter example: you’re holding the thing in your hand in a reversed position. It’s clear, you can see it, you can feel it. And so you flip the thing around to erase and flip it back. Unfortunately, the iPad doesn’t have that. Other platforms, like Chrome OS, for example, have a barrel button, but all of that has certain restrictions tied into the operating system. So we tried quite a lot of crazy stuff on this.

00:09:15 - Mark McGranaghan: Yeah, we also tried some stuff just using the stylus differently.

So one experiment that you had done was using the stylus to write special symbols. You called them glyphs, and it was something like: if you draw an X, that’s cut, and if you draw a downward V, that’s paste. So we did that experiment.

We also tried using the stylus at different angles relative to the tablet.

So typically when you’re using a stylus, it’s pretty vertical with respect to the tablet, but by holding it a different way, you can make it almost parallel with the tablet, sort of like you’re doing a pencil-shading motion. And there are sensors in most devices to detect that altitude. So we actually used that angle to trigger different behaviors of the stylus.
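To make the altitude idea concrete, here’s a minimal sketch (not Muse’s actual code) of classifying a stroke from the angle a platform API reports, such as UIKit’s UITouch.altitudeAngle, where 0 means the stylus is parallel to the screen and π/2 means perpendicular. The 30-degree threshold is an invented value for illustration.

```python
import math

# Hypothetical sketch: map a stylus altitude angle (radians) to a tool.
# 0 = stylus parallel to the screen (shading/overhand grip),
# pi/2 = perpendicular (upright writing grip). Threshold is an assumption.
OVERHAND_THRESHOLD = math.radians(30)

def tool_for_altitude(altitude_angle: float) -> str:
    """Pick a tool based on how upright the stylus is held."""
    if altitude_angle < OVERHAND_THRESHOLD:
        return "move"  # low, shading-style grip: move/resize cards
    return "ink"       # standard writing grip: draw ink
```

The appeal, as described below, is that the grip itself shows you which "mode" you are in; the catch is that the classification is only as good as the altitude readings the hardware provides.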

00:10:06 - Adam Wiggins: Yeah, notably, that one was the one we found most promising. I think our Muse design article included that, and Julia gave a talk at a conference last year about that approach, where basically when you hold the stylus overhand, it lets you move cards or resize them, but when you hold it in the more standard writing grip, that gives you ink. And I really loved that. It worked really well in a lot of ways. It was very intuitive, and, you know, different grips are something that comes naturally to humans, and it was pretty hard to get confused between them.

But ultimately that one was killed by a technical challenge, which was that on the iPad, and I wouldn’t be surprised if other platforms have the same problem, I suspect it’s a hardware thing, as you get close to the edge of the screen, for us it was maybe 50 to 80 pixels, which is perhaps a couple of centimeters, it would basically start to produce bad values.

We did a bunch of digging on this and filed some bugs and some other things, but ultimately saw that this is just behavior that’s system-wide. I think no one else ran into it because, who cares about the exact altitude of the pencil when you’re near the edge of the screen? Typically the only apps that really make use of this data are art and drawing apps, Procreate or Paper by FiftyThree or something like that, and you can see when you use them that if you move towards the edge, you lose that sensitivity, the data about the position of the stylus gets bad. But it doesn’t really matter there, because it just changes fairly subtly what’s happening with the brush. For us, the difference between inking and moving a card, which could even have the effect of deleting it if you flick it off the screen, is huge. And so that ended up being a totally non-workable thing for us, and we had to step back from it.

So where did we land? We went through this whole process of trying different things on different platforms, again in the research context, and then later, once we had settled on the iPad as a platform and on the prototype of what would eventually become the spin-out product of Muse. By the time we went to make this transition from the lab to a commercial product, we had actually settled on this position-of-the-stylus approach as the solution. But then it was the early MVP and the early beta testing with real users, not the initial usability tests, that changed things. If you got someone to just try the thing for 20 minutes and taught them how to use this different grip, that worked fine. It was more in practice, over longer use in the real world, that the edge-of-the-screen problem became basically a showstopper. And so now we were in this mindset of, OK, we need to make this more reliable for real-world use, and we had to make the transition.

So what did we eventually do on that?

00:12:55 - Mark McGranaghan: So we ended up with two mechanisms. The first is for erasing. If you press on the screen with your non-writing hand, say your left hand, while writing with the stylus in your right hand, that will actually do an erase. So while a finger is pressed down on the screen, you have a quasi-mode to erase with the stylus, and when you let go of your finger, the stylus goes back to inking.

And then for selecting which ink you use: currently we have three options, a standard black ink, a sort of accent purple ink, and a highlighter in yellow.

We have this flow where you can drag out from any edge of the iPad screen with the stylus, and when you drag out from the edge, it reveals a small, subtle ink palette with those three options, and you can select among those inks like a standard ink toolbar.

And then optionally you can swipe that toolbar back to the edge of the screen and hide it again.

So this is basically the best set of compromises that we could come up with.

We really like the quasi-mode, but you’re fairly limited on the iPad in how many hardware options you have.

So for now we’re just using the one finger down, and that works quite well for erasing, but that only gives you one degree of freedom. And so for the other inks we have this toolbar that you can slide out, and it is still a mode, but you have the option, not the obligation, to see what mode you’re in by swiping the toolbar out. And if you want to just go into pure note-taking mode or pure highlighting mode, you can hide the toolbar and you have 100% of your content again. And there are also other subtle benefits to this approach. For example, you can bring out the toolbar wherever you want. So if you’re making a note in the bottom right-hand corner of your document, you can just swipe out the toolbar there, pick whatever ink you need, and hide it again.
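Putting the two mechanisms together, here’s a small illustrative sketch (hypothetical names, not Muse’s actual code) of the resulting logic: a finger held down acts as a quasi-mode "shift key" that forces erasing, while the slide-out palette sets the one remaining mode, which ink you get otherwise.

```python
# Hypothetical sketch of the shipped scheme described above: a held finger
# is a quasi-mode for erasing; the slide-out palette sets the ink mode.

class StylusTool:
    PALETTE = ("black", "purple", "highlighter")  # the three inks described

    def __init__(self):
        self.fingers_down = 0
        self.ink = "black"  # mode chosen from the slide-out palette

    def finger_down(self):
        self.fingers_down += 1

    def finger_up(self):
        self.fingers_down = max(0, self.fingers_down - 1)

    def select_ink(self, ink: str):
        if ink in self.PALETTE:  # unknown inks are ignored in this sketch
            self.ink = ink

    def tool_for_stroke(self) -> str:
        # While any finger is held, stylus strokes erase (quasi-mode);
        # otherwise they draw with the currently selected ink (mode).
        return "eraser" if self.fingers_down else self.ink
```

Note how releasing the finger always returns you to the default inking state, which is what makes the erase gesture a quasi-mode rather than a mode.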

00:14:49 - Adam Wiggins: And I think this is a great example of taking the big ideas, the dreams, these principles which create constraints, and trying to make something interesting, special, unique, something that solves a problem in a way that hasn’t been solved before.

But then you need to rectify that against the real world.

And in some cases, even though we set out to make a fully modeless interface, the color of your ink, or the type of ink, is in fact a mode. But I think that one feels a little less dramatic, a little less problematic, by comparison to the much more diverse modes you have in something like Photoshop, where the difference between a selection tool and the fill tool is huge. There, in that desktop program, you’re going to click on the screen somewhere and something’s going to happen, and it could completely fill the screen with a color when you were expecting to make a selection, and that’s extremely surprising and disorienting. With ink colors and ink types, OK, getting the wrong color ink is not desirable, and you go, OK, I’m going to undo that, go back, and switch to the ink I want. But it’s all making a mark on the page. So the level of surprise and confusion the user feels if they don’t get what they’re expecting is, I think, far more minimal compared to the classic full-fledged toolbar.

00:16:08 - Mark McGranaghan: Yeah, and this actually reminds me of a subtle reason why modes are more viable on the desktop than on tablets, which is that on desktops, when you switch a mode in an app like Photoshop or Final Cut Pro, it usually changes the cursor. So if you go into a fill mode in a photo-editing program, it probably gives you a bucket with paint flowing out, or something like that. Whereas obviously you don’t have a cursor on a tablet. So that’s another reason why you have to think more carefully and more creatively about modes on tablets.

00:16:35 - Adam Wiggins: And that comes back to where your attention is, your locus of attention: you’re looking at your cursor because that’s where you’re about to do whatever you’re doing.

And so if that’s in the shape of a particular tool, obviously it’s not as nice as holding the big yellow highlighter versus holding the pair of scissors, but it achieves some of that purpose.

Now, maybe we could talk a little bit about that path from prototype to early product to, eventually, production product. Which raises some more fundamental questions: why were we trying all these weird things? Why didn’t we just go with the status quo? If we wanted to make an app that is good for collecting research, pulling together excerpts, and making a few notes, there are some very well-established human interface guidelines from Apple, and just general UI paradigms that exist both in the desktop world and, increasingly, in the touchscreen world.

And we could, I guess, like any other app maker, make an app based on those standard paradigms and just run it through the channel of what our users want to accomplish. Why weren’t we doing that here?

00:17:47 - Mark McGranaghan: Well, we have a very specific vision for how these tablet creative apps should look and feel, and we can go into what that is. As for why we haven’t just copied other tablet apps: I think there actually hasn’t been a ton of original thought on tablet interfaces. Most tablet interfaces that I see are actually transliterated from either the desktop or the phone. You especially see this with casual apps; they’re usually transliterated from the phone, by which I mean the app just kind of assumes you have a big phone and you’re still using it with, for example, one finger at a time, on one hand, which we think is totally not the right way to think about tablets. And creative apps are often transliterated from their desktop cousins, and you get things like toolbars, which don’t necessarily make the same amount of sense on a tablet.

We think that the tablet interface is unique because it feels very natural for a certain type of work: work where you’re reaching in with both of your hands, directly into the content, and manipulating it.

So certainly things like inking, but also things like arranging content very directly on an interface. And a lot of what we try to do with our interface design is make something that’s true to that ideal. One of my favorite examples here is moving something on a tablet. The standard way to do that on iOS is you press and hold and wait, and then move, and then maybe the app snaps it into some box or grid or whatever. Whereas surely the more natural thing is you just move your finger over the thing and it moves, right? But that actually requires quite a bit of technical and product work to make work correctly. So we had a similar set of requirements, if you will, with inking. It needs to be as modeless as possible, it needs to be incredibly responsive, it needs to not get in the way of your work. And in this process of going from a prototype to a production app, we basically maintained our same vision and goals; that’s been constant throughout. It’s more about understanding the limitations and the challenges we’re going to have on the platform, confronting all the realities of getting apps in the wild with users, and finding something that’s still true to our vision but that can really work in production.

00:20:09 - Adam Wiggins: Yeah, the typical paradigm for applications is, you’ve got the desktop world, which, as you’ve already mentioned, tends to be mouse cursors, keyboards, command keys.

There’s usually multi-window GUI stuff, and that is where powerful professional applications tend to be today. They’re obviously very well established, and you’ve got all your video editors and audio editors, and programming editors and word processors and architecture tools and so on.

Then you’ve got, basically, new generations growing up with touchscreens. The touchscreens are where most of the innovation is happening, but clearly a phone is not the place to edit a spreadsheet or write a long email or write a book or something like that.

And so part of what we were researching as part of this lab, which is called Ink & Switch, maybe a topic for another day, was this question of: what does computing look like in 5 to 10 years, specifically for these kinds of productive, creative apps?

And productive and creative apps have the qualities that you described: you need to move very fast, for example, but you also need a rich command vocabulary; you need to be able to do a lot of things. And so that led us down this path of, OK, we live in a world where touchscreen interfaces have become both the most dominant platform and the place where all the innovation is happening, and yet they’re very restricted for doing more serious, professional-type work. So how can touchscreens get more expressive? That leads you to tablets pretty naturally, because they’re bigger, because you can use two hands, because there’s often a stylus that goes with them, and that took us down this road.

00:21:51 - Mark McGranaghan: And the endgame that I envision here is that you actually have three devices and three environments for creative work. Your phone is used on the go: reading, quick note capture, take a picture of something, save a tweet that you saw, that sort of quick action.

Your desktop, I imagine, is still used for the most sophisticated and complex authoring environments: things like editing a big video, or writing up a big paper in LaTeX with a ton of references. Just the amount of real estate that you have, the richness of the controls with keyboard and mouse, I think that’s here for a while.

The place that I imagine for tablets is the sort of intermediate step, where you’re reading, you’re annotating, you’re brainstorming, you’re forming ideas, you’re sketching outlines, you’re rearranging concepts and materials. And that seems really well suited to the tablet form factor.

You have a moderate amount of space, you have this direct manipulation where you can move things around with your hands, you can use a stylus, which is very natural for freeform ideating and annotating. And it’s very flexible: you can take it to your couch or your chair, which is better for reading and brainstorming than sitting at your stiff desk. But if that vision is going to come to reality, we have to treat the tablet as a third and unique environment. It can’t be designed like a desktop, and it can’t be designed like a phone. It needs to be its own thing.

00:23:19 - Adam Wiggins: Do you think it’s asking too much for people to buy, maintain, and carry around three devices? Or I guess they would be carrying two, although potentially three, if you count that for a lot of people, a clamshell laptop is really their desktop computer.

00:23:33 - Mark McGranaghan: Yeah, I think that’s a fair question.

When we’ve talked to users, and we’ve done a lot of user research for Muse and previously in the lab, a lot of people had already bought an iPad of their own volition, because they had the same intuition. Even if they didn’t quite have the words for it, they were like: I feel like I should be able to use my iPad for this creative work, for reading, for note taking.

You know, it was kind of: I want to be doing that. So they were already halfway there, but they consistently found that the software wasn’t there. They had their social media apps and they had their transliterated desktop apps, but those weren’t very satisfying. So I think users actually are already well on their way to having this three-device setup. What’s missing is the really good tablet-specific software.

00:24:23 - Adam Wiggins: What do you think about other kinds of larger touchscreens, or touchscreens in different forms? So there’s the Microsoft Surface Studio, I think it is, which is kind of a drafting table, and they’ve got these additional accessories, like this little puck control dial thing. Or there’s something like the Google Jamboard, and I think Microsoft has a bigger one like that too. There are a few of these where they’re basically very large touchscreens that go on the wall, and you can interact with them the way you would interact with a whiteboard, for example.

00:24:53 - Mark McGranaghan: So I think that’s very interesting. There’s a hypothesis that you move to three or maybe four devices, but they’re all slate-style, all touchscreen-style. I suspect that’s further off, for a few reasons. One is there’s just a huge library of desktop software, and this is the most sophisticated software; this is where you have your most complex authoring and editing environments, things like Final Cut Pro. It would be hard to rewrite all of those from scratch, but, you know, perhaps we do it at some point. Another reason is that the hardware is just not there yet. If you want a sufficiently high resolution times a sufficiently large physical area, that’s a huge number of pixels. Our GPUs can’t handle it yet, and obviously we don’t have the screens for it. The touch resolution isn’t there yet. The touch latency isn’t there yet. I would say we just got there for tablets in the last few years with the iPad Pros: those have sufficiently high resolution and sufficiently quick response times that they can be used with your hands and it feels good enough, the latency is low enough and the resolution is high enough. We’re not quite there yet with these bigger surfaces, but if we get there with the hardware, which I hope we do at some point, then we could follow with the software, and you would have a more unified touch-based environment, just with different form factors.

00:26:14 - Adam Wiggins: The makers of those operating systems are actually very actively working to try to merge them together.

The Surface platform I previously mentioned runs Windows as its operating system, and it also offers a trackpad and a keyboard, so it’s a totally standard desktop operating system in addition to being a tablet.

And then, of course, Apple’s taking baby steps in this direction with, for example, mouse support on the iPad. There are rumors now that there will be a trackpad in the next folio keyboard, but whether or not that happens, you also have things like Catalyst to bring iOS apps to the Mac desktop. So you see these efforts to try to blend or bring these two platforms together.

00:26:55 - Mark McGranaghan: Yeah, for sure. But I do think there is a risk here of transliteration gone awry, either at the app level or the OS level. So for example, if you just made a really big iOS that ran on a desktop, I think that would be totally inappropriate for professional apps. You don’t have the input richness, you don’t have the arbitrary processes, you don’t have the plug-ins. So I think Apple and others need to be careful there, but there’s definitely a world where they’re able to create touch OSes across the three form factors. This does remind me, though, of one other thing I forgot about with the bigger touch form factor: text input. This is something we’ve thought a lot about in the lab, and as far as I know there’s no good answer for this on touch devices yet.

00:27:42 - Adam Wiggins: So by this you mean you want to like enter in two paragraphs of text for an email or something and you’ve got a touch screen. What do you do?

00:27:50 - Mark McGranaghan: Yeah, and actually just like typing out a two paragraph email is the relatively easier case on desktop, there’s also a lot of like uh random access editing, like where you’re editing an email or you’re editing a document or you’re writing code and jumping all over the place.

And keyboards are also used very heavily as control devices. People who are good at Photoshop or Final Cut Pro do tons of stuff on the keyboard; they have all these shortcuts, all these control keys. And that requires a very precise mechanism, where you can do it without looking at your hands, you know exactly what you’re doing, and you hear the click when you actually go to do it, things like that.

So I think we actually have quite a bit of work to do on the input front, the text input and the control input, for these devices to work. And it may be that you actually don’t want a pure flat piece of glass; you may want some physical devices, like a keyboard or something else, to allow really rich, precise input for these bigger devices.

00:28:46 - Adam Wiggins: Yeah, for me, I think of the folio keyboard and the stylus as being required accessories for my iPad. Without those, it ends up being a big phone. Which is fine, but I have a small phone that fits in my pocket.

00:29:01 - Mark McGranaghan: So, yeah, I think with the tablet it depends more on your use case. Like, I think there’s a use case where you’re reading a PDF, for example, and you’re annotating it. I think you can get away with just a stylus in that case.

00:29:12 - Adam Wiggins: So as a sort of closing topic, can we talk generally about the research mindset versus the production mindset? You mentioned here that the text entry problem, I think, is very much a research problem.

00:29:26 - Mark McGranaghan: Yeah, well, the thing about research is it’s OK not to come up with an answer, or the correct answer. So I mentioned with the ink switching problem, for our original research work, our conclusion was like, we don’t know, sorry, you can only use black ink for now, too bad. Um, that’s not an acceptable answer for people who are paying to use Muse, for example; they need to be able to select an ink. Um, so sometimes you have these problems where you just have to come up with something for the production app. Um, so by default, you would start with a non-research answer or a non-research approach.

00:30:01 - Adam Wiggins: For me, it’s really important in my work and on teams that I’ve been on to understand where something is on that spectrum.

So at Heroku, for example, we did a lot of pretty innovative things. This is a company both you and I, um, were working at some years back.

We did a lot of really innovative things, uh, in our space, but it was often important, I think, when someone was working on something that was a truly novel problem, where literally no one, no one in the history of the universe, had ever tried to solve it, uh, or had solved it successfully.

And then you’re in there trying to solve that.

It requires a longer time horizon, a much more divergent set of ideas. You need to really break out of the constraints of the box that you’re operating in day to day, and that’s totally at odds with what I would call the operational mindset, which is exactly what you said: you’ve got customers, things are on fire for them, metaphorically speaking, and you need to deliver them some kind of solution. It doesn’t do to say, let me go into my ivory tower and think deeply about this for the next 3 years and eventually publish a paper that says this is a problem that can’t be solved right now. That doesn’t work. The operational mindset naturally keeps you on shorter time horizons. It keeps you sticking to things that are more known quantities as much as possible. You want to look at what other people are doing, what other similar, uh, applications or software packages or companies do to solve similar problems, and borrow from that as much as you can, because those are known paths. Whereas research is all about this total unknown discovery thing, and that can be very rewarding in the sense of stumbling across novel inventions, but, uh, it’s not super practical for production.

00:31:51 - Mark McGranaghan: Yeah, exactly. And because these domains are so different, the constraints, the requirements, even the people who tend to like working on them, um, it’s often best for them to be in quite different organizational setups, and that’s one of the reasons why I think the Ink & Switch lab plus Muse is so interesting. Muse is inherently more industrial, commercial focused. Uh, the lab is inherently more research and exploratory focused.

00:32:18 - Adam Wiggins: Yeah, the typical setup there, and actually some of our inspiration for how we set up Ink & Switch, was the corporate R&D lab.

So this is something, uh, Xerox PARC is probably one of the most famous ones in the computing industry. There you had Xerox, which is this big company that makes copiers and has money to spend and wants to think about, uh, what their future-facing products are going to be, and PARC being the small band of misfits that are working on basically inventing what came to become the desktop computer. Uh, but there are other examples of this. Bell Labs is another very venerable, famous, successful, uh, lab that works this way.

And the idea is that you actually need and want to, if not, uh, isolate, then at least partition the people who are doing research, the kind of wild mad scientists thinking way outside the box, from the people who are responsible for the product that you’re selling today. And hopefully people can move back and forth between them, and hopefully there’s mutual respect, but they just require completely different modes of operation.

So going forward from here, there are more tool switching problems to solve. For example, some kind of selection lasso thing is probably something that’s needed. Uh, do you have an inkling of how we’ll go about solving that problem in a way that stays consistent with our values, but also knowing how much we’ve grappled with how hard that problem is?

00:33:43 - Mark McGranaghan: Yeah, so I have 3 ideas here. One is, I suspect we’ll move from ink selection to instrument selection. So again, if you go back to the physical world, you think about how you use your hands. You don’t only use them for inking; you use them for erasing, you might pick up an X-Acto knife, you might pick up a brush, uh, you might pick up a ruler. Um, and I think that’s a powerful metaphor. So I can imagine, for example, if you have a lasso, that becomes a sort of sibling to the inks that you can pick in the same way from the same sort of palette.

00:34:15 - Adam Wiggins: Now does that bring us back to, you know, where we started, which is basically the on-screen toolbar that has all your tools, the Photoshop, the Procreate, and that sort of thing? Have we essentially worked our way around and backed ourselves into what is, for good reason, a standard pattern?

00:34:31 - Mark McGranaghan: Well, I think that could happen. Um, but, you know, for one, we have this thing where you drag it out from the edge so you can hide it if you want. Um, but the, but the other idea I have here is going to a model where you have a small number of instruments that you’re actively working with.

So again, to go back to the physical metaphor, if you’re working on some project at your desk, you don’t have like 100 pens, you know, strewn all over your desk, which is what happens when you have a toolbar on a desktop app that has a hundred buttons, right? You are working on something, and you know that for this project, I need like a black pen, an X-Acto knife, and an eraser. So you go to your shelf, you bring those three things to your desk, and then it’s very easy to switch among those for this current project, and then when you want to, you know, change your project, you go and you get different instruments from your shelf.

Uh, so I imagine something like that for Muse, where you can pick, you know, your favorite black ink, your favorite highlighter, your favorite accent pen, and put them in your little toolbox, and you have this small, very curated palette that you can swipe in and out when you’re actively working on a document, and you’re not confronted with, like, the Photoshop-style, 200 buttons, most of which you don’t know what they do type of experience.

00:35:43 - Adam Wiggins: Although Muse probably also has the benefit that we’re not a drawing tool. So you look at something like Concepts, for example, a really great iPad app with really sophisticated tool selection, and that’s appropriate there, because it is supposed to be a drawing app, a technical drawing app, where you want a lot of options in terms of things like pen thickness. Muse is a thinking, scribbling, sketching app, and just as it would be inappropriate to have 50 different thicknesses of markers in front of your whiteboard, uh, it would also be inappropriate to have a huge amount of choice, I think, for the Muse use case.

00:36:16 - Mark McGranaghan: Yeah, and I think that’s true both for kind of in the moment, you know, so we have this small active palette that you’re choosing from, but also when you go to, uh, load out your palette. Um, I think we’re going to be quite deliberate about how we present those choices.

So sometimes you see these interfaces where you can, you know, basically put in a float for how thick you want your pen to be. I think that’s basically not coherent, because the difference between a, you know, 1.71 pen and a 1.72 pen doesn’t really make any sense. It’s not useful. Uh, and indeed, if you go to a high-end pen store and you look at the technical pens, there’s a very specific way that they’re sized. They’re basically sized in increments such that a much smaller increment wouldn’t make a lot of sense; it would be too small to be really noticeable or obviously differentiable. So there’s a set of sizes such that you can cover the full spectrum, but they’re not, uh, too finely graduated, right? Uh, so I can imagine for choosing sizes, for choosing colors, you have a carefully thought out, um, set of options such that you have choice, uh, but you’re not confronted with more choice than makes sense. Well, I think those were the main three things. So the curated loadout, uh, the swipe from the side, and what I call the high-end pen store, where you’re given a set of options that kind of makes sense versus putting in floats.

00:37:40 - Adam Wiggins: And do you imagine then, um, having grappled with all of this, and, you know, azimuth, or, uh, altitude rather, of the stylus is probably out for a while, and quasi modes don’t have enough, um, dimensionality, uh, and there’s probably not going to be some kind of extra hardware button or controller or something we can make use of, that the hidden-by-default, small tool palette, uh, is basically the solution we’ve landed on for, let’s say, the medium term?

00:38:08 - Mark McGranaghan: Yeah, I think probably for the medium term. I do think quasi modes are actually very good.

Uh, so I definitely think we’ll continue to do the press and hold. Uh, I could imagine extending that slightly. So for example, maybe you press two fingers and you get a secondary option.

Um, I can imagine that being configurable. This is a pretty common pattern in professional tools: you can choose what the shift key does, you can choose what the command key does, and there’s an obvious default, um, but if you want to set that up, you can do it. And lastly, I could imagine, as a sort of optional setup for people with a physical keyboard, you know, holding down the 1, 2, or 3 quasi mode engages your ink or your instrument 1, 2, and 3, and so on.

00:38:50 - Adam Wiggins: Yeah, or having some kind of optional accessory. I think I saw this with, uh, Loom. It’s a cool little, um, iPad animation app that came out pretty recently, and they have the optional ability to use the teenage engineering MIDI controller, which is a little dial thing. You wouldn’t want to require that, obviously, but, uh, maybe that is something that enhances the power of the tactility of the app.

00:39:12 - Mark McGranaghan: Yeah, exactly, and, uh, now this gets a little bit beyond the medium term, but, uh, one idea that I’m excited about is using the phone as a sort of sidecar control panel. So everyone has a phone; they always have it on them. What if you could just put it on your desk and, like, you know, link your tablet and your phone, and then your phone becomes your palette? So you could have 4 or 5 buttons there, you could have finger chording there, you could have a little slider there, um, and that would give you a lot more degrees of freedom on, you know, quasi modes without requiring secondary dedicated hardware.

00:39:45 - Adam Wiggins: And is the benefit there, you know, that in that case it’s not a tactile thing like an eraser you flip over or a dial you turn; it’s another touch screen. What’s the, uh, benefit, other than, I suppose, just more screen real estate, of having it on a separate touch screen?

00:39:59 - Mark McGranaghan: Yeah, well, it’s, uh, more screen real estate, it’s separately programmable, and you can have it in a physically different place. So if you think about how you use a keyboard and your mouse: if you’re right-handed and you have your mouse in your right hand and you have your control keys on the left side of your keyboard, there’s 12 or 18 inches between them, because that’s kind of the correct and natural spread of your hands, um, if you’re in a very neutral position, whereas if you have your hands right next to each other, it’s a little bit artificial. Um, so it’s an exploratory idea, we’ll have to see, but I think there’s some promise there.

00:40:30 - Adam Wiggins: And certainly there’s the idea of having your off hand do something. You see this with, um, Wacom tablets, often with professional graphic designer, artist types, uh, or you see it even with, um, people who play competitive video games, like, um, these first-person shooters where you need to be very fast and responsive, and you tend to use the mouse in one hand, which is kind of your move, shoot, aim thing.

But then you also have the keyboard, where you end up kind of putting your, uh, fingers on certain keys that activate, I don’t know, switching weapons or something like that. And the important thing is you don’t need to look at that hand, because your fingers are in a particular position and they stay there.

So I could picture that for the phone, which is you kind of have your off hand, the left hand if you’re right-handed, positioned over the phone in a way where you don’t really need to look at it. You can press to activate different things, uh, and just go completely by feel. Even though the touch screen, of course, doesn’t feel like tactile buttons, the shape of the phone and the position of the phone is something that you sense or feel even without looking at it.

Yeah, exactly. Very nice.

Well, anything else we should talk about on the topic of tool switching? I don’t think so. And if any of our listeners out there have feedback, feel free to reach out to us at @museapphq on Twitter or hello@museapp.com via email. I’d love to hear your comments or ideas for future episodes. All right, it was a pleasure chatting with you. Likewise, Adam.

Metamuse is a podcast about tools for thought, product design & how to have good ideas.

Hosted by Mark McGranaghan and Adam Wiggins
https://museapp.com/podcast.rss