← All episodes
Metamuse Episode 31 — May 27, 2021

Social media with Tobias Rose-Stockwell

Twitter, Facebook, and Instagram have transformed how we come to a shared understanding about our world. Tobias has been writing about social media for half a decade. He joins Mark and Adam to discuss velocity and virality in information dissemination; how to train your YouTube algorithm; rage tweeting; and how to improve the internet we all inhabit.

Episode notes

Transcript

00:00:00 - Speaker 1: I do think there’s a distinction between velocity and virality that’s important to make, right? Like a good book can go viral, a podcast can go viral, it just will go viral slowly, be a slow spread, and I think that’s actually kind of the goal: to have potentially, like, low velocity, high virality.

00:00:21 - Speaker 2: Hello and welcome to Metamuse. Muse is a tool for thought on iPad.

This podcast isn’t about Muse the product, it’s about Muse, the company, and the small team behind it. I’m Adam Wiggins here with my colleague Mark McGranaghan. Hey, Adam. And our guest, Tobias Rose-Stockwell. Hey there.

And Tobias, Mark and I just recently did an episode on video games and Mark’s thesis that video games are where technologies kind of emerge first and later make their way to productivity and enterprise software and that sort of thing, and I feel like our meeting, which was in an online game, a text-based MUD as they called them back then in the 90s, was a good example of this.

We knew each other virtually before we ever met in person for, I don’t know, a year or more.

And nowadays we take for granted that you meet people and even have great friendships, I think in, you know, your Slack channels or online conferences or colleagues you’ve only ever met through video calls, but I feel like that was quite unique for the time.

00:01:22 - Speaker 1: Truly, truly, I remember your characters on the MUD. You had an amazing automation system in place for your characters, you just, yeah, you crushed that game.

00:01:32 - Speaker 2: That’s right. I totally forgot the MUD world, because it was all text-based, almost kind of had a Unix style in that sense. You would type commands and you would see these descriptions of what was happening, and the action was very scriptable because you could make what were called triggers, where you would essentially say, OK, when you see this word, when you see someone’s name, react in this way; when you see this happening, you could cause it to trigger another command.

People would do in some cases very sophisticated scripting.

I used a thing called TinTin. I think I was pretty simple with it, so I’m glad to hear it seemed so impressive, but I think that did probably influence a lot of my thinking on kind of end user programming, the personal scripting world of things.
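The trigger mechanic Adam describes can be sketched in a few lines. This is an illustrative Python toy, not actual TinTin syntax (real MUD clients like TinTin++ define triggers with their own `#action {pattern} {command}` commands); the class, patterns, and commands here are invented for the example.

```python
# Minimal sketch of a MUD-style trigger system: each trigger pairs a
# regex pattern with a command to send back when a matching line of
# server output arrives. Capture groups can be reused in the command.
import re

class TriggerClient:
    def __init__(self):
        # list of (compiled_pattern, command_template) pairs
        self.triggers = []

    def add_trigger(self, pattern, command):
        """Register: when a server line matches `pattern`, fire `command`.
        Backreferences like \\1 in the command are filled from the match."""
        self.triggers.append((re.compile(pattern), command))

    def handle_line(self, line):
        """Return the commands fired by one line of server output."""
        commands = []
        for pattern, command in self.triggers:
            match = pattern.search(line)
            if match:
                commands.append(match.expand(command))
        return commands

client = TriggerClient()
client.add_trigger(r"(\w+) attacks you", r"attack \1")
client.add_trigger(r"You are hungry", "eat bread")

print(client.handle_line("A goblin attacks you!"))  # ['attack goblin']
print(client.handle_line("You are hungry."))        # ['eat bread']
```

The scripting Adam mentions was essentially layers of rules like these reacting to the game’s text stream automatically.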

00:02:12 - Speaker 1: Yeah, it was super impressive. Like this guy knows what he’s doing on here.

00:02:15 - Speaker 2: And Tobias, maybe you can tell us a bit about your background, what you’ve been doing kind of in the post mud time and then leading you up to what you’re working on now, which connects to our topic today.

00:02:25 - Speaker 1: Absolutely. Well, thanks for having me, guys. I really love what it is you’re working on and I always appreciate your analytical and pragmatic perspectives on the world, understanding things in a more precise way. You know, we’ve known each other for quite a long time. I feel kind of honored that you have known me through several phases of my life, different chapters of my professional endeavors which have brought me here.

But when I was really quite young, I went and lived and worked in Cambodia, from the age of about 23 onwards, for about 6.5 years. I lived and worked on the ground in Cambodia. Crazy wild story for how I got there, but essentially I met a monk when I was traveling through Asia who was looking for help in rebuilding this irrigation system. It had been destroyed during the civil war there. I got sucked into this project that I thought would take maybe a year; I ended up staying there for 6+ years, rebuilding this big reservoir that affected farmers and helped them rebuild after this very, very problematic time.

And I was just very, very interested in what I could do to best help people figure out how to improve their own lives, and ended up rebuilding this irrigation system, getting interested in scale, interested in the motivations behind helping people help others more effectively. Once I finished the project in Cambodia, I ended up coming back to Silicon Valley, where I grew up, and started working on various projects to help people connect more effectively to humanitarian causes. This was between 2009 and 2012, and it brought me to the world of online advocacy, and really this is the earliest days of social media at scale. I was part of this cohort of designers and technologists and developers and documentarians that were really doing their best to try to motivate people, to capture altruistic action from the largest possible audience, right? There was this promise around these tools that was very kind of intoxicating at the time, this kind of inherent goodness that could come from connecting humanity. And there was this broader thesis, I think, that was implicit at that moment, which is, you know, if you can connect the world, we can solve the problems of the world, right? If you can just make people feel, feel for the poor people in India, the people that are struggling in Southeast Asia, if you can really just connect people to the feeling effectively, then you’re going to come much closer to solving those problems, and that optimism was very real at that time. And it bled not just into the advocacy world; I think it was very much a part of Silicon Valley at that time too. So I worked on a bunch of campaigns that were really trying to capture virality and maximize human attention and get people involved with causes, get people to feel, and that was its own very special era.
I worked on a few campaigns that reached millions of people. It was a very exciting time, and the tools enabled that in this very special way, and I was part of this cohort, you know, I’ve known, similar to you, many of the same people that were early at these companies, that believed that these tools were fundamentally good for the world. And, you know, in many ways they are, but I think that, as we’re saying, there are questions about some of their byproducts. So fast forward a few years, I was working in New York doing design and management consulting, basically helping the executive team of a very large media organization, a kind of storied, traditional journalistic institution that you would know if I named it. I was with their executive team as the bottom was falling out of their business and they were trying to figure out how to make money in this new media environment, and I was watching them as an institution begin to make decisions that reminded me fundamentally of the types of attention capture tools that we had used in the years previously. They were making decisions that were very much based on trying to utilize this new media environment that many of my friends had built, in a way that I think we would recognize today as problematic, but at that moment in time we didn’t. They were really starting to change the editorial tenor of the stories that they were making and the tools they were using to capture attention, and it was changing the editorial bent of stories, of content, pushing it towards the extreme. And this was in 2014, 2015, you know, just the years leading up to, I think, our great awakening to some of the problems associated with this stuff that came in 2016. They were really fundamentally changing the tenor and the content of stories to capture more attention using these tools and these strategies.

00:07:16 - Speaker 2: This is what maybe nowadays we’d talk about as the classic clickbait titles and, yeah, emotional activation that’s designed to, in a very short time, just get you riled up or activate some more primal part of your brain, and maybe that ties to your nonprofit work as well, which is also about emotional activation. But here you have this media environment where they have a very brief time to capture your attention and they’re just basically motivated to optimize for these headlines that push these buttons and activate you emotionally, even if that’s not sort of good journalism or really a healthy information ecosystem.

00:07:50 - Speaker 1: Definitely, yeah, and it’s not just headlines. The headlines are the most visible things.

The stuff that tends to be a little bit more pernicious is the editorial decisions that are made around which stories to cover, right? Journalism is this kind of important function. I see it as having three different fundamental pieces to it.

One is the basic verification of facts, right? It’s like, did something happen? Did the event happen, did it not happen, right? The next layer up is selective facts: which facts are actually important for us to pay attention to in the world, right? And the top one is really, why does this matter? It’s editorialization, like why is this important for us to pay attention to? And what I felt like I was watching in real time was that the selective interpretation of facts, the sourcing pool that editors and journalists were using to try to find nuggets of stories, started to trend towards the outrageous. They were finding the stuff that would make people the most mad, right? You might see this in a headline and a story in which the headline will be, people are angry about X, right? You think, well, people are angry about X, wow, this is important, I should read why they’re angry about X. And then you look at what they’re actually sourcing for the quote-unquote people that are angry about X, and it will be a Twitter user, maybe two, with, you know, some 20 followers, that a journalist was able to kind of go in and find. They kind of spun a story out of almost nothing online and wrote a whole article about it, which is terrible if you’re trying to get a proportional understanding of what people are actually angry about out there. And that’s just one strategy of many that are now available to every journalistic institution, or traditionally journalistic institution.

00:09:36 - Speaker 2: For me, the first article of yours that I read that I think got a good bit of traction was titled This Is How Your Fear and Outrage Are Being Sold for Profit, where you kind of broke a lot of this down. I think nowadays this is part of the mainstream discussion, especially with something like, for example, the Netflix documentary The Social Dilemma. Yeah, I think we have more cultural awareness of this now, but at the time, for me it was a real eye-opener, even being someone that was in technology, that you were kind of breaking down the mechanisms, just sort of shining a light on exactly how you’re being emotionally activated or even emotionally manipulated and why that’s good for these media companies in this new era. And then you went from there on to a series of other articles, including The Dark Psychology of Social Networks, which I think was a cover story in The Atlantic with Jonathan Haidt, is that right? Getting to write with a high-profile author, which is a whole other probably interesting topic to discuss.

Your latest article, which is How to Stop Misinformation Before It Gets Shared, a collaboration with Renée DiResta in Wired, talks a bit about this kind of friction. And so to me these paint, and you’ve written other stuff, but if you read the three of these, and of course we’ll link them in the show notes, they show a building up of what feels to me like a thesis, or a sense of trying to understand or grapple with the societal effects of this new information technology that defines our world now.

00:10:58 - Speaker 1: Yeah, I feel like that’s right.

So the second piece you mentioned, the piece with Jonathan Haidt at NYU, he’s a professor of moral psychology, and we did an almost forensic unpacking of what happened between the years of 2009 and 2012 in terms of the feature sets that were implemented at these various companies, invented, then copied, then propagated across our traditional social media kind of ecosystem.

And what that did, what these specific features did, and it’s a few features that we very much understand today as being kind of core to our information exchange: having a simple one-click share to send a piece of information out to your entire network; likes, right, fundamental likes and visible metrics associated with that content; and then the ability to algorithmically sort a feed. These three pieces of the puzzle have dramatically changed the types of information that we are now seeing on a regular basis. And each one of these features in itself is great. I appreciate the ability to reach a massive audience. I appreciate the signal that comes from knowing whether or not people are liking a thing. And I appreciate the ability to curate the crazy massive stream of information that’s now coming my way. But each one of them has kind of conditional failure modes that I think we need to understand and reckon with, because we have access to what’s essentially kind of the brain stem of humanity now, right? If anyone can put something out there and anyone can make something go viral, there are tendencies within the system now towards what Daniel Kahneman would call System 1, right? System 1 thinking being this emotional, reactive, impulsive, kind of instantaneous fast thinking, which is one of these partitions in our brains, the other being System 2, which is this more reflective, deliberative, slower processing and thinking. The entirety of the architecture of the internet in its current form is built for maximum speed and virality; it’s orienting towards System 1. And that I think can be seen in so many institutions and so many changes, and so much of the zeitgeist of our exchange now is in this kind of emotional space.

00:13:27 - Speaker 2: Yeah, and I think that’s a great tee up not only for our topic today, which is the promise and peril of social media, but also what the muse tie-in is because a lot of what we talk about on this podcast is product design, but it tends to be more tools for thought, productivity, kind of more private things versus these public sphere, you know, political and social discussions.

But I think there is, when you talk about that System 1, System 2 brain, a big part of what we want to see Muse do, or part of what our mission with our product and the company is, is to help us all activate our System 2 brains more reliably.

So that when we realize there’s something important we need to think about, whether it’s in life, work, social issues, what have you, there’s a way to kind of remove yourself from that energy and those outrage circles, or even just the heightened emotional, more primal state, and go to more thoughtful, reflective, slower thinking, because we believe, at least, as complex as the world is today, you sort of need that in order to really make good decisions.

00:14:30 - Speaker 3: Yeah, in addition to the System 1 versus System 2 axis, I think there’s an axis of de novo ideas versus remixed ideas, and I think basically all good ideas are social; they’re remixed, they’re transmitted from other people and developed that way. And obviously, social media is an incredibly powerful technology, potentially, for facilitating that. And so Muse, the product, right now is quite single player and it’s focused on developing content. You bring in content and you build from there, but I think if you look at the broader process of thinking and developing good ideas and coming to useful conclusions, the social ecosystem is so important.

00:15:09 - Speaker 2: Yeah, and I have my own, as I think a lot of creative people do.

Twitter is certainly my social media of choice.

I’ve heard it described as sort of the social network for ideas, and if you’re a person that’s, you know, looking to seed ideas in your work and your life, I think Twitter is the right place for that. But it has these two sides, which is it can be a source of incredible ideas, inspiration, connections with new people. Certainly that’s where my professional network is. I’ve met many amazing people, including a lot of the guests for this podcast, and have just had the seeds of good ideas so often. I mean, there’s a reason why we want to add a lot of media cards to Muse, but one of the first ones we did was a Twitter card, precisely because bringing in a thought-provoking tweet as a foundation for some deeper thinking is a very natural thing. But the flip side of this is what you’re talking about: these loops for journalistic outlets, and individuals as well, seeking that sweet, sweet dopamine hit from those likes, right? And you discover that those controversial or outrage-generating things, or things that just do those emotional System 1 activations, get you more of that positive feedback, and so then you’re sort of inclined to do that, and it just creates a setting where you have these kinds of information pathologies and negative loops. And yeah, it sort of is very counter to the thoughtful, having good ideas, focusing on your work. So there’s this duality, there’s these two sides of it that I feel are equally strong.

00:16:39 - Speaker 1: Yeah, that’s really important. As you spoke about, the general zeitgeist has kind of absorbed some of these memes about the internet being a place that prioritizes outrage, and people tend to kind of understand the problems associated with this now. There’s, you know, a lot of great content out there and a lot of talking heads that speak about the ills of technology.

I do think that the hyperbole around it is actually kind of a big problem. I think that if you’re too hand-wavy about what the problems are, then it actually doesn’t help us solve them. It doesn’t help us build better tools, doesn’t help us fix the tools we have. I think it actually is just pretty detrimental to the conversation as a whole.

00:17:21 - Speaker 2: Right, so there’s this progression where, I don’t know, in the mid-to-late 2000s, there was the sense that this up-and-coming new information world was an unmitigated good, kind of a sort of exuberance that in hindsight seems naive.

Then sometime around the time you’re writing these first articles, 2016, 2017, 2018, there’s a few folks like yourself that are kind of raising flags, and society is struggling to figure out, wait, what happened here? Things are changing, maybe in a way we don’t like; it seems related to this new technology, you know, what’s going on?

And then you come to today, and again I do feel there is a more mainstream sense that OK, social media is this powerful technology, certainly these internet giants in general wield a lot of power.

It does seem sometimes boiled down to, uh, Mark Zuckerberg is the devil; if we can get him in front of Congress and slap enough regulations on him, then everything will be OK.

That feels like a vast oversimplification. It feels like this is, to borrow one of Mark’s ways of talking about things, a change that society needs to metabolize, and that’s going to take some time, and we need to thrash around and try some weird stuff. Maybe some of the solutions are governmental, some of them are technology product solutions, like some of these ones you’ve proposed in your various articles here, and some of them are personal, right, how we choose to cultivate our information diets and make good choices that will allow us to get the most out of this brave new world of information without being maybe sucked down its worst rabbit holes.

00:18:48 - Speaker 1: Yeah, I don’t know if either of you had a chance to catch the tech hearings that happened a couple of weeks ago.

00:18:54 - Speaker 3: I feel like there’s one of these every month, so I feel like, yeah, totally.

00:18:59 - Speaker 2: I’ve seen some of them. At least for understanding what happened there, I rely on people making dunk tweet jokes.

00:19:06 - Speaker 2: You’re part of the problem, and then I extract what happened based on that. Yes, yes I am.

00:19:09 - Speaker 1: What was very clear to me, and I took notes on how many different members of the House had their own personal grievance about technology.

I’d made a distinct list of 18 different areas that were very much not related, other than the fact that they involved Facebook, just this whole kind of panoply of different grievances that they had. And what came to mind for me is this: this is not just technology’s fault, right? Like, we’re just inhabiting technology more, and so we’re bringing all of our problems with us, right? We’re now just living in these digital spaces more and more and more, so we’re bringing a huge portion of these problems with us.

Now that doesn’t reduce the importance of focusing on the tools and the specifics of the tools, but I think it is important to remember that humans are complex and our problems are complex naturally. If we add a whole new layer of kind of virtual existence onto that, we’re going to end up with a bunch of new problems and also all of the old ones manifesting in a different way. So I think it’s just important to recognize that a lot of these issues are things that we’re bringing into the fore with us.

00:20:17 - Speaker 2: Yeah, part of the concern would be that technology is an amplifier for some of our natural sort of bad traits. We’ve made the comparison before to food, for example, where we came to this realization that a lot of fast food is actually quite unhealthy and feeding into a lot of health problems in the modern world. But that’s because it was sort of optimized to push our sweet and savory and salty buttons, because these are tendencies, we crave these things in our natural environment, but we’ve found a way to kind of supercharge what you get in a way that has these negative effects.

And I feel social media has a lot of that. For example, follower counts, I think, is what I hear folks talk about, which is like we have this natural status-seeking behavior, and getting a certain number of followers shows that you’re, I don’t know, important, you have prestige, people care about you or something like that. And so then that turns into weird status games, maybe of trying to game the system or just treating people differently based on their follower counts or whatever, and you know, one question there is, should we hide that, because it basically just brings out some bad qualities in us.

On the other hand, will people find other ways to seek that same status? Because again, it’s not the technology; it’s something inside us as humans to kind of be status-seeking.

00:21:31 - Speaker 3: And just to elaborate here and to further motivate, I think there are a couple other dynamics that are making it even more important.

So there’s this baseline dynamic of: there’s like a very high-powered social technology, and people have always talked to each other and written stuff down, but now there’s just like a lot more of that, with much sharper edges. OK, that’s kind of what we’ve talked about so far.

There’s an additional dynamic that I would call the revolt of the public. This is the, I think his name is Martin Gurri, thesis; he wrote an amazing book about how basically people being able to talk to each other outside the confines of traditional hierarchical structures is highly threatening to those structures, and this would be like traditional journalism, bureaucracies, higher education. And those institutions correctly perceive this as a huge existential risk, so there’s some kind of fighting for their own lives happening there.

And then another angle is a lot of the most contentious stuff has to do with politics, and the reality is that politics, especially federal politics, plays a much bigger role in people’s lives now than it has in the past. You know, government spending is, what, 40% of the economy, plus all the indirect impacts; people are correctly interested in what’s happening there. So these three things are stacking on top of each other and creating huge stakes. So it’s not surprising to my mind that people have strong feelings about this stuff.

00:22:42 - Speaker 1: There’s a great article, a Paul Graham article from, it was 2007, 2008, the golden age of Paul Graham essays,

00:22:50 - Speaker 1: so to speak, yes, about how to have better arguments online. One of the points that he makes in there is that people are now just accessing more information.

There are more opportunities to collide online. There are more opinions that we’re being exposed to, and because of that, people are going to be arguing more, right? There’s this kind of natural trend towards increased opportunities to disagree in a public forum. And there are certain things that happen in a public forum when we’re disagreeing and the kind of ergonomics of that space or the design of that space online will push people towards a particular kind of disagreement or a particular kind of agreement potentially depending on how that space is designed.

One of the things that we speak to in our Atlantic piece is this kind of idea of what’s called moral grandstanding.

If you imagine us having this conversation right now, this is a great example. We’re having this conversation over Zoom. I can actually see your faces, we’re having interaction, right? There’s not an abstraction layer; you’re responding to me, I’m responding to you, I can see your eyes. If I say something mean to you, if I disagree with you, there’s a desire that I have to reduce any kind of empathetic stress or sadness I might cause you. I’m not going to call you a name, because I’m actually getting some kind of empathetic response from you, this physical stimulus that’s coming back to me, this visual stimulus coming back to me. In the digital world that’s hidden from view, right? So you’re abstracted out into this profile pic, if you even have a profile pic; you’re just a kind of abstract creature out there. Maybe you’re not even a creature in my mind, right? You might just be a thought in my mind that I’m angry about. But not only that, there is this additional layer in most of our social spaces online, in which, I mean, you can imagine us having this conversation live with an audience of people around us, right? An audience of people around us, live, that were rating us, and there was a number attached to our faces that was going up or down depending on the quality of our arguments and what was actually happening in real time.

00:24:56 - Speaker 2: I already feel anxious with that description.

00:24:59 - Speaker 1: It would fundamentally change the content of our conversation in a drastic way, and it would not push us towards conclusions between us or attempting to find truth. It would actually push us towards the approval of the people watching us, and that is what a large portion of our social media platforms are designed around right now.

00:25:19 - Speaker 3: Yeah, I mean, this kind of gets into personal information hygiene and personal information gardening.

00:25:26 - Speaker 2: Yes, so Tobias, in your articles you often are speaking essentially to the platform creators, you’re saying, OK, Instagram or Facebook or Twitter, here are features you could build or maybe users could demand those features, but essentially change the tool in order to have better social dynamics, and absolutely I think the tool evolution, we’re seeing that happening already, you describe a lot of that in this kind of friction concept in your most recent Wired article, and I’m sure that’s going to be ongoing for a while.

But for me there’s also this question of to what degree, I’m not working on a social media tool, so therefore, what actions can I take in my own life? And I’m curious how you see the balance between, what we need to do here is kind of demand changes from these tool creators, versus we can make choices in our own lives. For me, part of the value in your articles has been, I have a more critical eye, or more self-awareness, about how I engage with social media and how those things trigger my primal emotions, and with that awareness, I can then make maybe better choices about making sure I get what I want out of these tools and avoid the negative spirals.

00:26:36 - Speaker 1: Yeah, absolutely. There’s a really important balance, I think, to be struck between, you know, the designers of the space and the things that we can do as individuals, and I do think there’s a lot of things that we can do as individuals, fortunately. It depends on the particular type of problem that you’re facing online, but in general, if we’re looking at it in the context of the System 1, System 2 dynamics: usually, if I am triggered by something I see online, a good kind of mental model for me is to try to catch myself in those moments where I feel the desire to basically, like, rage tweet back at someone or share a thing that I am incensed about, right? And that’s one of the big ones, just kind of propagating highly emotional content. And you know, not all emotional content is bad, right? Emotions have a purpose in our lives, right? I think it’s important to recognize that they spur us to action and bring us passion.

00:27:22 - Speaker 2: I’m doing what I’m doing in my life because I’m passionate and have strong emotions about those things.

00:27:30 - Speaker 1: Exactly, exactly. So emotions have reasons and they’re kind of, you know, internal heuristics for us to determine what is important on a day to day basis.

It’s like they’re helpful directors for what we should focus on, and you know, biologically and socially they have kind of foundational adaptive purposes in our lives.

The problem is that in these digital spaces, we’re actually taking some of these emotions, which are kind of meant to pass through our bodies and just, like, be noted and then direct us towards a specific action, and we’re encapsulating them in little time capsules, right, we’re textualizing them. And then we’re sending them out on their way into a network to go on and kind of trigger other people and have a life of their own as they’re ping-ponging around our networks and causing other people to be emotionally activated. And they kind of live in a semi-permanent state, even though initially an emotion is just kind of meant to pass through our system.

00:28:20 - Speaker 2: I hadn’t even thought about the sort of responsibility to others by passing on emotional content. I tend to think in terms of again how can tools be improved, but then how can I make changes to my own information tooling and sort of managing those processes in my life.

Mark and I talked about this quite a bit in our previous podcast episode on what I call the information age. There, I think we were focused more on just the raw quantity of information that we’re attuned to.

Through most of human history, we lived in a time where information was scarce, and so we seek to get as much news, like even the word news implies, yeah, what’s the latest? I want to be in the know, I want to be connected, it’s important to know what’s going on. And now we live in a time of such incredible information abundance that actually the problem is curating and culling all of that. But Mark, I’m curious, you know, expanding on what we talked about there, I feel like you have a pretty good set of approaches for cultivating your own information ecosystem. Putting it together with some of what we’re talking about here, what are techniques that work for you or that you recommend for others?

00:29:25 - Speaker 3: Yeah, there’s a lot here. One quick thing on the emotional front, with weird things coming at you on Twitter and stuff: I adopted a simple practice of whenever something gives me bad vibes on Twitter, I just block, or turn off retweets, or mute, or whatever is appropriate to the situation. I found, you know, life’s too short for bad vibes, especially ones coming through of your own choosing, right? You have total control of what you see on Twitter, so just choose to close that stuff off.

00:29:49 - Speaker 2: I feel like the immediate devil’s advocate on that is there are bad things in the world and it’s not helpful to not be exposed to problems.

00:29:57 - Speaker 3: Yeah, it’s not things that you disagree with, or bad things that are happening, it’s things like bad faith and people being obnoxious and stuff like that. The default is that you continue to see people that you’ve previously followed, and for a long time I just left it that way, for whatever reason I wasn’t inclined to change it. I would just kind of accrete followees over time. But then I took more responsibility for my timeline, and things are much better now.

And that in general is a big theme for me with personal information management.

Like no one’s gonna save you. In particular, the algorithms are definitely not going to save you.

If you just go to facebook.com and click on whatever’s at the top, you’re gonna have a bad time. Likewise, if you just go to washingtonpost.com and read everything there, you’re not gonna be very well informed. So you need to take a lot of responsibility for your own information ecosystem, and that’s why I like things like Twitter, podcasts, email newsletters, niche Discords and subreddits. These are places where you opt into individual small creators, curators, and communities who you believe have good insights and relevance to you. By the way, I just realized, Tobias, when you were talking about the three pathological features of social media, that podcasts are kind of uniquely resistant to them: podcasts can’t be retweeted, and they haven’t really been subjected to algorithmic feeds.

00:31:12 - Speaker 2: Spotify is working on it, I think, but yeah, so far.

00:31:15 - Speaker 3: Yeah, but mostly they’ve been quite resistant. I think that’s an important dynamic.

00:31:19 - Speaker 2: We see that on the side of being podcast creators, which is that it’s actually hard to market or spread a podcast. And happily, quick aside, I’m very thankful for everyone who’s tweeted out links or referred us to their friends. But yeah, a podcast can’t go viral, that’s all there is to it. It can grow pretty slowly and steadily and organically over time. That’s a downside in some ways, but actually really a benefit in a sense, and probably why I’m drawn to podcasts as a place to get more so-called System 2 inputs into my life.

00:31:52 - Speaker 1: Definitely. I do think there’s a distinction between velocity and virality that’s important to make, right? A good book can go viral, a podcast can go viral, it just will go viral slowly, be a slow spread. And I think that’s actually kind of a goal: to have potentially a low velocity, high virality thing in which people are like, oh, you should listen to this. A word of mouth recommendation goes a long way, especially for something like podcasts. Mark, what you’re talking about, your personal curation systems, I find that to be so critically important these days. I like to think about an algorithm as kind of like a dog. It’s an intelligence that has a very simple understanding of what it’s trying to do for you, right? And it’s using limited inputs to determine what it’s trying to serve you on a regular basis. So the same way you might train a dog, I aggressively train my YouTube algorithm, for instance. It’s constantly serving me up things, and probably once a week I have to go through and say, I don’t like this, I don’t like this, I don’t like this. And I use this kind of personal heuristic, which is: how do I feel after watching this video? How do I feel after going through my feed? Checking in with yourself, taking a moment to recognize that. There’s the regret test, a very basic test: if you regret your experience on this tool, then you need to change the tool or get off the tool, right? Fortunately, that is something we can do. So I have aggressively culled and trained my YouTube feed, and now most of the time, with this kind of weekly training regimen, it’s a beautiful source of inspirational, educational content that I get a tremendous amount of value from.
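The velocity-versus-virality distinction can be made concrete with a toy branching model. This is an illustrative sketch only, not anything described in the episode: treat "virality" as a branching factor R (how many new listeners each listener recruits) and "velocity" as how long each generation of sharing takes. A tweet and a podcast can have the same R while spreading at very different speeds.

```python
# Toy model of spread (illustrative only): "virality" is the branching
# factor R, and "velocity" is how fast each sharing generation happens.

def reach(R, generation_days, elapsed_days):
    """Total people reached after elapsed_days, with one sharing
    'generation' every generation_days and each person recruiting R more."""
    generations = elapsed_days // generation_days
    return sum(R ** g for g in range(generations + 1))

# High velocity: a tweet with R = 2 and a new generation every day.
print(reach(2, 1, 30))   # 2147483647: explosive spread within a month

# Low velocity, same virality: a podcast with R = 2 and a new generation
# every 10 days. Still compounding word of mouth, just slowly.
print(reach(2, 10, 30))  # 15
```

Both curves compound; the difference is purely in the time constant, which is one way to see why word-of-mouth media grow steadily rather than explosively.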

00:33:35 - Speaker 3: Yeah, and I think on this personal curation stuff, you need to be really willing to go against the grain, because the grain in the wood here leads down to places that are bad, and in many cases just misinformed or wrong. Especially with the kind of complex topics of the day, for various systemic reasons, the stuff that you get in the prestige outlets isn’t very good.

And so you really gotta go around and crawl and find the weird Twitter accounts. I’ve joked that on many complex topics, like the recent pandemic, a lot of the very best information on the internet came from cartoon-avatar pseudonymous Twitter accounts. I mean that 100% seriously. So you need to be willing to embrace that kind of weirdness in your own personal information curation if you want to have good outcomes, I think.

00:34:22 - Speaker 1: So what you’re speaking to there is really interesting, because you’re absolutely right, there’s been a decline in trust of journalism at large. That’s a big, big problem we’re facing right now: we don’t feel like we can trust the historical institutions we used to rely upon for this basic validation of perspective and worldview. We used to have kind of a forced feed, essentially, right? We had a forced algorithm, which was major news and media. That was our content.

00:34:47 - Speaker 2: There were three channels. They all kind of said the same thing. You could pick one based on whether you liked the color of the presenter’s tie, but they were basically giving you the same information. Forced consensus, essentially, about what is factual and what is not, yeah.

00:34:56 - Speaker 1: And so as the internet has kind of detonated and exploded this traditional media hierarchy, it’s also detonated, along with it, the trust network that we had, right, of trust in specific authorities. For a lot of people, what’s been laid bare, especially during the coronavirus, is this kind of proxy network of what I call reality anchors. We used to have media anchors, the Walter Cronkites and the Dan Rathers, who we looked to for specific points of reality: this happened, this didn’t happen. Now everyone kind of has their own proxy network of reality anchors, right? It might be the influencer on Instagram, it might be the anti-vaxxer, it might be the IDW, intellectual dark web, thought leader. It could be any broader group of humans. But those networks are tremendously on display now, in a way they weren’t previously, and I think that’s laid bare a lot of the problems of our epistemic environment, because we no longer have a specific source of consensus that we can look at collectively. We instead have this decentralized, individuated network of reality anchors that we point to.

00:36:07 - Speaker 3: Yeah, and it’s super messy, which I think circles back to a lot of your work because a lot of these so-called reality anchors, they’re not based in reality, and we’re asking every individual to come up with their own notion of reality and inevitably, most of them are wrong and all of them are at least different. And so what do you do with that? It’s an extraordinary mess, not an easy answer.

00:36:24 - Speaker 2: Right, yeah, I find myself conflicted about it. On one hand, the centralized forced consensus, as you called it, left a lot of important viewpoints out in the dark. Living in a world where I do have access to a much greater diversity of opinion and ideas and perspectives, I think I once heard, it was Megan McArdle who described the internet as, quote, a freak liberation front. And I like that because I’m interested in lots of weird things, and weird things were sort of hard to find when you lived in a world of broadcast media. There were only a few channels, only a few newspapers and what have you, and I’m basically much happier in a world where I do have access to much nichier things. But then yes, we lose that shared consensus, that shared understanding of reality.

And I would not have thought this previously, but I think in past years I’ve seen how, in a way, there is something to be said for a whole society basically agreeing about the basic facts of reality, even if some of those facts are actually wrong. If we agree together, we can at least make decisions together, whereas if we each have a totally different view of reality, we are paralyzed. And inaction maybe is worse in some ways than taking action on a false understanding that you can perhaps correct in the future as you realize, I don’t know, for example, that the concept of the food pyramid is ridiculous.

00:37:48 - Speaker 1: Totally. You know, if you look at history and go back far enough, you can see there’s a correlation between our ability to organize in larger and larger groups and our ability to kind of understand each other, share common cause, cooperate better, share good information together collectively, that kind of thing.

Good information, and I should qualify “good”: share information together at all, right, share ideas and myths together. And we kind of emerged from this fog of... we think about misinformation as being a new problem, but we actually emerged from a fog of misinformation that was just endemic in society, right? We used to believe crazy, crazy things. You just have to scratch the surface of history, you don’t have to go back that far, to see that there was this constant barrage of falsehoods or half-truths that would become accepted truths, commonly accepted truths, for long periods of time. You probably remember a handful of these from your childhood, maybe, right? Like that a Ouija board might summon the devil, or a story about aliens or something. We had this really strong forced consensus mechanism in place to keep everyone on the same page, but I think the natural state of human exchange is much more oriented towards misinformation than towards fidelity and truth. What we’ve done with the internet is just kind of backspaced to a previous era in which misinformation was emergent and common, right?

00:39:17 - Speaker 2: Yeah, that brings to mind what they call the yellow journalism era. We talked about clickbait headlines, but in fact there was a version of that back when they were selling one-off newspapers: the newsboy, probably a young man standing on a corner yelling the headline, and there were these patently ridiculous headlines about, you know, we’ve gone to war, or other things that have happened, because they just wanted to sell papers. I think eventually journalism settled on a better model, one oriented more around longer-term useful truths rather than one-off attention-grabbing headlines. But yeah, to say that this is new to our era would be quite incorrect.

00:39:56 - Speaker 1: I’ve been especially fascinated with that particular era of when we basically began to use advertising to propagate journalism, right? So the era of, it was actually the era before yellow journalism. There’s this whole century in the 1800s in which we kind of figured out that you could sell ads with newspapers. And in that process, you could also make a lot of money by making sensational claims.

The New York Sun was the first paper to do this in the early 1800s, and they began this whole enterprise of ad-based journalism by literally making up fake stories and propagating them on the corner with the guys hawking papers and getting people riled up. There was a story they put in the paper about animals escaping the zoo and marauding and killing people, extremely irresponsible journalism, that caused a mob to go out and try to find these animals and put them down, because people read it in the paper and thought it was real. It was literally just a falsehood, an entirely made-up story they wrote to sell papers. In the years after that, there was a story about bat people on the moon, that there had been astronomical observations of bat people living on the moon, life on other planets. That was a sensation, sold a lot of papers, and caused crazy confusion about life on other planets. It was a hugely popular hit.

00:41:15 - Speaker 2: Yeah, it also makes me think of Orson Welles and the War of the Worlds broadcast, and just the power of media, particularly when it’s new. I think maybe we build some, I don’t know if it’s resistance or just awareness, an ability to separate and say, well, just because I hear something on the radio doesn’t mean it’s true, or because I see it on TV doesn’t mean it’s true. And maybe we’re still in that era for a lot of these internet technologies, where, well, certainly I hope people realize that when you see something on the internet it may not be true, but maybe we haven’t yet developed the antibodies or coping techniques for when you see an outrageous tweet and immediately reach for the retweet or reply button. We haven’t quite developed the better practices for managing that.

00:41:56 - Speaker 1: I think the word is antibodies. I think that’s a great way of thinking about it like a cultural antibody against the thing.

The way that changed in that era, in the mid-1800s, was through a process of professionalization that was actually also market-driven. It wasn’t that the government got together and said, we’re going to tell you what is true and what is not true. It was a process of consumers learning that they couldn’t really trust specific papers and could trust other papers, right? So in the New York newspaper scene, the Sun and these other papers became kind of like trash papers: you fool me once, I’ll believe you; you fool me twice, I’m definitely not going to buy you again, right? And one of the papers that tacked up the brainstem, as opposed to down the brainstem, was The New York Times. They actually started to build a reputation around reporting facts, being consistent about that, and being trustworthy.

If you think about it, it’s a little bit like an iterated prisoner’s dilemma for trust and accurate information, right? In a world in which there’s nothing but crazy, outrageous headlines, month over month you’re going to learn to focus on the things that are verifiable, and reputations come to matter. They actually are built over time.

00:43:09 - Speaker 2: It does seem like the value of news is in it being true.

It gives you information about how to navigate your world. In a previous, more pre-civilization time, news that there’s a dangerous bear prowling around outside the village is really useful for keeping yourself safe, if it’s true. But if you’re spending much of your time avoiding bears that aren’t there, you’re just draining energy on something that’s not valuable.

Now, I will argue that a lot of our news and information consumption is fundamentally entertainment nowadays. I think political news, to a large degree, is essentially a kind of entertainment, kind of like following your favorite sports team, and I think that’s fine. But at least in theory, bad or incorrect information doesn’t help you navigate the world and has a maladaptive effect, and pretty quickly you’re going to see: OK, I made decisions in my life based on this incorrect information, therefore my life went less well than it could have otherwise. And eventually you’re going to want to turn that around and get true information.

00:44:09 - Speaker 1: Yeah, I think that’s an important point.

00:44:11 - Speaker 3: Yeah, I think that’s very much the case. I don’t think we’re going back to the world of mainstream journalism as the purveyor of verified facts. Even entities like The New York Times are already on the train of entertainment and aligned analysis, let’s call it, and they’re moving even harder to that model now as they catch up with Substack.

And so the question then becomes, where are we actually gonna get these true facts, which are a sort of public good: once a fact is out there, you can’t charge for it. So who’s actually going to provide it? It’s a very tough question, and right now the honest answer is almost no one, so you kind of have to do it for yourself.

And we’re starting to see now that people can fill this hole a little bit by becoming named individuals who have a reputation attached to them personally. That way you can bootstrap up into something that you can trust, in a way that you can’t trust a large institution, because an institution can launder and transform reputations, right? So it’s pretty gnarly out there right now, and I think it’s important just to be honest about that and to expect that you’re going to have to navigate it. And furthermore, whatever new equilibrium we reach isn’t going to look like the old world, so be ready for that too.

00:45:15 - Speaker 1: Yeah, I think that’s a good point about the tools that we have now.

And this might be a good moment to segue, just quickly, into my most recent article about friction. There are ways that we can potentially reorient the system to at least trend towards accurate information versus falsehoods, right? If you look at the trends of misinformation right now, it occupies the same reach as a traditional media broadcast. Some viral mistruths can go just as far and as fast, to as wide an audience, as traditional media broadcasts, where they can reach millions of people. Plandemic was a great example of that: 12 million people saw it.

So there’s potentially this concept of friction, which is basically throttling misinformation as it spreads through a network, reducing the inherent capacity for virality across the board, right? WhatsApp did this: they reduced the number of groups that you could share a specific message to. I forget what the final number was, but I think they throttled it down to 5 groups from essentially infinite, so where before you could just copy and paste a message into as many different WhatsApp groups as you wanted, they throttled that down. And that fundamentally changed the speed at which misinformation can travel. It can still travel, but it travels more slowly and is less impactful. So I think that shifting things towards a slightly slower propagation system can actually dramatically improve the types of misinformation we’re responding to on a regular basis.
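The friction idea can be sketched with a toy forwarding cascade. This is my own illustrative model, not WhatsApp’s actual system, and the group sizes and forward counts are made-up parameters: capping how many groups each person can forward to shrinks every hop of the cascade, slowing spread without blocking any message outright.

```python
# Toy forwarding cascade (illustrative only, not WhatsApp's real system):
# every sender forwards a message to `forwards_per_person` groups, and each
# group has `members_per_group` people who may forward it again.

def reach_after(hops, forwards_per_person, members_per_group=10):
    """People reached after a given number of forwarding hops."""
    reached = 1   # the original author
    senders = 1
    for _ in range(hops):
        newly_reached = senders * forwards_per_person * members_per_group
        reached += newly_reached
        senders = newly_reached
    return reached

# Unthrottled: forward to 20 groups each.
print(reach_after(2, 20))  # 40201
# Throttled to 5 groups each: same mechanics, far smaller cascade per hop.
print(reach_after(2, 5))   # 2551
```

The cap lowers the branching factor, so reaching the same audience takes more hops and therefore more time: a velocity lever rather than a censorship lever.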

00:46:55 - Speaker 3: Yeah, and as we’ve discussed, we’ve already seen some evidence for that in the different types of discussions you get in the different mediums. Again, the ones that I find most compelling these days are podcasts and email newsletters and small Twitter accounts. You can’t quote-tweet a podcast, and in many respects that’s a feature, not a bug. I’m more optimistic about those types of structural approaches than content-based approaches, because the content-based approaches are so fraught, and you’re back to the oracle problem of what’s actually true versus not. I’m sure people are going to try that and continue to try it, but yeah, I do think the structural approach is more promising.

00:47:27 - Speaker 1: Definitely, and there’s the problem of censorship and the rights around what you can say. You know, Renée DiResta, my co-author on the most recent piece, has this other article with this line: freedom of reach is not the same thing as freedom of speech. I think that’s a really important distinction when we’re talking about this stuff, because yes, you do have the ability and the right to say whatever you want, but virality is not a right. You don’t have the right to reach hundreds of millions of people with whatever you want.

00:47:57 - Speaker 2: Well, looking forward to the future, Tobias. What do you see the solutions potentially being? Is it individual information curation? Is it these companies that run these products, making these feature changes? Is it government regulation and involvement, or is it something else? Where does a happier future for our relationship with social media, with information spreading, with virality, how do we get there?

00:48:21 - Speaker 1: Yeah, I really think it’s a mix. It’s thoughtful design changes on the platform side to help us make better decisions, with better defaults about the types of information we’re regularly exposed to.

And then I think as we’re speaking, this kind of cultural antibodies piece of it, it’s like, it’s really important for us as individuals to be aware of the information ecosystem that we’re living within.

And to approach stuff with good faith skepticism, if that makes sense, and to really try to make decisions about what we’re sharing consciously and to try to push ourselves, I think, to a more reflective state.

As opposed to this more impulsive state. And I think that’s a big piece of it. You know, a quick anecdote: when I’m triggered by something online, if I feel the desire to reach for the retweet button or to rage tweet, I will invariably take four deep breaths before I do anything. In that moment, just that small act of taking four deep breaths will usually allow my emotional reaction to dissipate, so that whatever I’m about to say or do online feels like a much healthier thing.

I think just four deep breaths would go a long way in improving the type of internet we inhabit, if everyone could do something like that.

00:49:37 - Speaker 2: Couldn’t agree more. Well, let’s wrap it there. Thanks everyone for listening. If you have feedback, write to us on Twitter at @museapphq or via email, hello@museapp.com. You can help us out by leaving a review on Apple Podcasts. And Tobias, loving all the work you’re doing on this, and if I’m not mistaken, you’ve got a book coming down the pipe here somewhere. Am I correct?

00:49:59 - Speaker 1: That’s right, yes, the book is due out next year. It’s about outrage on the internet, how to navigate this strange new digital world, and how to make our digital tools better stewards of our humanity.

Discuss this episode in the Muse community

Metamuse is a podcast about tools for thought, product design & how to have good ideas.

Hosted by Mark McGranaghan and Adam Wiggins
Apple Podcasts Overcast Pocket Casts Spotify
https://museapp.com/podcast.rss