How might we address the misalignment between the outcomes of big technology companies and our broader societal objectives? Stanford professors Jeremy Weinstein and Rob Reich share insights and recommendations from their book System Error: Where Big Tech Went Wrong and How We Can Reboot.
On the heels of OpenAI CEO Sam Altman's testimony before Congress, we are releasing this important conversation from 2021. Jenny sits down with Stanford professors Rob Reich, Jeremy Weinstein, and Nate Persily to interrogate big tech's role in society. How did we get to where we are today? What interventions in the near term, from creating new ethical norms, to putting in place more democratic forms of corporate governance, to regulation, might we pursue to bring big tech into better alignment with our collective social objectives?
“Jeremy Weinstein (JW): We need an approach to these systemic problems that cultivates an ethic of responsibility in tech, that begins to think about how we change the corporate model and check the concentration of power in the hands of a small number of corporations, and then thinks about building a set of public institutions that can actually govern a technological future.”
[INTRODUCTION]
[0:00:25] Jenny Stefanotti (JS): That's Jeremy Weinstein, Stanford Political Science Professor, and co-author of System Error: Where Big Tech Went Wrong and How We Can Reboot. This is the Denizen Podcast. I'm your host and curator, Jenny Stefanotti. In this episode, we're talking about the role of big tech in society, a very hot topic these days with the recent rapid advancement in AGI.
Our guests for this episode are Jeremy Weinstein, as I mentioned he's a political science professor. Among other things, he teaches a course called Ethics, Public Policy, and Technological Change, alongside his co-authors, Rob Reich and Mehran Sahami. The book is an excellent resource for those looking for a comprehensive, yet accessible review of big technology and public policy. Jeremy and I were close colleagues when I was a fellow at the d.school, and I can attest firsthand just how high the bar is for anything that has his name on it.
Jeremy's co-author on the book and co-teacher in the course, Rob Reich, also joins us. Rob is a professor of political science. He's the director of the Center for Ethics in Society and the co-director of the Center on Philanthropy and Civil Society, both at Stanford. Jeremy and Rob are joined later in the conversation by Nate Persily. He's a professor of law at Stanford Law School. Nate was a close advisor to Denizen during some work that we did with the Clubhouse community during the 2020 election. Nate also edited a book called Social Media and Democracy, so he has thought very deeply about the topics that we discuss in this conversation.
We recorded this a while ago, so we don't get into the current state of AGI, but I wanted to release this one now as it's a valuable backdrop for those discussions, which we'll start having soon. We talk about how the United States' orientation around regulating the technology industry led to today's state of affairs, and the challenges associated with the mindsets of Silicon Valley entrepreneurs and engineers. One of the things that's really valuable about this conversation is that we don't just diagnose the problem, we get into how we might address it.
Jeremy and Rob outline their ideas about establishing more ethical awareness and training within the sector, complemented by novel corporate governance structures that take into account constituents outside of the firm, and regulation that can address the challenges that we face. I push back against some of Rob and Jeremy's recommendations, and I present some ideas of my own. Things get a little fiery for a moment there, but we ultimately come to agree that all of our approaches are needed.
One of the things that has always defined the Denizen conversations is how excited I get about introducing guests to our community and audience. I can't tell you how much respect and admiration I have for both Jeremy and Rob. They are deep thinkers and dear friends. It was an honor to have them on. I always learn so much from them every time I'm with them and I trust you'll take a lot away from this one, too.
[EPISODE]
[0:03:08] JS: One thing I remember from being at Stanford is that I really appreciated the entrepreneurial spirit that was imbued into the mindsets of the students; they really felt like they could leave and do something in the world and innovate. In the book, I think part of what you're observing is how that, in some sense, became problematic. The book itself, I feel, is really where I will be pointing people when they want to build a robust understanding of tech and society. Because I know you, Jeremy, well enough to know that this is very comprehensive and complete. The book covers privacy, it covers AI, it covers smart machines, it covers free speech.
It's an incredible interrogation of big tech and society, the trends, and what has happened. What I really appreciate is that it's more robust than a lot of what you see out there, and it's accessible to people even though it is this deep and comprehensive review. What I really, really appreciate is that you take a swipe at what solutions to the problem could look like. Because I think there are a lot of people who have been playing this role of raising awareness and identifying the problem, but there has been a dearth of really meaningful solutions. I'm really excited to get into that, too.
Rob, you have an extraordinary skill for a very tight summary of a book that you've just written. Why don't you lay the groundwork and then Jeremy, why don't you layer on any pieces that you feel like we're missing and we can start there?
[0:04:40] Rob Reich (RR): The place I'm going to begin is actually just this continuing scrutiny of Facebook in particular, but social media more generally. We are exiting a 30-year period of an initial burst of tech optimism and even utopianism, in which the people who got educated in computer science were the Davids trying to slay the industrial Goliaths. You created software in your garage, you tried to roll it out and get to scale very quickly.
The products and the dreams that were sold from Silicon Valley to young students were meant to be places where you could get rich and make the world a better place in the meantime, or at the same time, a kind of extraordinary vision that had huge effects on the talent that's gone into Silicon Valley.
Then of course, as we all know, the past five years have brought a great backlash. What was meant initially to unleash human capabilities and spread freedom and democracy turns out actually to be something that might addict us to our devices, hijack our attention, scrape all of our data from us, and in the meantime, pollute the information ecosystem with misinformation, disinformation and hate speech. In that entire 30-year period, the standard playbook was, well, let's let the geniuses inside the tech companies work their magic. If they make mistakes, there'll be an apology tour from the CEOs and the pledge that they're going to work harder inside the company to do better.
We're finally, I think, reaching a moment where instead of passively accepting as users, or citizens, the genius inventions of Silicon Valley founders and entrepreneurs, there's going to be a broader social conversation, where some of the decision-making will be broken out of the tech companies. That's just a simple way of saying, democracy and our democratic institutions and citizens are catching up to the rapid pace of innovation, where the Davids have now become the Goliaths. The biggest tech companies in the world are the biggest companies in the world, with the largest reach and the greatest scale. Mark Zuckerberg as the unelected governor of the speech environment for four billion people holds an extraordinary amount of power that no single person should have.
[0:06:49] JS: Yeah. Can we just pause and underscore this, so that people really understand what this looks like? Because we've talked about it. When tech companies go public, they are often hesitant, because they don't want to be controlled by the market, or subject to the whims of the market any more than they need to be. What started to happen when Google went public is that they issued dual-class shares. What that meant was there was one class of shares for the founders and insiders, usually just the people at the top, and another for the public at large, often with a ten-to-one voting ratio.
What this allows is that the company goes public, but control over the company ultimately stays in the hands of the founding team and the executives. In Zuck's case, in Facebook's case, he controls the company. One person. If you didn't understand this, it's a profound and stunning outcome. One person controls a company that has all of the pernicious effects on society that we know Facebook has. Okay, back to you, Rob.
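To make the voting arithmetic concrete, here is a minimal illustrative sketch. The share counts are hypothetical, not Facebook's actual capitalization, but they show how a ten-to-one dual-class structure lets insiders keep majority voting control while holding a minority economic stake.

```python
# Illustrative sketch only: hypothetical share counts, not Facebook's actual
# capitalization. Class B (founder) shares carry 10 votes each; Class A
# (public) shares carry 1 vote each.
founder_class_b = 400_000_000      # shares held by founders/insiders
public_class_a = 2_400_000_000     # shares held by public investors

founder_votes = founder_class_b * 10
public_votes = public_class_a * 1

economic_stake = founder_class_b / (founder_class_b + public_class_a)
voting_power = founder_votes / (founder_votes + public_votes)

print(f"Founders hold {economic_stake:.1%} of the shares "
      f"but control {voting_power:.1%} of the votes")
# -> Founders hold 14.3% of the shares but control 62.5% of the votes
```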
[0:07:53] RR: Yeah. If Mark Zuckerberg were making the decision for the speech environment of a 100,000 people and there were a bunch of other competitors, that might be fine. The same decision-making control in the hands of one person with four billion people is a real problem.
[0:08:06] JS: Are there four billion users? I thought –
[0:08:08] RR: Nearly four billion people on Facebook. That's right.
[0:08:11] JS: Wow, I think it was 2.8 a year ago.
[0:08:14] RR: Yeah. I mean, one of the other interesting questions for all of us to ponder is how – I mean, I don't know what the popularity of Facebook is. Most people I know have, at a minimum, a skeptical orientation to Facebook, but the company has paid almost no market price for that. It continues just to print money, like a bunch of other tech companies do, despite being so unpopular.
All right, separately, so that's the situation. End of 30 years, we're breaking some decision-making outside of the companies. Just a simple way to put this for anyone listening would be to say, Stanford created this extraordinary conveyor belt, a pipeline of talent in which you hopped on to computer science early in your undergraduate career. Then Silicon Valley and Sand Hill Road came knocking on your door by the time you were a junior or senior to try to recruit you. The path of least resistance was then to take these extraordinary technical skills and go into a startup company, or into a big tech company. That's great.
We saw a massive migration of students from the humanities and social sciences over to computer science. As the bloom has been coming off the rose of Silicon Valley, because we're now so aware of the negative social consequences that go along with some of the extraordinary benefits of big tech, we wanted first to stage a cultural intervention on campus. I, the ethicist, Jeremy, the social scientist and policy expert, and Mehran Sahami, our collaborator, who's the most popular professor on campus and teaches the introduction to computer science course, CS106A, at Stanford, got together to create a new class on ethics, public policy, and technological change.
We taught that class then for two years to people in the tech industry, hosted by Bloomberg Beta up in San Francisco. After doing this, we decided we'd try to write a book for a general audience. That just sets up the basic idea. I want to be clear here, our book is not a polemic. We are not trying to tear down technologists, or tear down Silicon Valley. The idea is that there are values encoded into technological products. The value trade-offs that technologists necessarily face in designing products have to take account, now that those products have become so powerful, of the social conversation and democratic values that also attach to that great power.
One small example and I'll pass over to Jeremy. All of us likely use a messaging platform, whether it's WhatsApp, Signal, or iMessage. Many of these have end-to-end encryption built into them. That's a great technological design to put a thumb on the scale of individual privacy. Individual privacy is a really important value. It's worth trying to protect. There are other values that go along with privacy, or in tension with it. You might remember, there was a lot of conversation about Apple's decision to scan photos that were uploaded to iCloud in order to try to detect whether there was child pornography, or evidence perhaps, of human trafficking on them.
In order to do that, you have to be able to inspect at some level the content of the photo. That puts a thumb on the scale of personal safety, or if it were a terrorist communication, perhaps national security. Who should get to make the decision about balancing between privacy, personal safety and national security? The lesson that we come to is that it's not only technologists who should have a voice in that, but a broader set of people, because these have impacts on all of us.
The book goes through a series of different particular technologies, or examples about value trade-offs, including in social media, in order to try to frame the question in a productive way and then point to some solutions in which we can rely not exclusively on the smart people inside of companies, but a set of decision-making procedures outside of companies that should improve the harnessing of social benefits and the mitigation of the harms to society. Jeremy.
[0:11:56] JW: Thanks, Rob. Thanks, Jenny, for having us. After watching Denizen unfold over my email invitations with just a mouth-watering array of discussion topics over time, it's really an honor to be here for this discussion of our book. Maybe what I'll add to what Rob said is just a bit on the meta-narrative of the book. The meta-narrative of the book begins with the recognition that new technologies are generating not just private benefits for individuals, but also social harms.
We talk about these through the lens, borrowed from economics, of externalities. Just as a firm that's generating a product can pollute the water system or the air, we're finding ourselves in a situation where it's not just about Facebook. Facebook speaks to a much more systemic problem, where because new technologies generate effects in society at a very significant scale, things like toxic hate speech, or misinformation, or the focus on privacy that Apple brings to its products, or WhatsApp or Signal, these generate social consequences that are currently not being addressed in a way that engages people outside of tech companies. That's point one.
Point two, why are we in a situation where tech companies are really generating significant social harms repeatedly with each new technology that's being created? There, we think the answer boils down to three ingredients.
First, an optimization mindset that's at the core of how technology companies do their business. An orientation towards finding the most efficient way to achieve some end, often without sufficient critical reflection on the end itself, or the proxy that's used to represent that end. Of course, we saw that on display in Frances Haugen’s testimony about Facebook's optimization metrics and ultimately, what was the cost of Facebook choosing to prioritize meaningful social engagement, or time on the platform, or whatever it might be.
The second ingredient is the very structure of the venture capital industry, which takes these design choices that are embedded in new products inside technology companies and turns them into things that can generate societal harm, by virtue of the push for market domination and rapid scale as a way of finding your way to that next unicorn that gets you the return that keeps the investors in your fund.
Then of course, the third ingredient is that over the last 26 years of the growth of Silicon Valley and the tech industry, government has been entirely absent. That absence has been a function of choice. It was a deliberate choice made by the Clinton administration, under the leadership of Bill Clinton and Al Gore, to craft a regulatory oasis around big tech that not only did the things that we know CDA 230 enabled, which is protect platforms from legal liability for user-generated content, but more broadly, dramatically scaled back the role of government in providing oversight with respect to data collection and thinking about government's role in creating guardrails around this new industry. The idea was that if we were going to win the race to create the internet superhighway and the technology platforms of the 21st century, these industries needed to be fully protected from oversight and regulation.
Those are the three ingredients that get us to the present moment. That means the solutions we need to explore are not only solutions to the particular harms that are in evidence, right? Data privacy, the lack of data portability, algorithmic auditing and the like. We also need an approach to these systemic problems that cultivates an ethic of responsibility in tech, that begins to think about how we change the corporate model and check the concentration of power in the hands of a small number of corporations, and then thinks about building a set of public institutions that can actually govern a technological future.
Because right now, we're focused on the consequences of technology today, which are the consequences of the last decade, but we need a set of institutions that are able to put us in a position over the next decade to deal with the consequences of the next set of technologies.
[0:16:15] JS: There's so much to talk about. As you know, I don't get into the more superficial problems; we are really interested in the systemic components. We're interested in the specific problems insofar as they help us relate to and understand both the systemic consequences of the current state and the systemic opportunities of alternative states. We talk about how social media degrades democracy, and we talk about how blockchain might enable new governance structures and new corporate structures. This is very much in our wheelhouse.
I want to dig in a little bit more, Jeremy, to your comment about the regulatory environment in that particular moment in time, the regulatory oasis for tech. Because it does stand in contrast to the EU and their general approach to this. Do you feel like it was a consequence of the mindsets of those administrations and their belief, analogous to the belief in Silicon Valley, that tech was this force for good in the world, and they just wanted government to get out of the way, and the general sentiment at the time was for government to get out of industry's way? Or was there something at play in terms of corporate capture? Obviously, these are interrelated. But how much do you think it was just the predominant mindset at the time, versus actual political influence by virtue of campaign finance and lobbying?
[0:17:24] JW: Yeah. I don't think it was corporate capture at that moment, although I think over time, with the growth of these companies, we've found ourselves in a position of corporate capture. You have to think back to the 1990s, which were 10 or 15 years into the deregulatory moment that we experienced not just in the United States, but also in Europe. An orientation towards minimizing the role of the state, an embrace of markets, and a disparagement of the role of our political institutions in providing the foundational guardrails that structure markets. Of course, this played out not only in tech, it also played out in the finance industry, and we saw the consequences of that in the Great Recession.
[0:18:09] JS: It paved the way for the corporate capture instead.
[0:18:11] JW: Exactly. You had this deregulatory moment. In that deregulatory moment, with that mindset, there was absolutely an intentional view that removing restrictions and minimizing oversight would be the best way to allow capital to flow to potentially transformative, private sector-driven technologies that would build up our capability to communicate with one another. Email was the beginning, platforms then followed. You can very much validate that moment. Of course, the effects in terms of the ability of the United States to dominate the technology revolution are all around us to see.
The question though, is as social harms became more and more evident, why did government continue to operate with its hands tied behind its back? That's a story about the race between disruption and democracy that's unfolded over the past 200 years, first with the telegraph, then with the telecommunications industry, now with the Internet. The EU just simply woke up to the importance of guardrails five years sooner than the United States. It's not that the EU had a fundamentally different orientation in the beginning, but the EU with its lean in the direction of a regulatory state, the opposite of the United States, got into the game sooner.
[0:19:31] JS: I might push back on that just a tiny bit, because if you just look across the board in Europe, there are different sets of policies and different structural economies. I just think there's a fundamentally different culture. Capitalism has captured the US far more than it has captured Europe. If you look at a country like Germany, where workers have the right to representation on the board once a company reaches a certain size, or you look at the Scandinavian countries, where you have just a fundamentally different redistribution of the value that gets created. I feel like that's cultural also.
[0:20:00] JW: Oh, for sure. No doubt about that. You're talking about the orientation of the state to workers. We can talk about the structure of social welfare states. Europe has a different history of the relationship between markets and political institutions than the United States does, without a doubt. Of course, 75% of new laws and regulations emanate from Brussels, right? Emanate from this new structure of the European Union. It was in the aftermath of the Snowden revelations that the EU got very focused on issues of privacy, right? Because part of the Snowden revelations was not just the violations of the privacy of citizens of the United States, but also of foreign leaders and foreign citizens, including Chancellor Merkel, right? The US went through this whole exercise of apology to foreign governments that had been spied upon in the context of the Snowden releases.
There is a cultural element there, because privacy is a value that is prized to a higher degree in Europe than it has been historically in the United States. The EU tapped into the combination of the Snowden revelations, the collaboration between the US government and technology companies, and the concerns, historically, about privacy in a region that had had communist states, states that violated people's privacy in systematic ways, to basically drive GDPR. GDPR took a couple of years to get off the ground and then became the new regulatory standard for the world.
[0:21:29] JS: Rob, you made a really important point in New York. Actually, for me, that was one of the most insightful aha moments that I pulled out of that whole conference. You painted the picture of the way the government got out of the way, the mindsets of techno-utopianism, the belief that the tech industry would do the right thing. You had something incredibly potent to say about the predominant philosophical orientation of Silicon Valley that led to that. I'd love for you to make that point, because I think it's really important.
[0:21:59] RR: Sure. I'll put it in a reductive way, but for the sake of emphasis, I think it actually does capture something important. It's no secret that the founders of many companies in Silicon Valley have a libertarian orientation, or a techno libertarian orientation, even more specifically, if you think back to John Perry Barlow, the Declaration of the Independence of Cyberspace. The digital realm was beyond the material realm, nation states don't matter anymore. That was the founding ethos of this libertarian orientation, which of course is generally suspicious of and opposed to government and government regulation.
Then the general outlook, or mindset, of your average engineer is an optimizing mindset. That's what engineers bring to the table. It's this extraordinary ability to optimize a solution to some computationally tractable problem. When you combine the libertarian mindset of the founders with the optimization mindset of their employees, what you're optimizing for in some sense is libertarianism, or the minimization of government. At the extreme of this, there's a certain kind of founder who doesn't have any principled attachment to democracy itself, or even an optimizer who is suspicious of democracy, because they look at the actual government and think it's not optimized at all. In fact, it's so slow and broken and dysfunctional.
I tell a story sometimes about having been invited to a dinner four or five years ago with a bunch of people, big names in Silicon Valley, everyone here would recognize. The topic of conversation for the evening was what would it mean to create a place on earth that was devoted to the maximal progress of science and technology? The conversation went around the table and people talked about, well, is it going to be an island, or a different plot of land? Once we have a plot of land, that's good. How do we decide who gets to be a citizen? A bunch of people at the table said quite sincerely that the appropriate and best citizenship test for this place should be the Google hiring test.
Anyway, midway through the conversation, I raised my hand and I said, "I'm the political philosopher here. What's the governance arrangement of this place? Is it a democracy or what?" The almost uniform response was absolutely not a democracy. That holds us back. That inhibits rapid progress. There should be a benevolent technocrat in charge. If the only attachment you have to democracy is whether it delivers maximally good outcomes, of course you're going to be disappointed in democracy, because it's not designed for that in the first place. It's designed to referee difficult tensions and different preferences and values amongst citizens who are equal. It's a system that's designed to put guardrails into place to prevent the worst outcomes, not to maximize the best outcomes.
This optimization mindset and the libertarian mindset of the founders are part of the problem with bringing our collective voices as citizens and our public institutions into play, because of the general ethos of Silicon Valley, but also because, of course, some of the effects of the very products, misinformation and disinformation and potential polarization among them, have created some of the dysfunction we now see in our national politics.
[0:25:07] JS: I want to interrogate this question a little bit more. Jeremy, you talked about how there was this mindset, at the time when Silicon Valley was really taking off, about getting out of the way. There was still an acknowledgement, in neoclassical economics and even in neoliberal perspectives, that there are externalities and that the role of government is to internalize those externalities. Can you just reconcile those two perspectives? The way that I tend to think about it is that there was just this fantastical thinking that that policy could actually get done, and it just didn't get done. I don't feel it's true that at the time, as you laid out, Jeremy, there wasn't an understanding that there were externalities and that the tech companies were not going to be the ones doing something about them. Help me understand that piece.
[0:25:49] JW: I guess, I'd say a couple things. First, we were on KQED this morning and a millennial called in and the question was, why is everyone being so tough on Facebook, holding Facebook to higher standards than they would hold any other company with respect to dealing with social consequences? What that question raises, or makes transparent to us is how little people generally understand the degree to which the regulatory state undergirds the market.
When we play this out with our students, we say, "Did you drink milk this morning?" A student will say, "Yeah, I drank milk this morning." "Did you get sick?" They're like, "No, it's fine." "Why do you think you didn't get sick from milk? Is it because the profit-maximizing dairy producer on its own decided to make sure that your milk was 100% safe for you, or do you think there's a set of regulatory guardrails?" The same with the clothes that we wear that are imported. The same with the way that we've structured our roadway system, right? So that we know what side to drive on and we know what to do when we drive by schools.
All of our markets are undergirded by a set of regulatory guardrails. There are some places that have more robust regulatory guardrails than others. Think about air travel and the oversight of the construction of airplanes. There are others where there is less regulation. I would say that at the moment the regulatory oasis was crafted around big tech, the social harms were not evident, right? At that moment, there was this techno-utopian view that I think really carried all the way into the Obama administration. Tech was a force for good. Tech was actually on the side of the progressives. The right wing had Wall Street and the left wing had Silicon Valley, despite its libertarian instinct.
I think there was this naive view about tech's potential and its fundamental goodness. The challenge is that it's taken us a decade plus to pivot out of that techno-utopianism to a deep recognition that the societal harms, the externalities of technology, are really no different than externalities that we see in other domains, and they demand a regulatory response. That brings us to corporate capture.
[0:28:08] JS: They're no different. I might argue that they are different and that the scale and the implications of them are far more profound.
[0:28:17] JW: Maybe what I mean by no different is conceptually, they are quite similar, the role of the state to intervene in the private market to address negative externalities. That's conceptually similar. The scale of the harm is profound.
[0:28:31] JS: I think there's something categorically different, because you regulate at the level of the state and this is something that happens at a global level, right?
[0:28:41] JW: For sure. But of course, we've got lots of global public goods that we have to grapple with in the environment and the protection of the planet is something that involves some mix of national action and international action. In the last chapter of the book, when we reflect on the role of democracy, we say, look, the two existential crises of the 21st century for democratic institutions are going to be number one, dealing with the health of our planet and number two, adapting society to a place where democracy plays a role in governing technology.
[0:29:13] JS: Rob, did you want to add something?
[0:29:14] RR: Yeah. I mean, I just want to communicate that I feel like I come down between the two of you here. There are some genuinely new challenges where a simple off-the-shelf approach of government action to contain, or internalize, externalities won't work, but a bunch of other places where it will. I'll give everyone an example that comes to mind for me. Let's say you think, as maybe Tristan Harris does, that part of the core problem is the digital ad-driven model of so many Silicon Valley companies, where the incentive structure, in order to drive revenue, is to compete for people's attention, sell it off to advertisers, do it all in a hyper-personalized way, and just keep counting the revenue dollars.
Well, if you believe that that's the core problem, then economists have long taught us that there are pretty simple strategies that government has at its disposal to put a disincentive in place for continuing with an ad-driven model and to incentivize alternatives. Impose, as the Nobel Prize-winning economist Paul Romer has suggested, a graduated digital ad tax, so that the more revenue, or the greater proportion of revenue, you get from digital advertising, the greater the tax rate you pay. You then have an incentive to experiment with other revenue strategies, say Wikipedia's donation model, or Netflix's subscription model. Those are all alternatives, and imposing a tax as a disincentive is hardly a novel idea.
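As a purely illustrative sketch of how such a disincentive could work, here is what a graduated digital ad tax might look like. The brackets and rates below are hypothetical, not Romer's actual proposal; the point is only that the marginal rate rises with ad revenue, so heavily ad-dependent firms pay proportionally more.

```python
# Hypothetical graduated digital ad tax: marginal rates rise with annual
# ad revenue. Brackets and rates are made up for illustration only.
BRACKETS = [                      # (upper bound of bracket in $, marginal rate)
    (5_000_000_000, 0.00),        # first $5B of ad revenue untaxed
    (20_000_000_000, 0.05),       # next $15B taxed at 5%
    (float("inf"), 0.10),         # everything above $20B taxed at 10%
]

def ad_tax(ad_revenue: float) -> float:
    """Tax owed on digital ad revenue under the hypothetical schedule."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if ad_revenue > lower:
            tax += (min(ad_revenue, upper) - lower) * rate
        lower = upper
    return tax

# A firm with $50B in ad revenue owes $0.75B + $3.0B = $3.75B (7.5% effective);
# a firm with $4B in ad revenue owes nothing.
print(ad_tax(50_000_000_000), ad_tax(4_000_000_000))
```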
When it comes to, say, free speech and Section 230, that's to my mind significantly more complicated, navigating between the importance of the First Amendment and freedom of expression and how to contain misinformation, disinformation, and hate speech. It's a far more complicated problem than simply imposing a tax on something. I'm in between the two of you.
[0:30:56] JS: I want to turn to solutions. Your approach to solutions was twofold: one, more novel governance structures for the companies themselves, decentralizing control from the dictator Mark Zuckerberg to constituents outside the firm. Now, Facebook's Oversight Board, I think, is an interesting example of this. Facebook established an oversight board. It is completely independent of the company. It's endowed by the company, but it has no financial stake in it. The board is composed of individuals that represent basically the planet, right? The user base of Facebook, which is the planet.
There are certain decisions related to content moderation, like where there's a dispute over content moderation, and that decision goes to the oversight board. It's binding for Facebook. It's a very important example, and I'm very proud of Brent Harris on what he's done here, of taking power outside of the company for some subset of decisions. You can imagine that happening for a much wider set of things. This is really stakeholder capitalism, you could say in action, right? Where decisions are made by stakeholders that may have no financial stake, whatsoever, in the firm. As I recall, you're outlining a revolution, frankly, in corporate governance for these companies, in conjunction with regulation. That's your take on the solution side, correct?
[0:32:10] RR: Yeah. I mean, I think we'd add just one additional element that Jeremy already touched upon, which is also developing a different kind of ethic of responsibility, or set of professional norms, amongst the group of people who go to work inside tech companies. The simple comparison here is, if you work as a lawyer, you work as a healthcare professional, you work in the biomedical domain, there's just a deep institutional footprint of professional ethics and professional norms that organizes your behavior. With sanctions if you happen to violate those norms: your license can be withdrawn, you can be debarred, or disbarred, from practicing. There's no equivalent within CS or AI, and so we'd add that to the solution set.
[0:32:50] JS: Okay. I think about this a lot, and I think about solutions. I'm really interested to dive into this with you now. Let me start with the regulation question. We talked earlier about how it started with mindsets and led to corporate capture. Why do you believe that, in an environment where there is so much corporate influence on government, that influence would not continue to inhibit the regulatory solution that you're proposing?
[0:33:18] JW: I think we have witnessed an absolute transformation in the relationship between our democratic institutions and tech companies over the last two years. That's where I get a lot of confidence and faith that transformation is possible, that we're entering a policy window where change is going to happen. Because two years ago, we were all watching the video of Senator Hatch's interaction with Mark Zuckerberg, when he asked Zuckerberg, how does Facebook make money? Your product is free. We all laughed, right? Nervously thinking, this is embarrassing. An 80-plus-year-old senator who doesn't understand the ad-driven model that drives Facebook.
Two years later, you have House Judiciary Committee investigations alongside the coordinated action of the federal Department of Justice and 43 state attorneys general, focused on issues ranging from competition and antitrust to data portability, to the conversations that we saw about algorithmic amplification. Government is upping its game, and we see that not only in Congress, but also in the Executive Branch.
The second thing that I'd say gives me confidence is that Washington, DC, where we see paralysis, polarization, and all of these challenges, is of course not the only locus of regulatory action. We see the EU as the regulator of first resort. You see California follow the GDPR with the CCPA. We have multiple arenas in which regulatory action can unfold, big markets that are beginning to exercise their influence and agency. At a time when the social harms have become so salient that elected politicians on both sides of the aisle see the political winds shifting and see the value of lining up against big tech, I don't think the power of lobbying wins.
[0:35:05] JS: We believe in government, because there's this cultural shift where for both sides of the aisle, this is the right thing to do, correct?
[0:35:15] RR: Yeah. I mean, I would perhaps be a little less dramatic than that. I'd say, you had evidence of a bipartisan agreement, in response to Frances Haugen's testimony, that regulation and action from the federal government is necessary. There are also going to be corporate incentives, company incentives. I mean, take the CCPA, which Jeremy just mentioned. It is in no tech company's interest to have 50 different states pass 50 different privacy laws, where complying with 50 different kinds of regulations is the landscape. It'll be much better, and companies will push, for there to be federal privacy legislation. That's a productive dynamic to have in place.
Beyond that, the idea here is that if we look back in history, there's a frequent race between marketplace disruption, driven by private initiative and private capital, that brings exciting new frontier technologies to market. When those frontier technologies become dominant in the market, we often see concentrated and consolidated power, and social harms become more visible. Silicon Valley's strange, principled, ahistorical orientation of "Don't let the past shackle your vision by actually learning any history" is going to be a problem for Silicon Valley here, because when we look at the past, we see sensible regulations coming out of the federal government for all kinds of emerging technologies that consolidated their power in society.
[0:36:41] JS: Okay, here's my second issue with regulatory solutions to this problem: the pace that tech moves at, relative to the pace that regulation can move at. I could make a secondary statement about the capacity of government, but I know that your answer is already going to be, yes, and we need to beef that up. That's not the issue. I still don't see how you get around the fact that tech moves so fast and government moves a lot slower. I'm curious about your thoughts on that.
[0:37:10] JW: I mean, I think, Jenny, you're naming all the reasons that, historically, democracy lags behind developments in the market. That's the relationship between the state and the market that we have adopted in the United States. We don't have an orientation like the precautionary principle. You could imagine a much more European orientation that says, no tech hits the world until we've figured out all of its potential harms. I'm not sure that that, even if it could emerge out of our democracy, is the ideal way to approach these problems. I think, instead of just recognizing that democracy is slower than technology and assuming that that renders regulation irrelevant to the problem –
[0:37:54] JS: No, I'm not saying irrelevant. It's just how do you – Yeah.
[0:37:57] JW: Yeah. Democracy needs to move faster. We've operated for 25 years with our political institutions' arms tied behind their backs. We need to do a much better job of observing those harms and responding to them more quickly. We see an example of how the institutionalization of experimentation and learning is happening, in partnership between private companies and the regulatory state, in the domain of autonomous vehicles.
Autonomous vehicles weren't rolled out in the United States in the same way that Facebook or Twitter was. It was much more consistent with what Nicole Wong, who was the former Deputy Chief Technology Officer of the United States and had been at Google and other big tech companies before, says: we need a slow food movement for tech. With autonomous vehicles, you basically create regulatory sandboxes, environments in which new technologies can be tested. You build extraordinary social science and behavioral science teams to look at the effects in real-world environments. What do we see with autonomous vehicles? People sitting in the back seat fast asleep when they're meant to be in the front seat and awake. We see the consequences of that. Then the regulatory state gets involved and says, "Well, we're going to need to create a set of standards before these things can really be scaled in the market."
[0:39:12] JS: Yeah.
[0:39:13] JW: We're going to need to build out that architecture and it's going to need to be a flexible architecture, because a social media platform and an autonomous vehicle and an AI-powered machine production facility, these are not one and the same. There's not one agency that can necessarily oversee and carry out this mandate in the same way that the FDA does for medicine.
[0:39:33] JS: I'm going to bring Nate up here.
[0:39:34] RR: Nate, you should talk about your model legislation, because we're just talking about whether the state has the capacity to rise to the challenge here, the actual existing public institutions of the federal government. You put out this piece about some model legislation, not on a regulatory front, but on a data access front that I think is super important.
[0:39:52] JS: Nate is the editor of the book Social Media and Democracy. I'd love to hear your take.
[0:39:57] Nate Persily (NP): Well, the government is broken in the US and tech is broken. You do have to choose your poison. But there are better and worse ways to try to tackle these problems. We've got to be specific about which problem we're trying to solve. Now, as Rob was saying, first of all, I do regard the legislation that I released as a form of regulation. Even though it's about researcher data access, it is about trying to figure out what the hell is going on inside these firms, right?
The reason that Frances Haugen's testimony has been so salacious and so interesting to a lot of people is that it offers a rare window into the inner workings of these companies and what research they've conducted. This should not be a rare event. We should have a steady stream of information coming out of these companies that's analyzed by independent folks in a privacy-protected, secure way, and I can get into how one does that. It's quite simple in that the FTC would be empowered to force Facebook and Google and similar companies to allow for independent researcher access in a secure facility controlled by the firm. Then there'd be vetting of the researchers, vetting of their findings, and then publishing of results.
The key here is that the only people in the world who have access to analyzing and making inferences from the data inside these companies are the employees of the companies themselves. We need to have some independent group that would do it. Let me also say, this is not like a luxury good for pointy-headed academics. The point is that if you allow for this level of transparency, it will actually affect the behavior of the firms themselves –
[0:41:33] JS: Yeah, absolutely.
[0:41:34] NP: - because they will then know that someone is going to be watching them and can potentially figure out what they did. That's where I've been focused, because I've been working on this issue of data access for four or five years now. There are plenty of other areas where you could have regulation, whether it's antitrust, privacy, and the like.
[0:41:52] JS: This point, Nate, is a perfect bridge from the regulatory piece to the governance piece. We talked about these two critical areas to address the problem at a systemic level: regulating the companies differently and having different corporate governance structures. We touched on the Oversight Board so that people got a sense of what that might look like. I'd love to hear from Rob and Jeremy: what is the thinking that you outlined in the book in terms of what these novel corporate governance structures could look like, structures that take power out of the container of the firm and give it to society, which is profoundly influenced by the activities of the firm? Obviously, the answer to that question is very different on a case-by-case basis, which raises the question of who decides what the right governance structure is for a particular tech.
[0:42:33] RR: Right. Well, I mean, I just invite everyone to think about your initial example, the one we talked about, the Facebook Oversight Board, as a worthwhile experiment in extra-company, non-governmental governance arrangements that we should just strive to have more of. Think again historically here. I think back to the creation of the Motion Picture Association of America, which is an industry self-regulatory approach for giving ratings to movies, the G, PG, R rating system we know. That's not a legislative mandate from Washington, D.C. on movies, but rather a corporate-led model of self-regulation. We should strive to find examples of those.
We should also look to the ways that Silicon Valley companies often work with outside bodies that aren't formal governmental bodies, for fact-checking purposes, for example. They also look to foreground authoritative information in different cases; around the coronavirus, they often point to the WHO. Those are all worthwhile attempts to break some decision-making out of the company and look to independent actors that aren't government.
[0:43:43] JS: But in those cases, like the Facebook Oversight Board, the company made that decision, right? The company is making that decision, and it's fundamentally disincentivized to cede power to others.
[0:43:54] RR: Yup. Correct. Well, I mean, Jenny, I'm suspecting what you would like to see more of is actual changes to the corporate code, to how a for-profit company or LLC, or registering in Delaware as a corporation, works, so that we get more things like the B corporation, or stakeholder capitalism rather than shareholder capitalism. Those are worthwhile experiments too, but those are also good, of course, for things way beyond the tech industry.
At least in our book, we don't identify capitalism as such as the target, as if solving capitalism were the way to solve the problems with the tech industry. We just think there are many, many things that we can do, even things that are low-hanging fruit, beginning with something like Nate's model legislation, federal privacy legislation, independent algorithmic auditing, and sensible approaches to thinking about how to handle the job dislocation of automation. There's a whole roster of them in the book.
[0:44:46] JS: My challenge with the solution set that you offer is that it still has these deeply opposing forces: capitalism and the systemic outcomes that we ultimately care about as a society. In this case, you're going to teach ethics to the people who go into the tech companies, so that they will have a moral backbone that will supersede whatever profit or extraction incentive they have because of their stock option package. I think that's a fragile way to address that fundamental tension, because what has happened is capitalism has essentially co-opted humanity for its own benefit.
There is this deeply internalized capitalism that drives this perpetual growth, which is partially baked into the VC model in terms of the pace at which it wants companies to grow, to the points you made earlier, Jeremy. It's fundamentally baked into capitalism.
[0:45:42] JW: I very much appreciate where you're coming from. Of course, when you're writing a book and you're focusing on problems, you have to draw a circle around something, because of course we could say there's a deep and profound rot in democracy, and unless we fix that problem too, we're not going to make progress. You're absolutely right to identify core aspects of capitalism as problematic. Let me say two things. The first is that, yes, while you could pick up the notion of creating professional ethics for technologists and say, on its own, it's a fragile foundation, the argument of the book is that it's one of three pillars of a response to the problem.
[0:46:18] JS: I understand. I understand.
[0:46:19] JW: On its own, it's weak, but it needs to be part of a solution space. Then the second thing to say is that I want to push back on you and hear you talk about Europe again, because you brought Europe into the conversation previously. The relationship that we have between capitalism and democracy in the United States is unique to the United States. The relationship between capitalism and political institutions is different in Scandinavia. It's different in France and Germany.
[0:46:46] JS: That’s true.
[0:46:47] JW: Big questions in economics and political science for a long time have been, well, why doesn't the United States have a social welfare state that looks like Europe's, right? Because Europe is a place that guarantees that basic standard of living, that has a much healthier relationship between the market and potential environmental and social consequences, that creates a pathway for upward social mobility that doesn't exist in the United States. You didn't need to burn down capitalism to achieve a better balance of these values. You just created a much stronger role for the regulatory state and the social welfare state, which is something that we could choose, but have chosen not to choose, in the United States.
[0:47:27] JS: Yes, it's a better balance. This is why I'm pleased with the trends in capitalism towards, to Rob's point, B Corps and inclusive capitalism and stakeholder capitalism. I entered this conversation, and this inquiry that I've been on for a year plus, holding the central question of: can we evolve capitalism, or do we need to challenge its fundamental tenets? Yes, you have a better variant of it in Europe, but we still have climate change.
[0:47:51] NP: I mean, while we wait for the proletarian revolution, though, we still have to solve the problems –
[0:47:56] JS: A hundred percent. A hundred percent. Let me offer a proximate solution that is different from yours. I think that people desperately want to get off Facebook and they want an alternative to Facebook. Let me give the Facebook example, right? We understand that Facebook is delivering a product that people appreciate, and even though they understand the systemic implications, there's no alternative for them that doesn't involve too much of a trade-off. They're not willing to get off Facebook altogether. Very few people are. The same goes for Amazon.
The default is, I'm used to pulling my phone out of my back pocket and, within a couple of clicks, getting what I want. With Jeff Bezos and Amazon, all the unethical things it does, my moral compass around that doesn't offset the consumer convenience that I'm accustomed to. But what if, like Elon did with Tesla, we could build equivalent versions that don't have all the fundamentally problematic incentives, just move everybody over, and suck big tech dry of its resources? That would be my different take on how to address this.
Look at, for example, Signal versus WhatsApp. One of them is supporting this grotesquely extractive variation of capitalism that's ripping apart the fabric of society. The other one represents what I believe is this future model of governance of a firm, where purpose is actually primary instead of profit.
[0:49:15] JW: I got to challenge the Signal story.
[0:49:17] RR: Yeah, me too.
[0:49:18] JS: Yeah, go for it.
[0:49:20] JW: Signal is awesome for privacy. It's not awesome for all sorts of other things that I care about in the world, like preventing governments from organizing and carrying out mass violence, or traffickers from organizing to carry people across borders, or terrorist groups from coordinating their activity. If the only thing you care about is privacy, Signal is awesome. Why do the founders of Signal get to decide for the rest of us that privacy is prioritized over every other value?
[0:49:47] JS: I appreciate that.
[0:49:47] JW: It’s no different than any of the other companies that we're talking about.
[0:49:51] RR: I have a similar spirit of skepticism about Signal as an example, but for different reasons than Jeremy. I might be off in the details here. Jenny, correct me if I'm wrong. The folks who set up Signal, one of the things that's interesting about it, is that it's owned by a foundation and then run for a social mission, this particular mission of ensuring privacy in our messaging communication. Effectively, it's an organization that is parasitic on people having first made a mountain of money in their ordinary capitalist life.
[0:50:22] JS: This is great. This is why I was so excited to have this conversation, because if there was a difference between us, there was some middle ground to be had. Jeremy, I really appreciate that, because I didn't see Signal through that lens before. Now we're in the territory of trade-offs, right? The reason Signal gets me excited is because it's an LLC that's owned by a 501(c)(3), so it sits within the suite of corporate structures called steward ownership, where it's very clear that purpose takes precedence over profit. You can still use profit to scale, but it's not extracted in the way that it is in a typical for-profit company.
Jeremy raises the point that I may have values about extractive capitalism in the world, but I also have values about privacy in the world. In the case of WhatsApp versus Signal, I may choose Facebook over the privacy issue. That's a great example of what I ultimately have in mind when I think about almost a decentralization of social value creation from the government to the firm. It's a proliferation of companies that represent different value structures, where consumers and employees and investors get to vote, essentially, with who they're buying from and who they're working for and who they're investing in, for whatever set of values they care about.
Imagine if instead of there just being Signal, where you have to make the binary trade-off between WhatsApp, Facebook's product, and Signal on the privacy issue, there were a proliferation of messaging apps, a competitive market that would represent the heterogeneity of value preferences in society. Are you guys tracking me?
[0:51:58] JW: I think we're not actually that far apart from one another. What's that, Nate?
[0:52:01] JS: Nate, what do you want to say?
[0:52:02] NP: No, I was going to say that there are dozens of messaging apps, right? I mean, it depends on how we define it. I don't mean just things like text messaging and WhatsApp and Messenger and stuff like that, but Discord and Twitch and all kinds of things where you can engage in messaging. Ultimately, the question comes down to, if you're trying to create that world, or any of these other worlds that we're talking about, you need some source of authority and power. Either it's going to be the companies themselves, it's going to be the governments, or it's going to be some other institution that you come up with. Signal exists. There could be a hundred other experiments like that. Most of them will fail, right?
The question is, is there a set of legal rules that can incentivize the creation of certain types of corporations that you think are salutary, or do you need rich investors to just start experimenting?
[0:52:51] JS: This is a great example of where, if you're a systems strategist around this stuff and, as a government, you don't want big tech companies that have fundamentally misaligned values with society, then you have this very simple container, which is corporations that have these governance structures. It's a marginal solution, but we have it now with the public benefit corporation. It creates a very clear subset of the economy where the government can then put in place incentives, whether that be tax incentives, or procurement preferences, or whatever, for companies to take those forms. That's a way that policy can incentivize this fundamental structural economic reform.
[0:53:24] JW: I just want to say two things on this point. The first is that the pathway to that diversity of offerings that you're describing is enabled by the whole set of regulatory reforms that we described in the book. The idea that we're going to get to this diversity of offerings in the absence of rules about system interoperability, portability, antitrust enforcement, and the policing of mergers and acquisitions is fanciful. The reason we're not seeing the emergence of a diversity of firms is that the policy guardrails are not in place –
[0:53:59] JS: No, I appreciate that. Yeah.
[0:54:01] JW: – to create the space for that. The second thing that I just want to say is that I feel like you're putting the non-profit motive on a pedestal, as if somehow, because an organization is non-profit, it brings some legitimacy to making decisions that potentially affect lots of folks. I wouldn't say there's any special legitimacy to the non-profit motive, and Signal is the perfect example of why. If you're building a technology, not-for-profit, that makes it impossible for the parts of the surveillance state that protect things we care about, like protection from terrorist attacks and prevention of child trafficking and human exploitation, to do their work, then it's no different than a company making that choice. What matters is the outcome.
[0:54:47] JS: Yeah, totally. This gets into why, when you look at the themes of the Denizen conversation, we have themes around culture and narratives and spirituality and consciousness. For me, non-profits don't have these fundamentally, deeply misaligned incentives, but it also has to sit within a cultural, values-based, frankly spiritual container where humans are driven by purpose and are collectively as well as individually oriented.
[0:55:12] RR: I mean, I have to say that there's something deeply appealing about how you just described this, and maybe the common ground between us at the end of this conversation is that there's nothing I can think of to say at the moment against experimenting, in the way that you're proposing, Jenny, with a set of different corporate forms that have better alignment with collective or shared value. Sign me up for those experiments; I want to see how they run. I'll add just one small, maybe call it a meta point or a foundational point here. Just a small piece of skepticism. Not about the experiment, but about the big picture.
There's something in the way that you described it, Jenny, at least as I sometimes interpret it, as if there's a holistic vision: the spiritual transformation will bring about a coalescence of intentions, of will, for the collective benefit. I think I'm in favor of a world in which there's pluralism and competing, agonistic orientations, in which a balance of powers is held in tension, rather than some holistic unity across corporate forms, government forms, and non-profit forms.
[0:56:23] JS: A hundred percent. No, I didn't mean to imply a universal point of view. I mean, I haven't really gotten into it, but what is the future of the nation state? There are just the challenges of aggregating preferences at various levels. I might just intuitively feel like you want to decentralize as much as you can, so that people's preferences don't have to be compromised, right?
[0:56:43] RR: Yup. Agreed.
[0:56:45] JS: Yeah, I think you would have a heterogeneity of preferences around spirituality and culture. This is why I think it's interesting to look at that getting expressed in a competitive environment at the level of the firm. I appreciate your point, Jeremy, because a lot of my attention so far has been on really interrogating the economic piece of it. There's a whole inquiry around trying to understand what an end state that might work looks like. Then there's a question of how you get there. I really appreciate, Jeremy, you making the point that getting there is not possible without the regulatory things that you're suggesting here.
I mean, I think part of the challenge, to your point, is that we have to create a container that's reasonable to address, right? At the same time, the system is so interrelated that you have to ask what you're leaving out.
[0:57:32] JW: We need an end state that's desirable, that reflects the kinds of values that we're trying to integrate. We also need things to do in the meantime.
[0:57:40] JS: No, a hundred percent.
[0:57:40] JW: We need to create a pathway. I think our book tries to straddle these two worlds with point solutions to some of the problems of the present and then a pathway towards a different technological future. I think in this conversation today, but also the broader work you're doing with Denizen, you are really tackling the kinds of questions that would never get a hearing in Washington, right? They're too big. They're too meta with respect to the future role of the state and the future structure of our economy.
In the same way that Rob says, sign me up for experiments, part of the way that we're going to help people imagine a different future is by experimenting and demonstrating what's possible. Because otherwise, people just internalize all the constraints.
[0:58:21] JS: I feel like we've covered a lot of ground. I know that your book has such deep, rich, potent information about tech and society.
[END OF EPISODE]
[0:58:31] JS: Thank you so much for listening. Thanks to Scott Hansen, also known as Tycho, for our musical signature. In addition to this podcast, you can find resources for each episode on our website, www.becomingdenizen.com, including transcripts and background materials. For our most essential topics, like universal basic income, decentralized social media and long-term capitalism, we also have posts summarizing our research, which make it easy for listeners to very quickly get an overview of these particularly important and foundational topics.
On our website, you can also sign up for our newsletter, where we bring our weekly podcast to your inbox, alongside other relevant Denizen information. Subscribers are invited to join our podcast recordings and engage with the Denizen community in our online home, The Den. We're partnered with some incredible organizations at the forefront of the change that we talk about. We share announcements from them in our newsletter as well.
Finally, this podcast is made possible by support from the Denizen community and listeners like you. Denizen's content will always be free. Offering Denizen as a gift models a relational rather than a transactional economy, enabling Denizen to embody the change that we talk about on this podcast. It's through the reciprocity of listeners like you that we are able to continue producing this content. You can support us, or learn more about our gift model, on our website. Again, that's www.becomingdenizen.com. Thanks again for listening, and I hope you'll join us next time.
[END]