How Frances Haugen Is Empowering Future Tech Leaders


Frances Haugen became known around the world in 2021 as the whistleblower who disclosed tens of thousands of pages of internal Facebook documents revealing what the company knew about issues ranging from how its platforms were harming teenagers’ mental health to how they allowed the spread of misinformation. Now she has her sights trained on equipping the next generation of tech leaders with tools to make the world a better place.

In the year since Haugen blew the whistle on the company, which has since rebranded as Meta, discourse around Big Tech has been increasingly dominated by scrutiny of the ways some of the most significant technological advances of the past two decades are harming vulnerable communities, stoking division, and weakening democracy.


Now, social media’s biggest players are facing growing calls for both accountability and regulatory action—a reckoning that’s focused on how to blunt the effects of harmful platforms and products after they’re built. But what if the engineers and developers behind those innovations had reflected on potential harms at the ideas stage rather than working backward to address concerns after the fact? What if those technologies were never engineered in the first place?

These are the types of questions that Haugen is working to bring into the classroom, as part not only of engineering curricula but of broader education. As new technologies change almost every facet of modern life, Haugen is developing simulated social networks that could help strengthen students’ ability to recognize ethical and professional responsibilities and make informed judgments.

“If we had a simulated social network, we could teach classes where we actually put students in [different professionals’] shoes,” Haugen tells TIME. “We could teach a more quantitative version where students have to pull the data out themselves and analyze it. And we could teach a less quantitative one that’s more about analytical thinking to political science majors where they would still learn about the decision-making process of weighing trade-offs, but they wouldn’t have to pull the data themselves. There’s a real opportunity to bring a wide diversity of people to the table.”


TIME spoke with Haugen about the evolution of engineering education, the idea of “design refusal,” and how simulated social networks could lead to a better future.

This interview has been condensed and edited for clarity.

You’ve spoken about developing simulated social networks that would allow educators to train the next generation of social media entrepreneurs and content moderators. What would that accomplish?

[My team] and I are in the middle of founding a nonprofit that’s focused on the idea of how we got here. If we’re going to identify the root cause of the “problem of social media,” it’s not flaws in people and it’s not malicious actors. It’s that not enough people were sitting at the table. It’s that we had these systems that were substantially more opaque than previous technologies of similar power. And that meant that there was never a parallel evolution of an oversight function in society. When we talk about things like cars, we’ve always been able to take cars apart. It took a long time to prove that leaded gasoline was a problem, but we could do that without the involvement of the oil companies, for example. In the case of social media, because all we can see is our own feeds, when people see problems, they have no way of knowing how representative those are. And the platforms actively took advantage of the fact that they had that opacity and it shielded them.

So we’re focused on this idea of how do you bring, say, a million meaningfully informed people to the table. And they don’t all have to have the same skills. In any other similarly powerful industry, you have an ecosystem of accountability that grows up around that industry. So you have litigators who understand what a cut corner looks like and who hold people accountable. You have investors who understand what long-term success looks like. You have legislative aides who understand what’s feasible. You have informed citizens, like Mothers Against Drunk Driving, who keep safety and the public good at the forefront of attention. When it comes to social media, all those things are missing.

So one of the tools that we think is important to add to the pedagogical wheelhouse is a simulated social network. And that has a couple of different motivations. The first is that if we had a simulated social network, we’d be able to teach certain kinds of classes that are non-existent today. We think about social networks in a very ahistorical kind of way. Like when I talk about the Facebook of 2008, it was a profoundly different product from the Facebook of 2018, which was a profoundly different product from the Facebook of today. And we don’t teach classes on the differences between the different iterations of a product.


A second motivation is that the way we teach how to think about industrial-scale AI systems is fundamentally flawed. We teach data science. We teach the process of being analytical about these systems using problems where we presume there are answers to be found. When we talk about industrial machine learning, we stop having clean answers. When I’m comparing version six and version seven of this industrial machine learning system, it’s to say, is it better? The thing is, to answer that question you have to add, for whom is it better? There are going to be 20 or 30 stakeholders, and some of those stakeholders are going to benefit and some are going to pay costs, and you’re still going to have to decide, do you [produce the product]? And that’s going to happen over and over and over again, with every single change.

Right now, we don’t teach people to think that way until they show up at Facebook or Google or one of the other big industrial machine learning places. If we had a simulated social network, we could teach classes where we actually put students in those shoes.

Can you give an example of how one of these lessons might play out in the classroom?

So let’s imagine coming to class and you’re asked a question like, should you have to click on a link before you reshare it? It sounds like a really obvious win. Across the board, experiments at Twitter, Facebook, wherever, have shown that if you say, “Hey, you have to click on a link before you reshare it,” or you at least prompt someone with a link before they reshare it, you spread less misinformation. In the case of Facebook, if you require people to click on a link before they reshare it, it’s 10-15% less misinformation. Twitter went ahead and did it, and Facebook didn’t. So there must be something more there that we’re missing. In either case, one of the trade-offs is you will have less content circulating on your system, people will spend less time on your system, and you won’t make as much money from ads.

So if you end up doing anything that causes a little bit more friction on reshares, you’re going to see substantially less content being circulated. So imagine a class where we showed up and you played the new user team and I played the ads team and someone else played the non-English-speaking user team and someone else played the children’s team, and we all got to look at the data and say, I’m an advocate for my team and I’m going to say ship or don’t ship, and we’re going to have to negotiate together. What’s crazy is, people don’t tell college students this, but if you’re a data scientist, at least 40% of your job, maybe 50% of your job, is going to be communication and negotiations. So those kinds of classes are participatory. And in the end, you get to develop those thought patterns.
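To give a sense of how such an exercise might boil down in practice, here is a minimal, purely hypothetical sketch in Python. The stakeholder teams, metrics, and numbers are invented for illustration and are not drawn from any real platform’s data; each team votes on the same reshare-friction experiment based only on the metric it cares about.

```python
# Hypothetical teaching example: the metrics, team names, and figures below
# are invented to illustrate the trade-off, not taken from real platform data.

# Simulated results of a "click before you reshare" experiment on a toy network.
# Each entry: metric -> (control value, treatment value, higher_is_better)
EXPERIMENT = {
    "misinformation_reshares_per_1k": (12.0, 10.4, False),  # fewer bad reshares
    "total_reshares_per_user":        (3.1, 2.7, True),     # less content circulating
    "ad_revenue_per_user_usd":        (0.42, 0.39, True),    # fewer impressions
    "new_user_day7_retention":        (0.61, 0.62, True),    # slightly calmer feed
}

# Each stakeholder team advocates for the single metric it cares about most.
STAKEHOLDER_METRIC = {
    "integrity team": "misinformation_reshares_per_1k",
    "growth team":    "total_reshares_per_user",
    "ads team":       "ad_revenue_per_user_usd",
    "new user team":  "new_user_day7_retention",
}

def vote(control, treatment, higher_is_better):
    """Each team votes purely on its own metric; the class then negotiates."""
    improved = treatment > control if higher_is_better else treatment < control
    return "ship" if improved else "don't ship"

for team, metric in STAKEHOLDER_METRIC.items():
    control, treatment, higher_is_better = EXPERIMENT[metric]
    change = (treatment - control) / control * 100
    print(f"{team:>15}: {metric} {control} -> {treatment} "
          f"({change:+.1f}%), votes {vote(control, treatment, higher_is_better)}")
```

Every team’s vote is defensible on its own metric; the negotiation over whether to ship anyway is exactly the judgment call Haugen wants students to practice.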


In fall 2019, a student-led public interest technology project team at Olin College, your alma mater, pioneered the term “design refusal.” Do you view this concept of deciding not to undertake projects or build technologies that may cause harm to the public as a growing force in engineering education?

The thing to keep in mind when we talk about “engineering education” is that it’s really diverse. There are programs that are running 20 years behind even the median. So to categorize engineering education is really hard. But if you look across some of the more progressive programs, at the Ivies, for example, they’re starting to do more on integrating ethics education into everyday lessons, teaching about the different needs of different groups, and considering all these things. So on the leading edge of people who are asking, “How should we educate engineering leaders in a world where we know there are consequences to technology?” there’s definitely way more conversation now than there was 10 years ago.

Design refusal is almost intrinsically about trade-offs. It’s about the idea that you as an individual might sacrifice for the greater good. And not all programs have the same level of acknowledgement of the fact that engineers exist in society; they do not exist separately from society. So that’s difficult to contextualize.

What kind of reckoning is possible in the tech industry if these kinds of shifts in thinking continue to gain steam?

One thing I’m always telling students is we’re not asking them to be destitute. We’re asking them to be 10% less lucrative, or to expose themselves to a little bit more ambiguity on the path to being more lucrative. You can do lots and lots of wonderful things with technology and make lots and lots of money without stepping on landmines. But you have to do it intentionally. And I think that idea appeals a lot to Gen Z, because they’ve lived with much worse consequences of technology than, say, millennials did. When I was in college, there weren’t really students who were agonizing over the fact that they had gotten engineering degrees. Students weren’t saying things like, “I just got an engineering degree and I don’t know if I can use it.” Things like design refusal help people feel less powerless. When you categorize things as binary, as either good or bad, in some ways, you strip power from individuals. It’s much more constructive to come in and say, “Hey, let’s talk about how you can remain a moral agent now that you’ve been given more power.” That’s a really positive thing. Even at the level of individual happiness, it empowers people.

We are entering a new era where we have to, as a civilization, think about what our relationship is with technology. One of the top things I talk about is the idea that every year, a larger share of our economy is going to be run by opaque systems. As we move to having more of the important bits be on server farms, be on chips that are opaque, be in black-box AI systems, the only people who are going to understand how those systems work are the individuals inside the companies. And it may be that part of the governance institutions that we have to develop as a civilization to live with those new technologies is actually educating individuals who work at those companies on what their obligations are to society.

Write to Megan McCluskey at [email protected].


