“This research is the definition of a bombshell,” said Senator Richard Blumenthal, the chair of the U.S. Senate committee that heard the testimony of Frances Haugen on October 5. Haugen is the former Facebook product manager who leaked documents to The Wall Street Journal and the U.S. Congress showing the extent of the social-media company’s knowledge about harm from its products—including mental-health problems among teenagers and toxic political engagement driven by incendiary, divisive content that keeps users on the site. As Haugen’s materials demonstrated, Facebook’s executives knowingly misled the public about their actions, and—despite years of mounting pressure following revelations about Russian trolls and bots flooding the platform in 2016 to drive voters toward Donald Trump—the company’s leadership has continued to ignore warnings about the damage Facebook does in the world. And with almost 3 billion users across its platforms, it’s a lot of damage. Can it be fixed?
Sandeep Vaheesan is the legal director at Open Markets Institute in Washington. For Vaheesan, part of the issue is in Facebook’s unprecedented size, but the corporation’s behavior is ultimately driven by its business model, which relies on gathering as much data as possible about individual users so it can promise advertisers optimal access to potential customers. And it’s hard to fix a company whose core problem is its business model.
Michael Bluhm: Can this problem be solved?
Sandeep Vaheesan: It can, but it’s not going to be easy. Facebook has to be reconstructed from the ground up if we’re to have a socially beneficial version of it.
There are at least two fundamental problems with Facebook. First, it’s extraordinarily large. It has four services under its umbrella—Facebook, Instagram, WhatsApp, and Facebook Messenger—each of which has over 1 billion users. Its reach is global, and some of its services—notably WhatsApp—are even more important in Latin America, Africa, and Asia than they are in the United States.
The second issue is its business model. Facebook is an advertising company. In contrast with older advertising-supported media like TV stations or newspapers, Facebook compiles granular information about users—what they like, what they want, what they fear, what they buy—and uses that information to build detailed profiles on each one of us.
This business model is surveillance advertising. They surveil each one of us, so they can tell producers of goods and services, We can reach your prospective customers better than anyone else. They want to reach as many users as possible, and that means they prioritize and elevate content that’s likely to attract viewers—and that content is often false, inflammatory material that draws people in.
Bluhm: As to Facebook’s size, are there historical parallels to this global scope and power?
Vaheesan: We’ve had global brands before—Coca-Cola is probably the paradigmatic example. But we haven’t had a communications conduit quite like Facebook before, where people are using its services to reach one another.
In virtually every nation, Facebook is a communications conduit that people use to stay in touch with family and friends and to message coworkers. You can think of historically large and powerful multinational corporations, but the communication aspect of a single firm connecting billions of people is quite unprecedented.
Bluhm: You make the point that it’s a company with a global reach. Congress is holding hearings and will almost certainly consider new regulation, but does there need to be international cooperation on Facebook? Different countries might create different regulations; for instance, Europe already has stricter privacy regulation with the General Data Protection Regulation (GDPR).
Vaheesan: You’re seeing a global concern with the surveillance-advertising business model, as well as the size and power of companies like Facebook and Google. Ideally, we would see a coordinated international response, where jurisdictions like the United States, European Union, Japan, South Korea, and India pursue similar if not harmonized approaches to addressing the power and practices of Facebook.
Apart from that, major jurisdictions like the United States and European Union can provide global benefits through aggressive legislative reforms and regulatory action. If the United States forces Facebook to end surveillance advertising, and perhaps spin off WhatsApp and Instagram, one path forward is for Facebook to conclude that it’s simply easier to comply with the high public standards established in the United States and implement them around the world.
You’ve already seen that with GDPR, where Facebook and other companies comply—at least in part—with GDPR outside of the European Union. Even in the absence of global coordination, you can see significant global benefits from aggressive action by the U.S. and EU, given their size and importance to Facebook.
Bluhm: Is there any sort of historical parallel with governments worldwide coordinating a response to a multinational corporation’s damaging behavior?
Vaheesan: The recent example is probably Microsoft in the late ‘90s and early 2000s, where the United States, European Union, South Korea, and other states took action against Microsoft’s monopolistic practices. The specifics of the cases weren’t identical, but you saw a shared international concern with Microsoft’s monopoly power over the Windows operating system, as well as some of the practices Microsoft had used to marginalize and exclude competitive threats like Netscape and Sun Microsystems. We have an example from the past 25 years that provides a helpful template.
But Facebook has features analogous to other network industries—telecommunications, the electric transmission grid—serving as pipes connecting different people in a larger system. Like those industries, Facebook does benefit from greater scale: The fact that billions of people are on it makes it an extraordinarily attractive network for users. People want to be on the network where their family and friends are.
A basic question that members of Congress and regulators have to ask themselves and the public is, What does a socially benign version of Facebook look like? One model is for Facebook to serve as a “dumb pipe” that connects different people, with advertising not quite as central. We take it for granted when we pick up the phone that we’re not going to hear ads when we talk with our family and friends.
We have that system because of a series of legislative and regulatory choices, starting in the early 20th century, to ensure that the telephone system would function as a neutral conduit, one in which companies make money through subscription fees rather than advertising revenue. Maybe the telephone system is one template to reconstruct Facebook in a socially beneficial way.
Bluhm: Do legislators and regulators already have some vision of that future version of Facebook?
Vaheesan: There are several competing visions. They’re not necessarily stated out loud by members of Congress and regulators.
Facebook is a nominally free service, so one challenge is that legal reform could force Facebook to charge subscription fees instead of relying on advertising revenue. Facebook will say, The federal government’s trying to take away your free Facebook and force us to charge subscription fees. Transitioning from the current system—almost exclusively dependent on surveillance advertising—to something like the phone network, where users pay $10 or $20 a month, would be a political challenge. But it would be worthwhile, given the social, economic, and political harms from the existing business model.
Another radical break from the status quo would be reform that prohibits surveillance advertising—this pervasive tracking of users to serve up precisely targeted advertising and other content. It would allow for contextual advertising, where Facebook is allowed to serve up ads—just not based on surveilling us online and offline.
Imagine a group dedicated to basketball on Facebook. Under a contextual advertising model, Facebook would be allowed to serve ads related to the NBA, or to shoes and apparel, to people who enter that group on Facebook. That’s much more similar to the advertising model that we associate with TV and newspapers—sporting goods companies might choose to advertise on ESPN. Facebook would continue to make money on advertising but wouldn’t be allowed to surveil us as a way of serving up targeted ads.
Bluhm: During the congressional hearings with the whistleblower, it seemed as though Congress is finally ready to act on Facebook. President Joe Biden has appointed regulatory officials—Lina Khan at the Federal Trade Commission (FTC) and Jonathan Kanter at the Justice Department (DOJ)—who have long advocated for regulating Facebook and Big Tech. Under these circumstances, how likely is it that Facebook is going to make a serious effort at reform to stave off government intervention?
Vaheesan: There’s effectively zero probability that they will fundamentally remake their business model. Their current surveillance-advertising model is simply too profitable. They’ve done it for too long to abruptly abandon it because of a whistleblower or even a series of whistleblowers.
You’re more likely to see a fairly aggressive PR campaign by Facebook to say that they’re a socially responsible actor. You might even see certain modest steps; maybe you’ll see more activity around misinformation related to the COVID vaccines.
Real change is going to happen because lawmakers and regulators force them to change. They sense the pressure. It’s abundantly clear. But their current business model is almost exclusively built on surveillance advertising. More than 95 percent of their revenues come from selling ads. It is fundamental to who they are as a company. Real change is going to come from Congress and agencies like the Federal Trade Commission.
Bluhm: What would a ban on targeted advertising do?
Vaheesan: A ban on surveillance advertising would prohibit the tracking of users online, as well as offline through assorted smart devices and GPS, to build personal profiles to target ads based on a person’s desires, wants, and fears. It would tell companies like Facebook and Google that they cannot surveil users as a method of precisely serving up ads anymore.
Bluhm: How would a ban on targeted advertising help with the problems that these recent scandals reveal? For example, one problem that the whistleblower mentioned was political polarization. But aren’t users going to post partisan content, regardless of whether there’s targeted advertising on Facebook?
Vaheesan: That’s right. I don’t think the solution is for Facebook to engage in content moderation—certainly not content moderation of its own design. The rules should be set by publicly accountable entities like Congress, not private monopolists like Facebook.
The problem with Facebook isn’t that it allows people to post what they believe, regardless of its truth or merit; it’s that the Facebook business model leads to the dissemination of specific kinds of content. Because they are looking to reach as many eyeballs as possible, they value provocative, incendiary, conspiratorial content—which is likely to attract eyeballs—over content that’s more sober, level-headed, and balanced. It’s less about what Facebook allows users to post and more about what the Facebook business model disseminates.
Whether it’s conspiracies about the Rohingya in Burma or theories connecting the COVID vaccines with assorted ailments, these things aren’t true. But surveillance advertising means that the Facebook algorithm amplifies these types of content over other types of content that might not be as attractive. What gets disseminated is often the most sensationalist, provocative material, and that’s the core problem with the surveillance-advertising model.
If you took that model away and forced Facebook to rely more on subscription fees or contextual advertising, this amplification of sensationalist and incendiary content would be diminished significantly.
Bluhm: You’ve also advocated spinning off Instagram and WhatsApp from Facebook, to promote competition and deal with Facebook’s enormous size. But how would greater competition solve problems of misinformation, polarization, or body-image angst for teenagers?
Vaheesan: That’s a really good question. The size of Facebook is a problem in and of itself, as we saw on October 6, when Facebook, Instagram, and WhatsApp all went down simultaneously, because they’re all owned and operated by Facebook. A significant technical failure can affect all three, which is not something you’d have if the companies were under independent ownership.
Forcing Facebook to spin off Instagram and WhatsApp would not solve the problems you mentioned. The key question is not whether to dial competition up or down but to figure out what types of competition we want among businesses.
There are some very insidious forms of competition that the law already restricts or prohibits. It would be a mistake to assume that breaking up Facebook would automatically lead to the type of competition that we desire. One happy story is if we split these companies up, Instagram and Facebook will compete against each other to offer more privacy protection. Instagram will say, Spend more time on Instagram—we collect less of your personal information.
But another form of competition that could happen is these companies go to advertisers and say, We have a more detailed personal profile on our users than the other guy. Why don’t you spend more money on our platform?
Breaking up these companies is necessary but not sufficient. We need to think about the basic business models and the types of competition that these firms are allowed to engage in. That means setting rules on various practices, including surveillance advertising. Without prohibiting or restricting surveillance advertising, there’s no reason to assume that simply introducing more firms into social networking is going to lead to increased privacy protections. That requires wishful thinking and ignores how these companies are making money.
Bluhm: Another major target for reformers is Section 230 of the Communications Decency Act. Section 230 grants immunity to platforms such as Facebook and Google from any lawsuits concerning the content on their sites. In other words, they aren’t liable for what anyone says in a post or an ad on their platforms. How would reform of Section 230 work? Is it possible for Facebook to even monitor the content that billions of users are posting?
Vaheesan: Section 230 is an odd fit for Facebook. The original premise of 230 dates to the 1990s and early online bulletin boards. Section 230 said bulletin boards shouldn’t be held liable for users posting libelous or other illegal content on their boards. That made sense, but Facebook engages in active curation: It decides what we see every time we log in, so Facebook is quite different from bulletin boards.
Reform of 230 would put Facebook in the position of either doing more policing of content or behaving more like the bulletin boards of 1995. Repealing 230 would not necessarily force Facebook to hire millions of people to review and approve content. It would give Facebook the option of remaking its business model and doing much less curation than it does now.
Bluhm: You mentioned Facebook remaking its business model. Earlier, you said there was virtually no chance that they would abandon the surveillance-advertising model. But other corporations and industries in the past have changed when they realized that their model was not sustainable over the long term. Doesn’t Facebook have the same self-interest in finding a long-term model, if this one is causing too many problems?
Vaheesan: I don’t want to dismiss the public benefits of self-regulation undertaken in the face of threatened legislation and litigation. Such efforts can make things better, but they’re just not enough, for at least two reasons. The first is, the company retains discretion over how it’s going to conduct itself. Even if Facebook did everything we wanted it to do right now, it might say in two years’ time, This experiment didn’t work for us. Our revenues fell 70 percent. We’re going back to how we did things in 2021. The decision-making remains with Facebook, as opposed to democratically accountable entities.
The second is, when we’re talking about business models, it’s important to realize that Facebook is not the only company engaging in surveillance advertising. There are others—notably Google—and thousands of others that aspire to be like Facebook and Google. Addressing Facebook alone creates the risk that the business model is still available for someone—Snapchat, Twitter, or TikTok—to exploit in the same way Facebook has. That shows the need for going after business models as opposed to simply reforming particular companies.
Bluhm: Other commentators are calling for increased privacy protections. How are privacy regulations different from a ban on targeted advertising?
Vaheesan: They’re intimately connected. Surveillance advertising is built on broad, deep, systemic invasions of privacy. The reason Facebook tracks users extensively is that it wants to provide raw material for its surveillance-advertising machine. If you prohibit surveillance advertising, the motive for Facebook to collect all this information goes away.
It’s important to think of these two things as intimately related, as opposed to surveillance advertising as one problem and invasions of privacy as another problem. The GDPR approach looked at privacy without giving due importance to the question of why these companies are collecting this information.
Instead of looking at the business model, GDPR set up this system where people have to affirmatively grant their consent before certain types of information can be collected about them. That approach has some fundamental deficiencies—it’s very easy for large, sophisticated companies like Facebook and Google to get users’ consent to track their online activities. It just doesn’t address the core question: Why are these companies collecting information at all?
Bluhm: One revelation from the whistleblower was that Facebook had internal research showing that Instagram was causing mental-health problems for teenage girls over their body image. What can laws or regulations do to address this? After all, even if the internet didn’t exist, there would still be magazines, TV shows, and movies that promote unrealistic images of female bodies.
Vaheesan: A ban on surveillance advertising would help. It wouldn’t be a panacea for this broader social problem. Addressing surveillance advertising would reduce the need for these companies to serve up addictive and other content that keeps people on these networks for several hours each day.
I don’t think we can solve the broader issues of unhealthy social norms and unrealistic expectations placed on young women through addressing surveillance advertising or even Facebook and Instagram’s outsized social power, but we can mitigate some of the negative social fallout from these platforms.
Bluhm: You talk about the kinds of content that Facebook tries to elevate. Facebook relies on an algorithm to do that, and it instructs coders to construct an algorithm to favor certain content. Could policymakers regulate algorithms, creating punishments or incentives for certain types of algorithms?
Vaheesan: The focus on algorithms is too narrow. Effectively, Facebook’s algorithms are policy and code. The algorithms are designed to advance some objective of Facebook—increasing revenues, increasing profits. The algorithm question can’t be separated from the business-model question. As long as Facebook can make money through surveillance advertising, the company is going to tell its coders, Develop algorithms that are going to attract more eyeballs—more prospective viewers of advertising. That’s how we make our money.
The interest in algorithms is too technocratic for what is really a law and policy question: How should Facebook be allowed to make money?
Bluhm: Another measure you’ve supported is a common-carrier designation, which would give Facebook the legal status of a utility, such as the companies that deliver electricity or water. How would this solve any of the problems that Facebook presents?
Vaheesan: The historical origin of common-carriage rules was that certain services should be provided to all comers on nondiscriminatory terms. The classic example would be ferries across rivers; they should be required to take all passengers on the same terms. You and I should pay the same charge for crossing a river on a ferry. The rationale was that people didn’t have alternative means of crossing the river and were vulnerable to exploitation by the ferry operator.
Given Facebook’s awesome power and the billions of people that it connects, a similar set of rules makes sense: Facebook would be open to everyone, wouldn’t discriminate between different types of content, and would have less discretionary power over what to promote and what to marginalize. The principle informing historical common carriage should play an important role in reforming the business model of Facebook. There are important details to figure out—how common carriage would be applied to a platform like Facebook—but the core principle is the right one.
Bluhm: As you say, there are details to figure out, but how would such a designation change Facebook?
Vaheesan: A common carrier is a service that has to be available to everyone on the same terms. Another historical example would be railroads. Let’s say you and I were wheat farmers. Railroads couldn’t say, We’re going to carry Michael’s wheat but not Sandeep’s wheat. They also had to provide the same rates on transportation—they couldn’t charge you a higher or lower rate than they charged me. The basic common-carrier principle is: available to all on nondiscriminatory terms.
We see it in civil-rights laws that prohibit restaurants, movie theaters, and other public accommodations from discriminating on the basis of race. They are open to all comers, and they have to provide the same terms to everyone. A movie theater cannot charge a minority more to see a movie than it charges a white person. That’s the core principle of common carriage.
In the case of Facebook, that would mean that Facebook cannot discriminate among different classes of users—it couldn’t promote the content of certain users above the content of other users. Common carriage would mean pushing Facebook closer to the dumb-pipe model that I mentioned earlier, where Facebook serves as a passive, neutral conduit, rather than an active participant deciding what every user sees.
Bluhm: The court system presents a significant potential problem for new laws and regulations. Facebook can afford the best lawyers, and there are many judges appointed by Republican presidents who would likely accept arguments that any new government measures are undue and unnecessary interference in the marketplace. Above all, conservatives have a 6-3 advantage on the U.S. Supreme Court, and these judges have a history of pro-corporate decisions. What are the ultimate prospects for these changes you propose, when it seems likely that the Supreme Court would toss out any government directives on Facebook’s business model?
Vaheesan: The courts are a real obstacle to administrative action and legislation by Congress. No one should be under any illusion about where the courts stand. They are hostile to progressive action, whether undertaken by the Federal Trade Commission or Congress. It’s important to be clear-eyed about that.
But given the existing authorities of agencies like the FTC, it’s absolutely worth trying. I certainly don’t expect the FTC or any administrative agency cracking down on corporate power to have a perfect success rate. It’s just not realistic. Even if we had a much more sympathetic judiciary, the agencies are going to lose some of these fights. But I think they will win a significant number, given existing interpretations of their powers.
The courts are political institutions that also face constraints. If they simply strike down everything that Congress and the Biden administration do with respect to monopolies, they are inviting limits on their power. We’ve seen a movement to pack the Supreme Court. We’ve seen academic debates about stripping the courts of jurisdiction over certain questions.
The courts are not sympathetic here. They’re very much of a libertarian bent, but it would be a mistake to assume that they’ll simply strike down everything as a matter of course. They will strike down some things—and we already saw the FTC’s original complaint against Facebook dismissed by a judge appointed by President Obama. There are sure to be losses, but the agencies have to use the powers they have right now and test the courts. For too long, the fear of losing in court has instilled a passivity and dormancy among the DOJ and the FTC, and that’s certain to change in the coming months and years.