March 24, 2022

Stan Besen & Phil Verveer on a Coasian Approach to Section 230 Reform

Stan is a Senior Consultant with Charles River Associates. He's a nationally recognized expert in the economics of intellectual property rights, telecommunications policy, and telecommunications and computer standards. Stan has taught at Rice, Columbia, and the Georgetown University Law Center. And in government, he was a Brookings Economic Policy Fellow at the Office of Telecommunications Policy in the Executive Office of the President, and Co-Director of the Network Inquiry Special Staff at the Federal Communications Commission. Prior to joining CRA, he was a Senior Economist at the RAND Corporation.

Phil is a Senior Research Fellow at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School. Phil has practiced communications and antitrust law in government and in private law firms for nearly five decades. In the Obama administration, he served as Senior Counselor to the FCC Chairman, and before that, as Ambassador and US Coordinator for International Communications and Information Policy. Earlier in his career, he was an antitrust prosecutor at the DOJ, where he was lead counsel in the US v. AT&T case, and also at the FTC, and he has been chief of three FCC bureaus.

Liked the episode? Well, there's plenty more where that came from! Visit techpolicyinstitute.org to explore all of our latest research! 

Transcript

Tom Lenard:

Hello, and welcome back to the Technology Policy Institute's podcast, Two Think Minimum. It's Friday, March 11th, 2022. And I'm Tom Lenard, President Emeritus and Senior Fellow at TPI. And I'm joined by Scott Wallsten, TPI's President and Senior Fellow, and TPI's Senior Fellow Sarah Oh Lam. Today, we're delighted to have as our guests Stan Besen and Phil Verveer.

Stan is a Senior Consultant with Charles River Associates. He's a nationally recognized expert in the economics of intellectual property rights, telecommunications policy, and telecommunications and computer standards. Stan has taught at Rice, Columbia, and the Georgetown University Law Center. And in government, he was a Brookings Economic Policy Fellow at the Office of Telecommunications Policy in the Executive Office of the President, and Co-Director of the Network Inquiry Special Staff at the Federal Communications Commission. Prior to joining CRA, he was a Senior Economist at the RAND Corporation.

Phil is a Senior Research Fellow at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School. Phil has practiced communications and antitrust law in government and in private law firms for nearly five decades. In the Obama administration, he served as Senior Counselor to the FCC Chairman, and before that, as Ambassador and US Coordinator for International Communications and Information Policy. Earlier in his career, he was an antitrust prosecutor at the DOJ, where he was lead counsel in the US v. AT&T case, and also at the FTC. And he has been chief of three FCC bureaus.

Welcome, Stan and Phil. It's a pleasure to have you here. You have now waded into the content moderation debate, one of the more intractable problems of the digital revolution, with a new paper titled, “Section 230 and the Problem of Social Cost.”

As most of our listeners probably know, Section 230 of the Communications Decency Act exempts internet platforms from liability for harms that may result from materials posted on their platforms. This exemption has become very controversial due to well-known concerns about harmful content on the internet. However, there's no agreement on whether, and if so, how, Section 230 should be modified. Your paper applies what economists would call a "Coasian analysis" to the problem of harmful content on internet platforms. So, why is the Coase theorem framework a useful way to think about this issue? And maybe you can start by briefly explaining what the Coase theorem is.

Stan Besen:

I think that one of the most fun parts about this is taking a 60-year-old paper and applying it to a contemporary policy issue. It turns out that Coase has a lot of useful things to say about Section 230. We modify his analysis, somewhat, to take into account some specific features of the Section 230 debate, but we find Coase's way of thinking about this to be very useful. For those who don't know the Coase theorem story, Coase starts off with a very simple example where there is a single farmer and a single rancher on adjacent properties. The rancher has cattle that, if unfenced, may wander into the farmer's field and destroy some crops, and the question is what sort of legal liability regime should be put in place to deal with the potential for harm caused by straying cattle?

What Coase shows, the Coase theorem, if you like, says that if there are no transactions costs, it makes no difference whether or not the rancher is liable for crops that are damaged. Let's assume that it's efficient to build a fence, that is, the harm that would be caused to the farmer's crop by straying cattle is greater than the cost of building a fence. The Coase theorem says that, irrespective of the legal liability, the fence will be built if it's efficient to do so. If the rancher is liable, he will build a fence to avoid the liability to the farmer. If the rancher is not liable, the farmer will pay the rancher to build a fence. And that's the simplest sort of Coase theorem. Most people think of that as the Coase theorem, but the Coase paper is actually far more nuanced than that.
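
To make the logic concrete, here is a minimal numeric sketch of the zero-transaction-cost case. The numbers for crop damage and fence cost are hypothetical and come from neither Coase nor the paper; the sketch only illustrates that, when bargaining is costless, the efficient outcome is reached under either liability rule.

```python
# Minimal sketch of the zero-transaction-cost Coase story (illustrative numbers only).
CROP_DAMAGE = 100   # harm to the farmer if the cattle stray (hypothetical)
FENCE_COST = 60     # cost of building a fence (hypothetical)

def outcome(rancher_liable: bool) -> str:
    """Describe what happens under a given liability rule, assuming costless bargaining."""
    if FENCE_COST < CROP_DAMAGE:
        # Building the fence is efficient, so it gets built under either rule.
        if rancher_liable:
            return "Rancher builds the fence to avoid paying damages."
        return "Farmer pays the rancher (up to the damage amount) to build the fence."
    # The fence is not worth building; the crop damage is borne under either rule.
    return "No fence is built; bearing the crop damage is cheaper."

for rule in (True, False):
    print(f"Rancher liable = {rule}: {outcome(rule)}")
```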

It actually starts there, but in fact, it considers a range of other factors. So, let me just talk briefly about two ways in which Coase modifies the simple analysis and two of the things that Phil and I have added to it for the Section 230 analysis. First of all, Coase says that when there are transactions costs, which there almost certainly are in the real world, the initial assignment of property rights makes a difference. In the no transactions cost world, it doesn't make a difference. So, let me just read a short quotation from Coase to make the point. He says, "One arrangement of rights may bring about a greater value of production than any other, but unless this is the arrangement of rights established by the legal system, the costs of reaching the same result by altering and combining rights through the market may be so great that this optimal arrangement of rights, and the greater value of production that it would bring about, may never be achieved."

So, first Coase introduces transactions costs and says, you have to take them into account. The second thing that Coase talks about, a variation from the initial story, is when there are a large number of potential victims, instead of a single farmer. If there are lots of farmers, each of the farmers could experience a small amount of harm but collectively, a large amount. In such cases, Coase says, direct regulation may be appropriate, and let me again read a short quote: "There is no reason why, on occasion, such governmental administrative regulation should not lead to an improvement in economic efficiency. This would seem particularly likely when a large number of people are involved and in which therefore the costs of handling the problem through the market or the firm may be high."

So, those are the two modifications of, if you like, the Coase theorem. What Phil and I have done is to consider two additional features that have to be taken into account if you apply this analysis to Section 230. One of them, of course, is that there are two potential sources of liability: both the internet platform and the source of the information. Under Section 230, the source is potentially liable, but the platform is not. So, there's the question of whether or not that is an efficient initial assignment of rights. The economic equivalent of the fence in the Coase theorem story is content moderation. It's the activities undertaken by an internet platform or information source to limit the amount of harmful content that is disseminated. At least one person has used the colorful term "content immoderation," arguing that, in fact, the incentives of some participants on the internet, either platforms or information sources, or both, may be such that they have an incentive to disseminate harmful information because that, in fact, attracts people to their site. So, they're balancing not only the potential liability and the costs of content moderation, but also the potential benefits from immoderation. Phil and I take those other factors into account to further modify the Coase analysis.
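
As a rough illustration of that added factor, here is a toy sketch of the platform's decision calculus. The function name and all figures are invented for illustration and are not taken from the paper; the sketch only shows how an engagement benefit from harmful content, combined with near-zero expected liability, can tilt a platform away from moderating.

```python
# Toy sketch (not from the paper): a platform weighs expected liability and
# moderation costs against the engagement revenue that harmful content can
# generate. All numbers are hypothetical.

def platform_moderates(expected_liability: float,
                       moderation_cost: float,
                       engagement_gain_from_harmful_content: float) -> bool:
    """Moderate only if avoided liability exceeds moderation cost plus foregone engagement revenue."""
    return expected_liability > moderation_cost + engagement_gain_from_harmful_content

# With Section 230 immunity, expected liability is roughly zero, so moderation
# happens only if it is privately profitable for other reasons.
print(platform_moderates(expected_liability=0, moderation_cost=10,
                         engagement_gain_from_harmful_content=5))   # False

# Shifting some liability to the platform can flip the decision.
print(platform_moderates(expected_liability=50, moderation_cost=10,
                         engagement_gain_from_harmful_content=5))   # True
```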

Tom Lenard:

So, what does the Coase theorem imply about whether platforms should have liability?

Stan Besen:

That is an empirical question, but Phil and I conclude, for a variety of reasons, that it would be better in many cases, if you want to limit the amount of harmful information, to make the platform liable, perhaps in addition to, or instead of, the information source.

Tom Lenard:

Is that because platforms can mitigate the harms at lower cost or…?

Stan Besen:

Well, there are a variety of reasons. I have a shortlist here. It's often hard to identify the source and bring an action against them. It's a transactions cost issue. Information sources may be judgment proof, unlike many of the platforms, and so it wouldn't be much good to sue them. And, finally, there may be economies of scale in content moderation, which makes platforms the more suitable party to be liable for the dissemination of harmful information. And in fact, despite their apparent immunity from liability, there have been a couple of cases, we cite them, in which actions were brought against internet platforms, some successful, though most less so. But on balance, we think that's probably likely to be the case, that making platforms liable would be an improvement, a better initial assignment of property rights, although we're very careful to point out that we don't think that's a panacea.

Tom Lenard:

Of course, there's, you know, such a great variety of harmful stuff that could be posted on the internet. Some of it might be easier to sue over than others: everything from, let's say, copyright infringement and piracy, which are affected by Section 230, all the way to hate speech and fake news, and obviously different people have different views as to what fake news is. I mean, does the same analysis apply to all potential harms?

Stan Besen:

I think I should let Phil take that one.

Phil Verveer:

Well, Tom, I think that the short answer is that, as you suggest, there's a very large variety of potential harms, and the question of how best to deal with the particular negative externalities is a fair one. A lot of the proposals that have emanated from our Congress have tried to identify specific kinds of things for which they would like to expand the liability exposure of the platforms. And so, my guess is we're not likely to have a one size fits all kind of arrangement. There are plainly, if you will, gradations of harm. There are plainly situations where the present arrangement, which relies upon individual initiative, that is, individuals suing in tort, to try to curb the harm, is not very plausible. There are some other situations where perhaps it could be plausible. We have at least one notorious example of that, perhaps. But it is very, very likely, as Stan said, that this is not a one size fits all set of arrangements. And it's also very, very likely that trying to find remedies, if we feel that remedies are necessary, is going to be a continuing activity. It's not something that's going to be resolved in any steady or stable state.

Stan Besen:

And we would expect, as the cases Phil mentioned suggest, that the various proposals for legislation do contemplate various kinds of what we call carve outs. Things that, in fact, a platform would be liable for if it disseminated that information, but might not be liable for if it disseminated other information that had not already been designated as harmful.

Scott Wallsten:

So, I have a couple of questions. One is, is it possible, I can't believe I'm going to ask this, but is it possible for transactions costs to be too low? I mean, would you make it easy for anybody to sue a platform for anything that they thought harmed them? You're talking about carve outs, and you have to decide what a harm is, but it seems like everybody's going to have something that they consider to be a harm, and they have one easy target.

And the second is, you know, how do you relate this to First Amendment issues? I mean, the First Amendment basically says, if you think about it as a platform, if you're going to err on the side of taking too much down or leaving too much up, we leave too much up. Are you saying the opposite?

Stan Besen:

I'll actually let Phil talk about the First Amendment in a moment. I'll just say one thing, and Coase makes this point. Many of the kinds of harm that we're interested in, the ones that are really harmful, involve a large number of people each affected to a small degree, but collectively the harm is very large. In those cases, we usually worry about the opposite of the problem you describe. We worry that no one will in fact have an incentive to bring legal action. And in fact, that's why people, particularly the Europeans, have thought about alternatives that would not require direct legal action by somebody who thought they were harmed. A couple of the cases that we mention are in fact ones in which an individual or group of individuals are identifiable victims, or claim victimhood from harm, and they have in fact brought suit. But we think the bigger problem in many cases is going to be the free rider problem. Even though the collective harm is very large, no individual is willing to incur the cost to bring legal action against the source. Now, Phil's going to tell you about the First Amendment.
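
A small illustrative sketch of that free rider point, with invented numbers, shows how the collective harm can be large while no single victim has an incentive to sue.

```python
# Illustrative sketch of the free rider problem: collective harm is large,
# but no single victim's harm exceeds the cost of bringing suit.
# All numbers are hypothetical.
NUM_VICTIMS = 100_000
HARM_PER_VICTIM = 2.0        # small individual harm
COST_OF_SUIT = 10_000.0      # cost any one victim would bear to litigate

collective_harm = NUM_VICTIMS * HARM_PER_VICTIM
any_individual_sues = HARM_PER_VICTIM > COST_OF_SUIT

print(f"Collective harm: {collective_harm:,.0f}")                        # 200,000 -- large
print(f"Any individual has an incentive to sue: {any_individual_sues}")  # False
```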

Tom Lenard:

And also maybe, Phil, if you could tie into what Stan was saying: could you give an example of a type of harm where the collective harm is large and the individual harm is not worth anybody suing over, but that does not run afoul of the First Amendment?

Phil Verveer:

Well, yes, now at this point, we've got a lot of questions floating through the air. So, let me see if I can answer one or two of them. First of all, Scott's concern that you might have over-enforcement if you let anybody sue for anything is of course something that we see in the policy realm. You see it in the antitrust world, where concerns about treble damage actions can, in at least some circumstances, perhaps lead to over-enforcement through sort of [inaudible] litigation. One type of proposal that's floated around in Congress has been to expand the ability of the government, but not private individuals, to sue over certain kinds of harms. And that presumably would be a way to mitigate somewhat the anxiety about excessive enforcement, but to permit the Justice Department, the Federal Trade Commission, and State Attorneys General to sue in the event that something rose to some appropriate threshold, perhaps on a parens patriae kind of arrangement.

The question is, what kinds of things might be a situation where an individual wouldn't find it useful to sue, but where, collectively, you might get an overall set of harms? One might be misinformation about the efficacy of vaccines, for example, or the efficacy of wearing masks in the context of a pandemic. There could be things that run very counter to standard scientific understandings that get disseminated, but where, in any individual case, showing causality in terms of damage or whatever would be so daunting that nobody would sue unless they had an ideological reason for pursuing it; as a normal economic expectation, you wouldn't find a suit likely.

Yet, it could be something that harms a great many people. And in fact, you know, when the historians get done with our pandemic, presumably decades or centuries from now, my guess is that part of the story will be that there was misinformation about certain… or about the things that we should do to protect ourselves, and it probably made things worse for a significant number of individuals. That'd be kind of an easy example.

Tom Lenard:

I mean, obviously I'm not a constitutional lawyer, but if you had something which basically said you can't debate the efficacy of a particular drug, because that's misinformation, doesn't that run afoul of the First Amendment? Even if we all agree right now, or most of us agree, that vaccines are effective, there may be some new scientific development someplace that raises questions. I mean, you know, some of us are old enough to remember thalidomide, a drug that was approved by the FDA, and then later on was found to cause a lot of harm. If you had something like a prohibition on debating that…

Phil Verveer:

I agree with that. You're right. And this is the First Amendment, which has a very broad sweep in terms of our society. One, incidentally, not shared by presumably any other country on the face of the Earth, including a lot of them that have similarly strong attachments to freedom of expression. But our particular approach to this is one that is either literally unique or close to it. And you get an argument, a debate that goes on and on and on in the world of constitutional law, about whether or not we should deal with these things as we more or less do these days, saying that, "Well, the right remedy for false speech, bad speech, harmful speech, is more speech." But you also get something of a tradition, from other Supreme Court decisions, to the effect that the Constitution is not a suicide pact.

So, you get this kind of tension that will always be there when it comes to issues of freedom of expression. And indeed, the question of freedom of expression, broadly stated, is one of the things that at this point, I think, politically prevents much in the way of amendments of Section 230, because there are anxieties that arise with respect to any proposed remediation or amendment, that it will somehow or other be harmful to the ability of people to freely express themselves. So, the answer with respect to the First Amendment, of course, is that there isn't any very convenient answer in terms of government compulsion. Now, there are some other things that one ought to think about in connection with this, which is that, as noted, others who have very strong attachments to freedom of expression are not limited that way. And you can see developments now in the European Union, and in the United Kingdom, that will in fact end up having a practical effect on the contradictions we're talking about now, that will proceed without that particular inhibition. And that will have some ripple effect here. It's going to, without doubt, have some effect on the complete free flow of information. Whether or not our society is better off for that is something that, again, people will be able to debate for a very, very long time.

Stan Besen:

Can I say a word here? Just as the non-lawyer here, we all know the old example: the First Amendment doesn't protect your right to shout fire in a crowded theater. Okay, and I suppose the closest thing to that on the internet would be somebody using the internet to, say, foment a riot or riotous behavior, or that sort of thing. And we don't expect to be able to deal with that by private voluntary action. I'm not worried, in that case, about over-suing the source. And that's why we say that regardless of any modification of the liability under Section 230, anything that shifts some liability to platforms would probably have to be accompanied by carve outs and greater specificity about the obligations that platforms have.

Scott Wallsten:

So, still I worry about how one decides what is harmful and what isn't. So, for example, Florida now has this "don't say gay" law, right? So, the Neanderthals in their legislature think that people are harmed by that kind of conversation, right? So, in their minds, that kind of discussion is also harmful. Can they then sue Facebook and so on for having discussions about LGBTQ+ issues?

Stan Besen:

I think you're right to be worried. I remember actually, I was in the audience of one of your conferences where I said, I was worried about something. And you said, I had the right to be worried. [Laughs]

And we have the right to be worried here. We don't want to claim that we have the answer to this question in our paper. Our paper's contribution, I think, is a way to think about and parse the issues, to help in drawing the lines. But we are convinced that drawing the lines will not be easy, will take a long time, and will undoubtedly be very controversial. So, you're right to worry.

Scott Wallsten:

I do a lot of that.

Tom Lenard:

You know, even before one gets into issues of how hard it is to make the carve outs that you're talking about and draw the lines, there are a lot of people, or certainly a significant number of people, who think that Section 230 and its liability protections were necessary for the development of the internet, basically. What are your views on that?

Stan Besen:

Although the internet seems to have done pretty well, I don't know whether Section 230 was a necessary condition for that success. Certainly, the internet is very different now from what it was when it first started. And so, we've learned some stuff, and maybe that could change our view about whether or not Section 230 is desirable. It certainly was the case when Section 230 was adopted, and we say this, that it was designed primarily as a device for encouraging the development of the internet, but these other issues have now arisen. They're in the newspapers almost every day, not just in the journals, and undoubtedly something is likely to be done about them. One of the things I think is important is to think about legal liability together with a number of regulatory interventions. I think if you were a platform, and you were subject to potential liability, you would be much happier if your obligations were specified more clearly in the law. And that's why I think things like clear carve outs, not that they're going to be easy to do, and clear statements of the content control obligations would be something that, if I were a platform, I would prefer. That is, if I were Facebook, I would want to know what my obligations were, if I were in fact subject to liability. They might actually be happy with that.

Tom Lenard:

No, I think they might be happy with that, but it doesn't really get around the problem that it's pretty difficult to figure out what those should be.

Stan Besen:

Oh, we completely agree with that.

Tom Lenard:

Right. Right. Who would you envision doing this? You know, rulemakings at some agency, the FTC, or some new agency, a new internet agency?

Stan Besen:

Phil has been, I think, among a group of people proposing such an agency. So, we’ll let him talk about it.

Phil Verveer:

Yeah. As you mentioned, Tom, I've been active at the Kennedy School at Harvard with some others, as we've tried to figure out whether or not there are social controls that might be appropriate to the big platforms. And what we've come away with, beginning with the competition issues, not these content moderation, negative externality issues, but beginning with the competition issues, we came down in the same place as many, many others who've looked at this, in Brussels, in London, at the University of Chicago, and with our project at Harvard, which is: you probably can't do this successfully without a regulatory agency. There are a lot of reasons behind that, and one could get into the minutiae given large amounts of time. But when you then begin to look at this part of it, the part we're talking about, which is the negative externalities that flow out of the particular business models, the open mic business models that we're talking about, the questions become, as you've indicated, as we've discussed, extraordinarily difficult.

They become extraordinarily difficult for reasons of trying to define adequately what it is that you're concerned about. They're very difficult because of First Amendment limitations here in the United States, and also because the nature of the harms will almost certainly evolve. When Section 230 was adopted by our Congress, the notion that foreign interference with our elections or with our political process could be one consequence was something that, my guess is, probably no one imagined.

To take a more homely example, consider cyberbullying. You know, how appalling is it that children, teenagers probably especially, are the object of brutal bullying, where it turns out that at least a tort action against a platform is going to be unavailable. Right? And indeed, because of the odd scope of Section 230, if somebody has repeated a, let's say, defamatory or harmful posting about an individual, about a teenager, for instance, the person who has done that is a so-called user, and that user is immune.

So that if a maladjusted kid writes something awful about a classmate on a social media platform, and then three or four other classmates decide it'd be fun to just repeat it verbatim, there's nothing to be done about those three or four other classmates. Now, these are the kinds of issues, if you think about this: on the one hand, you know, enormously significant geopolitical issues, and on the other hand, really unfortunate things that happen right in the neighborhood to innocent individuals, where, because of the apparent inability, frankly, to deter this kind of activity through the normal judicial process of individual actions, in other words, the pure Coasian approach, you've got something that is ongoing, that we haven't been able really to control in any meaningful way, haven't been able to deter or discourage in any very meaningful way. Now, some smart people addressing some of these things say one way to deal with this would be for Congress to make more things criminal, for Congress to make more kinds of activities illegal.

At which point there might still be First Amendment debates, but those First Amendment debates will be dampened down a fair amount. And it may be we want to do something like that, but there's this whole range of problems where, when you try to look at this through a Coasian lens, you come away thinking that, notwithstanding the fact that Section 230 is in many ways a wonderful example of Coase's influence, his enormous influence over law and policy over the last 40 or 50 years, it turns out that, at least in the pure case, it doesn't work very well. And so, as we've been discussing, remedies are very difficult to articulate in any completely adequate way, but it seems clear we're going to need to do something that looks a lot like some kind of regulation, some kind of ongoing activity. You cannot conceivably believe that expecting Congressional enactments as new issues come along would be an effective, pragmatic, practical way of contending with these things. We're going to have to empower somebody who can act more quickly than our Congress can to try to deter these things, if in fact that's what we want to see happen.

Sarah Oh:

Are there analogs in other areas of regulation to this Coasian analysis, like pollution and the EPA? Or have you thought about other areas of regulation that have a similar dynamic?

Stan Besen:

Well, I would say that this isn't the first time I've used the Coase theorem. I was on the other side of the argument the last time; that had to do with copyright liability for cable television, where there was a compulsory license. And I argued in that paper, which Ronald Coase actually published, that the licensing could have been handled through the market. But as a general matter, as Phil and I have both said here, it's hard in any of these cases. And the language I read from Coase makes this clear: if you have any situation in which harm is widespread, but no individual party that is harmed has an incentive to bring suit, you have to have regulation. So, you might think of air pollution. We're not going to think we can resolve the problem of air pollution by having individuals sue the factory for the harm they individually experience. No one thinks that's a good idea. And so, there are lots of cases involving externalities. And, by the way, we share the skepticism that you guys have expressed. We do not think that this problem is easy to resolve. We do think, however, this is a useful way to frame the question and to raise the right issues.

Tom Lenard:

Well, one other thing, and you don't really address this directly, but you kind of hint at it in the paper, is when you talk about the incentives of the platforms, in some cases, to disseminate or magnify the dissemination of harmful information. Now, there is a school these days, and maybe it's a growing school, that thinks that the root of a lot of the evils, this being one of them, is the advertising-supported internet. You know, that the major platforms, or some of the major platforms, are supported by ads, and so they have the incentive to look for more eyeballs to see the ads. And so, their solution, perhaps, to problems like this, as well as others, privacy-related problems, is to somehow, I don't know how exactly you do this, but somehow prohibit the advertising-supported… that business model, basically.

Stan Besen:

Yeah, actually that has been a proposal. I forget now who the proposer was, I think a Nobel Prize winner: some sort of tax on advertising in order to discourage this…

But certainly, it is the case that having an incentive to attract more viewers, more users, does create economic incentives, in some cases, to put up harmful materials. By the way, those aren't the only ones. We give some much more prosaic examples of journals that are prepared to publish anything for a price, even if the stuff is wrong, because the author pays the journal to do so. There apparently are some internet sites that libel people in the hope that the person who is libeled will pay them to take the material down. It's not only advertising, but that's certainly part of it. There are different schools of thought about how serious content immoderation is. Some people think it's incredibly serious. We quote somebody in the paper who thinks it's not a big problem, but even they think that there are instances in which harmful information is disseminated.

And you're right, it is because attracting more people to the site raises the profits of the site. As we said, that kind of consideration is not present in the Coase theorem story. The analogy would be if the rancher benefited when his cattle strayed, and they ate some of the farmer's crops and got fatter as a result. In that case, he would consider not only the cost of building the fence, but also the foregone benefits of his cattle grazing.

Phil Verveer:

Some of the proposals to deal with the advertising, and therefore the effort at keeping people engaged, try to approach this by, in certain ways, limiting or affecting the algorithms that tend to present to people things that will keep them engaged. And so, if you could demonstrate that the harm was in fact, in some way, a function of the amplification that the algorithms produce, that would be a reason to exclude the limitation of liability, or, more simply, to hold a platform liable. There are various flavors of that particular approach, but that's one of the approaches we've seen surface in Congress as well.

Stan Besen:

Phil reminds me of the fact that we say there are at least two kinds of fixes being considered in the legislation before the Congress. One is carve outs, and the other is greater specificity about the content moderation obligations of a site. The Europeans, in fact, have some of this as well, where they are trying to look closely at what sites are doing and to specify what good practices are. And that's, again, another potential tool that might be considered here.

Phil Verveer:

You know, there's one other thing that probably warrants a brief mention, and that is that the original judicial decision with respect to Section 230, a Fourth Circuit case called Zeran against AOL, arose about a year after the statute was adopted. And that decision gave an extremely broad reading of the Section 230 protections. It has been followed just about uniformly ever since, including in cases that in many respects have facts so appalling, it's hard to believe that the courts involved didn't try to find some way to escape the precedent. In a couple of cases they did, but by and large, they haven't. And again, with facts that are so, as I say, so appalling, that it's just hard to believe that human beings do these things to one another.

In recent times, Justice Thomas has now on three occasions indicated that he believes the Supreme Court should take a look at Section 230, because of a view that it has been interpreted unnecessarily broadly or misinterpreted in some way. Now, the Court obviously hasn't done that. It hasn't touched this, notwithstanding certain opportunities to do it, in the 25 years that the provision has been around. But in addition to all the Congressional proposals, there is, of course, some possibility that one day the Supreme Court will take a look at it, and perhaps somewhat modify the reading of the statute that we've had now since that original Fourth Circuit decision.

Stan Besen:

Actually, Phil reminds me of something that we do talk about. There were a couple of cases successfully brought against platforms. Why did that happen? In those cases, we say, the platforms were insufficiently passive in re-transmitting information provided by others. So, they in fact incurred liability by interacting with the information provided by the sources. I suppose they may have learned their lesson by now. But again, if you play a more aggressive role, you may in fact find yourself liable where you otherwise might not have been. And that's why it's really important, I think, for platforms and everyone else, to clarify what, in fact, their liabilities and opportunities are.

Tom Lenard:

Well, I thought Section 230 allows platforms to moderate and take down content, but…

Stan Besen:

They can take it down. In these particular cases, they interacted more than that. They added something extra, and that was considered, and this is not a legal term, insufficiently passive. And those, in fact, are the small number of cases in which they were found liable. Pure takedown? Not a problem, as I understand it.

Phil Verveer:

Yeah. In the particular cases, the interactions, to most of us, would seem to have been so trivial that if you were a legal realist, you'd say the courts involved were trying to escape the possibility of holding the defendants harmless. But there have only been a couple of cases like that. And, as I say, the great percentage of the decisions are very much to the contrary, in the face of just absolutely remarkable factual situations.

Tom Lenard:

Well, this has been really a very interesting discussion. Sarah and Scott, do you have any additional questions you'd like to bring up?

Scott Wallsten:

No, this has been really interesting.

Stan Besen:

I think that we can be confident that this issue will be around for a long time.

Tom Lenard:

Yes, that is correct. Well, I want to thank you again. This was really a great discussion, and we appreciate it very much.