WHO recommendations and the information underclass

My concerns about restricting content that contravenes authorities, and implications for content moderation

I.

So YouTube says: “Anything that would go against World Health Organization recommendations would be a violation of our policy” (22 April 2020; h/t, apparently, Tucker Carlson). (Technically, “masks don’t work” is not a recommendation, while “don’t wear a mask right now” is; but later on we’ll see this technicality is contradicted by praxis. From Reason: “YouTube said Paul violated its ‘COVID-19 medical misinformation policy,’ which among other things forbids ‘claims that masks do not play a role in preventing the contraction or transmission of COVID-19.’”) Wikipedia says the CDC first recommended that the general public wear masks on 3 April 2020. Meanwhile, Slate Star Codex had concluded by 23 March 2020 (realistically probably earlier, and the article links to others who were discussing this before then) that in situations where there is no mask shortage it would be reasonable to wear masks in crowded public settings.

Anything that would go against World Health Organization recommendations would be a violation of our policy. Cue the WHO on 31 March 2020: “‘There is no specific evidence to suggest that the wearing of masks by the mass population has any potential benefit. In fact, there's some evidence to suggest the opposite in the misuse of wearing a mask properly or fitting it properly,’ Dr. Mike Ryan, executive director of the WHO health emergencies program, said at a media briefing in Geneva, Switzerland, on Monday.”

Now, of course I’m not criticizing the CDC for not having been right from the start... although I am irked that random bloggers got it right before they did. The fact remains that the authorities won’t always be right, about science or in general, and they’ll have to change course and correct themselves. Nevertheless, the pandemic has disabused me of my erstwhile excess of trust in the authorities. And the CDC’s flip-flopping sure has eroded trust in authorities, but for me at least that’s a good thing.

But because the authorities aren’t always right, we can’t treat their word as the word of God. Anything that would go against World Health Organization recommendations would be a violation of our policy: picture YouTube in late March taking down a YouTube version of Slate Star Codex for saying masks probably do actually work. And if the health authorities and the government (and, by extension, anyone who takes down content that goes against World Health Organization recommendations and is thus a violation of our policy) are fallible like everyone else and must occasionally backtrack, and if this happens often enough on issues where a recommendation to do or not do A affects a large enough number of people... then suppressing dissent from the authorities’ recommendations could itself be harmful.

(The thing about extreme misinformation, like people hypothetically drinking bleach, is that even the most misinformed people, provided they don’t hold membership in Kool-Aid cults, are smart enough not to drink bleach or do other things that would be immediately and obviously harmful to themselves. It gets more ambiguous when the harm is more likely to fall on other people, or is more indirect, delayed, and uncertain (e.g., people are much more likely to refuse vaccination than to drink bleach).)

II.

So we say content must be restricted or—dare I say it?—censored because we, educated journalists and certain members of the tech industry, know better, and must protect the misinformed uncritical thinkers from their own folly. (And not all of this is wrong.) But we too are wrong often enough that it’s dangerous.

As one’s critical thinking, knowledge, time, resources, and inclination (let’s put all of these under the umbrella of “capacity”, or C) approach infinity, the risk misinformation poses as a function of C (which we define as r(C)) approaches zero. (As one gains more knowledge, one should also realize that it is important to obtain correct information, and thus gain the inclination to do so; and r(C) never quite reaches 0, because of infohazards and because we must ultimately rely on trust in our sensory evidence.) In symbols:

lim_{C → ∞} r(C) = 0

So if you’re more informed and have more skills and resources and time with which to evaluate claims, you’re less susceptible to risks from claims that turn out to be wrong. If my argument, which eventually leads to the conclusion that we should not restrict information, contains an error, it’s in assuming that most YouTube users have high C-values, and are the sort of people who read Covid papers in their spare time anyway. But if most YouTube users have low C, and are susceptible to acting on bad advice, then maybe it is more reasonable to restrict the flow of information among them, to protect them (and others) from themselves.
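To make the shape of r concrete, here is a minimal sketch in Python. The specific decreasing form r(C) = 1 / (1 + C) is my own assumption, chosen only to illustrate the limit; nothing in the argument depends on that choice.

```python
# Toy illustration only: the functional form r(C) = 1 / (1 + C) is an
# assumption made for this sketch, not something the post argues for.

def r(capacity: float) -> float:
    """Risk that misinformation harms a person with capacity C (assumed form)."""
    return 1.0 / (1.0 + capacity)

for c in [0.1, 1, 10, 100, 1000]:
    print(f"C = {c:>6}: r(C) = {r(c):.4f}")
# As C grows without bound, r(C) approaches 0 (though it never reaches it).
```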

Thus, as C approaches 0 from the right, we have roughly two choices: treat everyone the same as we treat the C = ∞ situation, or restrict information in inverse proportion to C. Either way, it’s not good to be one of the people with bad critical thinking skills. If we don’t restrict information or alter its flow (e.g., by changing recommendation algorithms), people start to believe false things that are harmful to them, and suffer the consequences. If we do restrict information, we create a [two-tiered system] in which the fate of the underclass is tied tightly to the fate of the authorities: if the only information the underclass can receive is the authorities’ current opinion, then any time the authorities are wrong in the right way, a lot of people are going to die. Where is the escape hatch when the authorities are wrong? We don’t want this, and so one has to increase one’s critical thinking ability somehow, which is a whole other bucket of writhing worms.
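To see the “damned either way” point in miniature, here is a hedged toy comparison of expected harm to a low-C person under the two regimes. Every number and functional form below (the assumed r, the probability that the authorities are wrong, the harm magnitudes) is an illustrative assumption of mine, not a claim from this post.

```python
# Toy model of the two regimes for a low-capacity (low-C) person.
# All parameters below are illustrative assumptions.

def r(capacity: float) -> float:
    """Assumed risk of acting on harmful misinformation, decreasing in C."""
    return 1.0 / (1.0 + capacity)

def harm_unrestricted(capacity: float, harm_if_misled: float = 1.0) -> float:
    """No restriction: expected harm tracks the person's own susceptibility r(C)."""
    return r(capacity) * harm_if_misled

def harm_restricted(p_authority_wrong: float, harm_if_authority_wrong: float = 1.0) -> float:
    """Full restriction: the person hears only the authorities' line, so expected
    harm depends entirely on how often (and how badly) the authorities are wrong."""
    return p_authority_wrong * harm_if_authority_wrong

low_c = 0.2  # someone with little time or skill to evaluate claims
print("unrestricted:", harm_unrestricted(low_c))                 # ~0.83
print("restricted:  ", harm_restricted(p_authority_wrong=0.3))   # 0.30
# Neither outcome is good for the low-C person; which is worse depends on
# parameters we can only guess at, which is the point of this section.
```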

But in places where most people can be trusted to evaluate information with some degree of coherence, censorship is less necessary to protect people, and dissent is more tolerated.

III.

YouTube temporarily banned Rand Paul (h/t, unfortunately, Tucker Carlson... again) for talking about cloth and other masks not working. This link is timestamped for the relevant excerpt, but I can’t get the full video to run on my computer right now. Transcript of the excerpt:

Most of the masks you get over the counter do not work; they don't prevent infection. Saying cloth masks work when they don't actually risks lives, as someone may choose to care for a loved one with Covid while only wearing a cloth mask.

Now, earlier I was talking about how masks obviously work, but “cloth masks might not work, and might pose risks” is a fairly reasonable and, moreover, Overton-y opinion that was discussed back in March 2020 on SSC; indeed, one of the studies Paul cites is the same one discussed in that SSC post. Of course, Rand Paul’s tone and style are far less nuanced and accurate than a Slate Star Codex article, and some might consider his motives suspect, but this isn’t the kind of thing I would consider obviously problematic, at least; his claims can easily be discussed and debated. But anything that would go against World Health Organization recommendations would be a violation of our policy. Rand Paul would have been “right” (that is, aligned with the WHO) in March 2020, but he’s “wrong” now, and thus violates the policy. Scott Alexander would have been “wrong” in March 2020, but he’s “right” now; imagine if YouTube’s policy had been in effect when he, or a less charitable person, made his claims in late March. (I’m almost inclined to appreciate Tucker Carlson’s remark about the time “before they repainted the slogans on the side of the barn”.)

(An aside: Consider a blanket ban on YouTube videos that contain an insufficient amount of nuance, charity, and/or civility. This would apply both to people on the much-maligned side and to their foils who swear at people who don’t wear masks. Perhaps it would elevate the discourse, although it would certainly limit free speech; and if people got their channels taken down for thoughtless off-the-cuff missteps, it would probably be a net negative, and would create the same [culture of vigilance that the threat of cancellation encourages].)

IV.

I’m told that John Dewey held that the path to ascertaining truth is through a community of inquirers who evaluate claims and seek the truth.

It’s hard for us to immediately distinguish false things from important true things, so if our goal is to consistently determine the truth, we cannot much restrict the discussion of ideas. Thus, in a real community of inquirers, few topics are off-limits, and dissent is encouraged. But in order for such a community to conduct itself satisfactorily, it must comprise people who are skilled in critical thinking and unlikely to go out and act hastily on potentially incorrect thoughts. That is, it’s the high-C tier who will be privy to the truth. (It was suggested to me that perhaps the New York Times’ purported “extremists” among Slate Star Codex commenters are not moderated out of existence because the SSC comments are, or at least are considered by their administrator to be, closer to a community of inquiry.)

With regard to the underclass, they are damned if we do and damned if we don’t. If authorities and platforms do not restrict information, the underclass act on incorrect information and risk harm to themselves and others (and those who aren’t part of the underclass are also vulnerable if they’re in the wrong place at the wrong time). But if authorities and platforms do restrict information, the underclass hear only the authorities’ narrative, and have little choice but to suffer when the authorities are terribly, awfully wrong.

Whether we are to create a low-information underclass depends on whether Google, Facebook, and all the rest deem their users to be at risk of algorithmic radicalization or, rather, part of a community of inquirers.

(Note: to be clear, despite the wording, in this post your humble writer is agnostic as to which of these choices is preferable; seeks to illuminate a problem more than to prescribe a solution; and does not think it is clear that the “community of inquirers” choice is better, or that the other is.)