Moderating (mis)information
Authors: Meyer, Jacob; Mukherjee, Prithvijit; Rentschler, Lucas
Source: Public Choice, DOI: 10.1007/s11127-022-01041-w, Feb. 2023
Type of Publication: Article
Abstract: This paper uses a laboratory experiment to investigate the efficacy of different content moderation policies designed to combat misinformation on social media. These policies vary how posts are monitored and the consequences imposed when misinformation is detected. We consider three monitoring protocols: (1) individuals can fact-check information shared by other group members at a cost; (2) the social media platform randomly fact-checks each post with a fixed probability; (3) a combination of individual and platform fact-checking. We consider two consequences: (1) fact-checked posts are flagged, so that the results of the fact check are available to all who view the post; (2) fact-checked posts are flagged, and subjects found to have posted misinformation are automatically fact-checked for two subsequent rounds, which we call persistent scrutiny. We compare our data to that of Pascarella et al. (Social media, (mis)information, and voting decisions. Working paper, 2022), which studies an identical environment without content moderation. We find that allowing individuals to fact-check improves group decision-making and welfare. Platform checking alone does not improve group decisions relative to the baseline with no moderation; it can improve welfare, but only in the case of persistent scrutiny. Combining the two protocols yields only marginal improvements. We also find that flagging is sufficient to curb the negative effects of misinformation. Adding persistent scrutiny does not improve the quality of decision-making, and it leads to less engagement on the social media platform, as fewer group members share posts.
Prithvijit Mukherjee is a Visiting Assistant Professor of Economics.