Beyond performative transparency: lessons learned from the EU Code of Practice on Disinformation
Abstract
The EU Code of Practice on Disinformation has attempted to address disinformation through a self-regulatory model, with limited success. We analyze 1114 self-reported actions from Code signatories (Google, Meta, Microsoft, Mozilla, TikTok, and Twitter), drawn from 47 monthly transparency reports addressing COVID-19-related disinformation. Although the reports were designed to provide meaningful, transparent reporting, assessing each platform’s disinformation actions proved difficult due to repetition, vague descriptions, and a lack of quality data. Platform actions were often reported in a promotional tone; some were irrelevant to COVID-19 or to disinformation. We argue that the role social media platforms play in data collection, and the social outcomes that result from these data extraction processes, need to be questioned. Drawing on the concept of data colonialism, we call for transparent access to data, on the grounds that what platforms treat as their property reflects a “commercially motivated form of extraction” rather than a “naturally occurring form of social knowledge.” European debates about regulating online disinformation must be set within a broader perspective on regulating the digital environment as a public infrastructure. Policymakers can achieve better civic and democratic outcomes by focusing on governing the digital environment through, for example, robust competition, data portability, and interoperability rules. Such measures can break Big Tech’s dominance while incentivizing new and better services for citizens.