COVID-19 demonstrates the potential power of tech companies as a force for good, but also that they have largely devised their own rules in vacuums of both standards and accountability. A new digital deal is both essential and inevitable.
A man uses a Swedish version of the COVID-19 Symptom Tracker app on his smartphone in Stockholm, Sweden, on 29 April 2020. Photo: Getty Images.
Misinformation and Disinformation
The coronavirus pandemic, labelled an 'infodemic' by the World Health Organization, has demonstrated the power of false information, whether created or shared without intention of causing harm (misinformation) or knowingly generated to cause harm (disinformation).
The peddling of false claims online and on television has precipitated arson attacks on 5G mobile phone masts across Europe: over 50 in the UK, at least 16 in the Netherlands, and further attacks in Belgium, Cyprus, Italy and Ireland. Mere discussion of a vaccine is stoking the anti-vaccination movement. The scale of the problem is striking: Facebook alone has placed warning labels on around 50 million pieces of content, and as of mid-April COVID-19 misinformation on Facebook had been viewed an estimated 117 million times.
At a time when two billion people are at home and largely reliant on the internet and social media for news and information, the platforms have stepped up to the responsibility of curating the content they host. These steps, admittedly imperfect, centre on removing misinformation or labelling it with warnings, and on actively promoting reliable information. It is notable how closely tech companies have been working with public health authorities and governments in directing their crisis efforts.
This episode debunks the myth that truth carries the most currency in the marketplace of ideas, such that disinformation need not be controlled. It demonstrates both the formidable power of false information as a weapon in public discourse, and the strength of the tech companies' armoury against it. The potential of both weapon and armoury cries out for a skeleton framework of standards, reflecting the values of human rights and democracy, to prevent either being wielded in ways antithetical to those values.
Privacy

At the core of the debate on privacy and COVID-19 tracing apps is whether their purpose is only to inform individuals of risks they may face, or additionally to 'centralize' data on the spread of COVID-19 so that governments may understand and tackle the extent of exposure in the community. The need to protect privacy, and the measures required to do so, have been carefully debated.
On the question of purpose, there is a difference of view between Apple and Google on one side and some governments, including the UK and France, on the other. The Apple/Google Exposure Notifications System provides an API with which public health authorities can develop their own contact tracing apps; those apps will neither identify users, gather location data nor permit use of the data for targeted advertising.
The data remains 'decentralized', ie it passes between phones rather than being collated at a central hub. As of 20 May the API had been requested by 22 governments, but it is insufficient for those governments which see one purpose of the app as being to collect centralized data. Apple will not currently permit its technology to enable centralized data collection.
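To make the distinction concrete, the sketch below illustrates the decentralized model in Python. It is a heavily simplified illustration, not the actual Apple/Google protocol: the Phone class, the key derivation and the in-memory 'publication' of diagnosis keys are all assumptions made for this example.

```python
# Minimal sketch of decentralized exposure notification (illustrative
# only; the real system uses its own cryptographic scheme and a
# Bluetooth Low Energy transport).
import os
import hashlib

def derive_id(key: bytes, interval: int) -> bytes:
    # Derive a short-lived anonymous identifier from a secret daily key.
    return hashlib.sha256(key + interval.to_bytes(4, "big")).digest()[:16]

class Phone:
    """One handset: keys stay on the device, matching happens locally."""
    def __init__(self):
        self.daily_keys = [os.urandom(16)]  # secret; uploaded only on diagnosis
        self.observed_ids = set()           # anonymous IDs heard over Bluetooth

    def broadcast(self, interval: int) -> bytes:
        return derive_id(self.daily_keys[-1], interval)

    def hear(self, rolling_id: bytes):
        # Record a nearby phone's identifier; nothing is sent to any server.
        self.observed_ids.add(rolling_id)

    def check_exposure(self, published_keys, intervals=range(144)) -> bool:
        # Re-derive identifiers from the published diagnosis keys and
        # match them against what this phone heard, entirely on-device.
        return any(derive_id(k, i) in self.observed_ids
                   for k in published_keys for i in intervals)

# Two phones meet; one user later tests positive and consents to
# publishing their daily keys: the only data that ever leaves a device.
alice, bob = Phone(), Phone()
bob.hear(alice.broadcast(interval=7))
print(bob.check_exposure(alice.daily_keys))  # True: bob is notified on-device
```

The point of contention is visible in what the sketch omits: a server would only ever see the diagnosis keys of users who consent to upload them, never the social graph of who met whom, whereas a 'centralized' design would upload the observed contacts themselves for government analysis.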
This situation turns on its head the relationship between regulator and regulated. Rather than governments setting privacy rules which tech companies must follow, the tech companies are setting privacy limits and giving governments no choice but to accept.
The reason for this inversion may lie in part in the opacity that cloaks privacy online. Even governments are not well placed to understand fully how data is held and protected by online companies, and therefore cannot easily establish which rules are and are not needed to protect the right to privacy.
Collaboration should be deepened so that governments understand the workings of the companies and can develop fundamental privacy standards, by reference to human rights law, that will endure over time and command the respect of both companies and society. Such collaboration should be accompanied by far greater transparency for individuals and scrutiny possibilities for civil society. Europe's General Data Protection Regulation was an important step but is already insufficient.
A New Digital Deal

There are two emerging sets of governmental approaches to the role of tech companies in our society: largely Western ones, founded in human rights and democracy; and more authoritarian models, centred on government restrictions on speech. We must ensure that Western models are developed quickly enough to become the world standard and to lead the development of tech companies and their place in society.
Western models need proactively to build a skeleton framework of standards, not passively to allow the market to self-regulate. Western governments can no longer avoid crucial issues of expression and privacy by declining to regulate or ducking contemporary challenges.
From the COVID-19 crisis it is apparent that tech companies can play a key role in protecting public goods, that it is both legitimate and necessary to require them to do so, and that many companies would welcome a normative framework to guide their actions.
As the European Commission prepares the draft Digital Services Act and the British Government the draft Online Harms Bill, they must not shy away from constructing a skeleton framework of standards grounded in human rights and democracy, collaborating closely with tech companies and civil society to glean the best ways of doing so.
This is governments' responsibility as custodians of the public interest, not merely a response to corporate inertia. Most importantly, they should instil those standards before commercial or authoritarian state interests step in to fill the void.