2 July 2021
Ladies and Gentlemen
It is an honor to participate in my first interactive dialogue with the Human Rights Council.
Since my appointment last August, I have had the privilege of meeting representatives of many Member States, including four regional groups, as well as digital company officials, civil society leaders and rights holders. I would like to thank you all for your constructive engagement with my mandate.
I welcome this opportunity to discuss with you my first thematic report on the complex issue of disinformation and the challenges it poses to freedom of opinion and expression. I am very grateful for the contributions and consultations – even in the midst of the pandemic – with civil society, States, companies and international organizations. They have greatly enriched the report.
Although there is no universally accepted definition of disinformation, drawing broadly from international practice I have interpreted the concept to mean false or misleading information disseminated intentionally to cause serious social harm, and misinformation to mean the dissemination of false information unknowingly.
Disinformation is not a new phenomenon. What is new is that digital technology has enabled pathways for false or manipulated information to be created, disseminated and amplified at scale by various actors for political, ideological or commercial motives.
Let me highlight the four main findings of the report:
First, disinformation, interacting with political, social and economic grievances in the real world, is undermining freedom of expression and democratic institutions, polarizing political debates, fueling public distrust and endangering human rights, public health and sustainable development. It is being used to attack women, minorities, migrant and other marginalized communities, journalists, human rights defenders and political opponents.
The impact on individuals, communities and institutions is real and deeply disturbing. The imperative to address the problem is clear and urgent.
However – and this is my second finding – while disinformation is problematic, so too are the responses of States.
Some States have resorted to disproportionate measures such as Internet shutdowns or to broad, vaguely defined laws that criminalize, chill or censor online speech, or compel social media platforms to remove content without a judicial process. Some governments have used these laws against journalists, political opponents and human rights defenders.
Not only are such measures incompatible with international human rights law, they do little to combat disinformation. On the contrary, by discouraging the flow of diverse sources of information, they hamper fact-finding, feed rumors, foster fear and undermine trust in public institutions. By compelling social media platforms to police speech, they create a risk that companies will zealously over-remove material and undermine free speech.
Thirdly, company responses to disinformation have been reactive, inadequate and opaque. Algorithms, targeted advertising and data harvesting practices of the largest social media companies appear to be driving users towards “extremist content” and conspiracy theories in ways that feed and amplify disinformation, while disempowering individuals and robbing them of their autonomy to freely develop their own views. While the largest US-based companies have taken some positive action to ban or reduce the impact of what they consider to be false information or deceptive practices, it is simply not enough to make a meaningful difference.
I also have serious concerns about inconsistent content moderation, opaque policies and processes, and inadequate transparency and lack of redress mechanisms of digital platforms.
Furthermore, neither States nor companies have done enough to address online gender disinformation that targets women, particularly women journalists, politicians, human rights defenders and gender advocates in order to drive them out of public life. Women’s right to be free from violence and harassment must be ensured online as well as offline.
Fourthly, attempts to combat disinformation by undermining human rights are shortsighted and counter-productive.
Freedom of expression is not part of the problem, it is the primary means for fighting disinformation. When freedom of expression is protected, civil society, journalists, experts and policy makers are able to present alternative viewpoints and challenge falsehoods and conspiracy theories. To take an example, as we can see in the context of COVID-19, people’s trust in vaccines is not built by censorship but through the free flow of information and open debate.
That leads me to the main conclusion of my report.
Access to diverse and reliable information sources, free, independent and diverse media, digital literacy and smart regulation of social media are the obvious antidote to disinformation. Multi-faceted, multi-stakeholder responses, grounded in international human rights law, are the most effective way of building resilience against disinformation.
Based on that conclusion, the report makes a number of recommendations to States and digital companies.
First, as the primary duty bearers of human rights obligations, States must refrain from sponsoring or spreading disinformation. State-sponsored disinformation endangers human rights as well as people's trust in public information and State institutions.
Second, freedom of expression is not an absolute right, but in restricting it, States are obliged to scrupulously respect international human rights standards, including the three-part test of legality, legitimate objective, and necessity and proportionality of restrictions. It is important to note that international human rights law does not permit prohibition or restriction of information simply because it is false.
Criminal law should be used only in very exceptional and most egregious circumstances of incitement to violence, hatred or discrimination. Criminal libel is a relic of the colonial past and should be abolished.
Third, State regulation of social media should avoid content moderation and focus instead on enforcing transparency, due process rights of users and human rights due diligence by companies. That is what I call smart regulation.
Data protection is key to reorienting the digital economy away from disinformation. States can play their part by adopting and implementing strong data protection laws.
The report includes several recommendations for companies. In the interest of time, let me summarize them.
While companies must continue to improve their content moderation, they need to go beyond that and review their business models to ensure that their business operations, data collection and data processing practices are compliant with international human rights standards. In line with the UN Guiding Principles on Business and Human Rights, they should undertake human rights due diligence and impact assessments of their products and policies.
Companies must also enhance their transparency in a meaningful way and provide adequate remedies to users who are affected by company decisions.
They should introduce appropriate policies, remedies and mechanisms with a gender perspective to ensure that their platforms are safe for women and persons of non-normative gender.
Finally, let me underline that in my consultations, every stakeholder I spoke to – whether governmental, non-governmental, inter-governmental or corporate – recognized the vital need to restore trust in the integrity of information and institutions. That, in my view, requires States to enhance their own transparency and disclose public information, protect the safety of journalists, nurture independent, diverse and pluralistic media, and empower rights holders by investing in their digital inclusion and digital literacy.
I look forward to a frank and constructive dialogue with Member States and hope this report will contribute to the Council’s initiatives to uphold human rights in the digital age.