Speech at the University of Geneva by UN High Commissioner for Human Rights Michelle Bachelet
14 November 2018
I’m delighted to be part of this wonderful week of events, of seeking answers to some of the questions that will define our times.
The program of this Human Rights Week lays out a number of topics that the entire human rights community will need to grapple with, as a new digital landscape comes into sharper focus around us.
Can freedom of expression, information, thought and belief survive in an era when 360-degree surveillance by corporations and States is possible? Do we need new tools to ensure that machine-driven processes support human equality and dignity?
Does digital technology offer us fresh hope for the realization of human rights -- or is it Game Over?
I’m not going to pretend there are easy answers.
I’m sure everyone here is aware of the immense benefits that our digital era delivers in just about every area of life.
Human rights work is no exception. Social media and tools such as encrypted communications help to connect and grow movements of human rights defenders. Human rights officers can gather information from social media sources, and can enhance or supplement their investigations with satellite imagery and secure communications, enabling better monitoring, investigation and analysis.
A wide range of applications have been developed to assist investigators to verify that the information they gather is genuine and accurate. Many of these tools help to preserve, store and transmit key material to secured data stores, for possible use in investigating or prosecuting the perpetrators of human rights violations. Other digital tools help investigators identify patterns within their data that can be matched with other, open-source information sets.
This kind of work has helped to find out what happened to missing persons; has helped identify the victims in mass killings or in mass graves; and has provided information about patterns of torture or other violations in specific locations, attributable to specific military units or even to specific individuals.
Thus, for example, in Kosovo[1] and in Bosnia and Herzegovina, the International Commission on Missing Persons has stored forensic details from human remains and matched them to family members of missing persons to help identify the missing and reconstruct relevant crime scenes and time-lines.
Regarding Syria, several processes are currently underway to bridge the gap between human rights fact-finding and eventual criminal justice proceedings, by gathering and cross-checking data on disappearances and other violations.
The results will be valuable to families of the missing; in some cases to the victims themselves; and for the future work of prosecutors, truth commissions or other types of accountability mechanisms.
My Office works with groups like the Human Rights Investigation Lab at the University of California, Berkeley, which helps analyse vast amounts of open-source material on Myanmar and Syria. The Lab has also initiated a process to draw up guidelines for open-source investigations, with a view to strengthening the quality of evidence gathered online, so that it can be useful, not only for human rights investigations, but also for criminal prosecutions – including at the International Criminal Court.
Digital tools can also help us with early warning. Spikes of hate speech and other online indicators of rising tensions can constitute a significant alert to impending violence. By monitoring these phenomena, and acting quickly, we can hope to prevent violence.
And the contributions of digital systems to our work don’t stop there. New data streams have been used to track and interrupt human trafficking and exploitation, and have pointed to elements suggestive of modern slavery in business supply chains.
Again, human rights workers are not just using digital tools to detect violations: we’re also employing that knowledge to prevent further violations.
So to this extent – and in many more ways – digital tools are our friends and allies in upholding people’s rights.
But it is also more and more evident that there is a dark aspect to the digital landscape.
The Internet is increasingly a space of threat for human rights defenders. People are increasingly attacked or abused by private actors, purely because of their activities online in support of human rights. This is particularly the case for women, who suffer disproportionately from abusive trolling campaigns, which also expose them to physical attacks in the real world. In a study by the Inter-Parliamentary Union in 45 European countries, 47 percent of women Members of Parliament -- Members of Parliament! -- said they had been targeted on social media with threats of death, rape, or violence.
Governments in every region are also using digital surveillance tools to track down and target human rights defenders and people perceived as critics – including lawyers, journalists, activists on land rights or the environment, and people who support equality for members of the LGBTI community.
In many cases this use of digital technologies for massive and wide-ranging surveillance is of a scale and nature that clearly contravenes the right to privacy, as well as other rights. As the Special Rapporteur on Freedom of Expression has regularly reported, digital technologies, including malware and spyware, also “offer Governments unprecedented capacity to interfere with the rights of freedom of opinion and expression”.
A range of surveillance, online monitoring, and data collection measures -- such as browsing history; purchase history; search history; location data; financial data; health data, and so on -- feed into massive banks of data on every woman, man and child. And, I don't mean everyone perceived to be a critic or an activist, or even every Internet user, but quite simply: everyone.
These data banks may include detailed portraits of our opinions, the nature of our relationships, our social background, medical information, financial situation and so on. They can be sifted, processed and evaluated by digital processes for a whole range of reasons -- without accountability; without adequate supervision of the outcomes; even without us knowing that this is happening, or that the data banks exist.
Already today, from so-called “predictive” police work to criminal sentencing, medicine, finance and key aspects of social protection, machine-driven processes are making forecasts about people’s behaviour and producing decisions with enormous impact on their lives.
How safe are these outcomes? Not very. The systems are only as good as the data put in, and that data itself is often flawed. In many cases, artificial intelligence-based predictions appear arbitrary and unjust, in addition to exacerbating the systemic discrimination embedded in society. These outcomes are not inevitable, but they are already occurring – and their incidence and severity will grow, unless we act.
Is there proper oversight or adequate transparency regarding this use of Big Data? There is not.
The use of artificial intelligence and big data also raises new and essential questions about responsibility. Although the State is always the primary actor in upholding human rights, in this case private companies are responsible for the design and manufacture of tools which collect data and conduct surveillance – as well as the maintenance and ownership of the servers where this information is stored. I will return to this point about the key role of the private sector in a moment, but it is worth noting here.
Even more fundamentally, are the purposes of these systems benign?
In some countries, vast amounts of data are being collected through surveillance, and they are used to determine a personal score employed in granting or denying people’s access to opportunities and services. This could resemble the use of credit histories in other settings. But will it stop there? We've seen globally that data, once collected, has almost a life of its own -- it can be used for a multiplicity of purposes that go far beyond the original or stated purpose.
And this interplay between artificial intelligence and the buildup of data about our personalities and choices goes one step further when it is used, by private or public actors, to manipulate our thoughts and change our choices.
This is not science fiction. Whether in the United States Presidential election, the Brexit referendum in the United Kingdom, or the recent elections in Brazil and Kenya, where fake polls and other disinformation were widely shared, we are seeing increasing reports of the use of bots and disinformation campaigns on social media to influence the opinions and choices of individual voters.
Maybe you think this doesn’t apply to us: we are too clever to be affected by a bunch of bots. But I am not so sure. It appears the Internet is increasingly becoming an arena for sometimes very sophisticated propaganda – whether by movements of violent extremism, or by private actors or even State authorities pursuing political purposes.
In such a context, can there be any doubt that our freedom to think, to believe, to express ideas, to make our own choices and live as we wish, is under threat?
If our thoughts, our ideas and our relationships can be predicted by digital tools, and even altered by the action of digital programmes, then this poses some very challenging and fundamental questions about our future.
More accurately – these are questions about your future.
You, the largest generation of young people the world has ever seen, are coming of age at a crucial turning point for humanity.
Over the course of the next 12 years, the international community has the capacity to end extreme poverty and hunger, ensure much broader and more inclusive development and set the planet on a course of greater peace, greater justice, and far less harm.
But if we are to fulfil the detailed plan laid out by the 2030 Agenda for Sustainable Development – and create a more peaceful, sustainable, fair and prosperous world for all – we need to uphold human rights across the board, including in the digital landscape.
Is there a map that can guide us as we explore the frontiers of this new space, and evaluate the unforeseen consequences of digital systems in almost every other domain of human endeavour?
We have a compass through uncertainty. Deeply rooted principles can guide us as we evaluate and seek to mitigate and, perhaps, govern the impact of these unknowns.
In a global world, we need global solutions. Since the Universal Declaration of Human Rights was adopted 70 years ago, a solid framework of international human rights laws and institutions has been built up to secure the dignity and rights of all people.
International and regional institutions, backed by binding treaties, scrutinise the human rights practice of States and other key actors. Their approach is inherently cross-boundary, and they establish and operate widely accepted processes to guide policies and laws.
The human rights approach brings into focus aspects that may not otherwise be visible, including the disproportionate impact of policies on specific groups, for example in terms of deepening discrimination and inequalities, or in terms of the right to privacy or the right to freedom of expression.
Particularly relevant to today’s digital issues, human rights institutions have built up a tremendous body of experience to help Governments ensure that the private sector acts responsibly in upholding rights – and in providing redress for wrongs caused by their products or services.
The UN Guiding Principles on Business and Human Rights provide an authoritative global standard for addressing the human rights impact of business activity, and they should be applied with determination to the development, deployment and operation of digital systems. They make it clear that Governments must take appropriate steps to prevent, investigate, punish and redress human rights abuses caused by private actors.
Excellent examples of guidance have already been developed for specific sectors, such as the Global Network Initiative Principles and Guidelines, the Telecommunications Industry Dialogue, and the European Union’s Information and Communication Technologies Sector Guidance. These will need to be further refined, and new guidance tools will be needed for other fields – from the health sector to finance, manufacturers of robots, autonomous cars and other artificial intelligence sectors.
The Accountability and Remedy Project developed by our Office is a key tool that makes the Guiding Principles more actionable.
The Rabat Plan of Action – which considers the distinction between freedom of expression and incitement to hatred – also contains threshold tests and recommendations, which are extremely relevant to social media and many other aspects of the digital universe.
I believe this precision, expertise, trans-national scope and solid legal grounding are key elements we will need to count on as we walk more deeply into the digital landscape.
In other words, one of the great tasks for the human rights community in the next few years will be to ensure the continued application of human rights in the way in which States operate in the digital age, and in the way in which they regulate the activities of companies in the digital space.
The law is a clear and precise instrument. And when we face uncertainty and potential threats, clarity, based on universally accepted principles, is what is needed.
The framework of international human rights law and institutions defines duties and responsibilities – and can contribute strong, meaningful guidance to complement what may be subjective ethical considerations.
It also provides a well-developed architecture of processes for convening, deliberating, drawing up and even enforcing norms.
So we need to work together – human rights lawyers, computer scientists and engineers, representatives of businesses and governmental and inter-governmental bodies – to develop human rights impact assessment methodologies, and other systems for analysis and guidance, which can address the specific requirements of digital systems.
Today, there’s an enormous gap between these communities. We need to open up our communication and make it clear on all sides that human rights law is deeply relevant to the digital world – while the evolution of the digital world is profoundly important to human rights.
Above all, the duty to protect human rights needs to be an explicit priority for all stakeholders – States, developers, scientists, investors, business and civil society.
And it needs to be an explicit priority for you. We need to be able to count on the innovative and connective power of young people.
We can make this urgent and evident need for more principled norms in the digital landscape an opportunity to empower young people to contribute to hammering out real solutions.
All too often, young people are excluded from decision-making: they’re not even invited to the table. The topics that we’re discussing here clearly need your voices and your help.
So I urge you to stand up for human rights, and get involved in advocating responsible, human rights based solutions to the challenges of this new era.
And now I look forward to hearing your voices and your ideas.
[1] As per UN Security Council resolution 1244.