Report of the Special Rapporteur to the General Assembly on AI and its impact on freedom of opinion and expression

The Special Rapporteur’s 2018 report to the United Nations General Assembly is now available online.

Algorithms and Artificial Intelligence (AI) applications are now a critical part of the information environment – they are found in every corner of the internet, in digital devices and technical systems, search engines, social media platforms, messaging applications, and public information mechanisms.

In this report, the Special Rapporteur examines the impact of AI on the information environment and proposes a human rights framework for the design and use of AI technologies by States and private actors.

The report’s key recommendations include:

States:

  • When procuring or deploying artificial intelligence systems or applications, States should ensure that public sector bodies act consistently with human rights principles. This includes, inter alia, conducting public consultations and undertaking human rights impact assessments or public agency algorithmic impact assessments prior to the procurement or deployment of artificial intelligence systems. Particular attention should be given to the disparate impact of such technologies on racial and religious minorities, political opposition and activists. Government deployment of artificial intelligence systems should be subject to regular audits by external, independent experts.
  • States should ensure that human rights are central to private sector design, deployment and implementation of artificial intelligence systems. This includes updating and applying existing regulation, particularly data protection regulation, to the artificial intelligence domain, pursuing regulatory or co-regulatory schemes designed to require businesses to undertake impact assessments and audits of artificial intelligence technologies and ensuring effective external accountability mechanisms.
  • Where applicable, sectoral regulation of particular artificial intelligence applications may be necessary and effective for the protection of human rights. To the extent that such restrictions introduce or facilitate interferences with freedom of expression, States should ensure that they are necessary and proportionate to accomplish a legitimate objective in accordance with article 19 (3) of the Covenant. Artificial intelligence-related regulation should also be developed through extensive public consultation involving engagement with civil society, human rights groups and representatives of marginalized or underrepresented end users.
  • States should create a policy and legislative environment conducive to a diverse, pluralistic information environment. This includes taking measures to ensure a competitive field in the artificial intelligence domain. Such measures may include the regulation of technology monopolies to prevent the concentration of artificial intelligence expertise and power in the hands of a few dominant companies, regulation designed to increase interoperability of services and technologies, and the adoption of policies supporting network neutrality and device neutrality.

Companies:

  • All efforts to formulate guidelines or codes on the ethical implications of artificial intelligence technologies should be grounded in human rights principles. All private and public development and deployment of artificial intelligence should provide opportunities for civil society to comment. Companies should reiterate, in corporate policies and in technical guidance to engineers, developers, data technicians, data scrubbers, programmers and others involved in the artificial intelligence life cycle, that human rights responsibilities guide all of their business operations. Ethical principles can assist by facilitating the application of human rights principles to specific situations of artificial intelligence design, deployment and implementation. In particular, the terms of service of platforms should be based on universal human rights principles.
  • Companies should make explicit where and how artificial intelligence technologies and automated techniques are used on their platforms, services and applications. Innovative means of signalling to individuals when they are subject to an artificial intelligence-driven decision-making process, when artificial intelligence plays a role in displaying or moderating content, or when their personal data may be integrated into a dataset used to inform artificial intelligence systems are critical to giving users the notice necessary to understand and address the impact of such systems on their enjoyment of human rights. Companies should also publish data on content removals, including how often removals are contested and how often challenges to removals are upheld, as well as data on trends in content display, alongside case studies and education on commercial and political profiling.
  • Companies must prevent and account for discrimination at both the input and output levels of artificial intelligence systems. This involves ensuring that teams designing and deploying artificial intelligence systems reflect diverse and non-discriminatory attitudes and prioritizing the avoidance of bias and discrimination in the choice of datasets and design of the system, including by addressing sampling errors, scrubbing datasets to remove discriminatory data and putting in place measures to compensate for such data. Active monitoring of discriminatory outcomes of artificial intelligence systems is also essential.
  • Human rights impact assessments and public consultations should be carried out during the design and deployment of new artificial intelligence systems, including the deployment of existing systems in new global markets. Public consultations and engagement should occur prior to the finalization or roll-out of a product or service, in order to ensure that they are meaningful, and should encompass engagement with civil society, human rights defenders and representatives of marginalized or underrepresented end users. The results of human rights impact assessments and public consultations should themselves be made public.
  • Companies should make all artificial intelligence code fully auditable and should pursue innovative means for enabling external and independent auditing of artificial intelligence systems, separately from regulatory requirements. The results of artificial intelligence audits should themselves be made public.
  • Individual users must have access to remedies for the adverse human rights impacts of artificial intelligence systems. Companies should put in place systems of human review and remedy to respond to the complaints of all users and appeals levied at artificial intelligence-driven systems in a timely manner. Data on the frequency at which artificial intelligence systems are subject to complaints and requests for remedies, as well as the types and effectiveness of remedies available, should be published regularly.