

Google's new principles on AI need to be better at protecting human rights

There are growing concerns about the potential risks of AI – and about the power of technology giants. In the wake of what has been called a backlash against the technology industry, states and businesses are waking up to the fact that the design and development of AI have to be ethical, benefit society and protect human rights.

In the last few months, Google has faced protests from its own staff against the company's AI work with the US military. The US Department of Defense contracted Google to develop AI for analysing drone footage in what is known as "Project Maven".

A Google spokesperson was reported to have said that "it is incumbent on us to show leadership" and referred to "plans to unveil new ethical principles". These have now been released.

Google's chief executive, Sundar Pichai, acknowledged that "this area is dynamic and evolving" and said that Google would be willing "to adapt our approach as we learn over time." This is important because, while the principles are a start, greater detail and concrete safeguards are needed if Google is going to become effective in protecting human rights.

Google's principles on AI

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Google also commits to not pursuing:

  1. Technologies that cause or are likely to cause overall harm.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance, violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

But the principles offer few specifics on how Google will actually uphold these commitments.

AI applications can cause a wide range of harms

Google's principles recognise AI's risk of bias and its threat to privacy. This is important in light of findings that Google search algorithms can reproduce and reinforce harmful stereotypes. But the principles fail to acknowledge the wider risks to all human rights and the need for them to be protected. For example, biased algorithms not only result in discrimination but can also affect people's access to opportunities and services.

Aside from the search engine, Google's other businesses could also raise human rights issues. Google created the company Jigsaw, which uses AI to curate online content in an attempt to address abusive content online. But content moderation can also pose threats to the right to freedom of expression.

Google Brain is using machine learning to predict patients' health outcomes from their medical records, and Google Cloud offers healthcare organisations tools to store and analyse patient data. Both of these examples raise privacy and data protection concerns. Our colleagues have also questioned whether partnerships such as that between Google DeepMind and the NHS benefit or undermine states' obligation to put in place a healthcare system that "provides equality of opportunity for people to enjoy the highest attainable level of health."

What should Google do?

Google's overall approach should be based on finding ways for AI to be beneficial to society without violating human rights. Explaining its first principle, to be "socially beneficial", Google says it will only "proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides." But an approach that balances risks against benefits is not compatible with human rights. A state or business such as Google cannot develop an AI that promises to benefit some people at the expense of the human rights of a few or of a particular community. Rather, it has to find a way to ensure that AI does not harm human rights.

So, Google needs to fully consider the effects of AI on human rights throughout its development and deployment. This is especially important because risks can arise even if the technology is not designed for harmful purposes. International human rights standards and norms – including the UN Guiding Principles on Business and Human Rights – cover both the purpose and the effect of actions by businesses, including Google, on human rights. These existing responsibilities need to be much more clearly reflected in Google's principles, particularly regarding the positive action that Google will take to protect against harm to human rights, even if unintentional.

To take responsibility for how it develops and deploys AI, Google needs to move beyond its current tentative language about encouraging architectures of privacy and ensuring "appropriate human direction and control" without explaining who decides what is appropriate and on what basis. It needs to embed human rights considerations into the design of AI and incorporate safeguards such as human rights impact assessments and independent oversight and review processes into the principles.

The principles are also silent on how harms to human rights will be remedied and how affected individuals and groups can bring a claim.

The way forward?

Launching the principles, Google's chief executive Sundar Pichai said that the way in which AI is developed and used will have "a significant impact on society for many years to come." Google's pioneering role in AI means that the company, according to Pichai, "feel[s] a deep responsibility to get this right."

Although the principles are an important start, they need much more development if we are to be assured that our human rights will be protected. The next step is for Google to embed human rights considerations, safeguards and accountability processes throughout its AI development. That is what is needed to "get this right."

Provided by The Conversation

This article was originally published on The Conversation. Read the original article.

Citation: Google's new principles on AI need to be better at protecting human rights (2018, June 18) retrieved 9 June 2025 from /news/2018-06-google-principles-ai-human-rights.html
