Google will never use AI for weapon making, surveillance, says CEO Sundar Pichai
08 Jun 2018
Days after internet giant Google announced plans to pull back from the US defence department’s artificial intelligence-related project to track drones, Google CEO Sundar Pichai put out a detailed blog post explaining the company’s principles around artificial intelligence.
Laying out Google’s principles on the use of artificial intelligence, Pichai said Google’s AI technology will never be used for building weapons or mass surveillance.
Pichai’s blog post comes even as Google faces criticism from within its own ranks over its participation in the controversial Project Maven with the US Defence Department. Some Google employees have resigned in protest over the company’s involvement in the project.
Pichai described AI as “computer programming that learns and adapts,” saying it has profound potential to improve people’s lives. But he also acknowledged that AI cannot solve all problems, and that such a powerful technology raises “equally powerful questions about its use.”
As a leader in the field of AI, he said, Google has a “deep responsibility to get this right.” The company has laid out seven principles to guide its AI work and research. These principles are not just concepts but, according to Pichai, “are concrete standards, which will govern our research and product development and will impact our business decisions.”
The company’s AI principles state that its AI applications will be socially beneficial, will avoid creating or reinforcing unfair bias, will be built and tested for safety, and will be accountable to people. Google’s AI will also incorporate privacy design principles and uphold “high standards of scientific excellence.”
Finally, Google’s AI will be made available only for uses that accord with the company’s other six principles. Google will also evaluate when to make these new technologies available on a non-commercial basis.