Published: Sun, June 10, 2018
Finance | By Loren Pratt

The 4 types of AI Google says it won't develop

Google CEO Sundar Pichai also announced seven principles to guide the company's AI work going forward.

After pressure from its employees, Google officially announced its AI technology will not be used in weapons.

One banned category covers technologies whose purpose contravenes widely accepted principles of international law and human rights. The employee petition had demanded: "Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology".

"How AI is developed and used will have a significant impact on society for many years to come", the Google boss writes. As leaders in this field, we feel deeply responsible for doing it right, "Pinchai said, according to the French Agency and Reuters".

The Electronic Frontier Foundation, which had led opposition to Google's Project Maven contract with the Pentagon, called the news "a big win for ethical AI principles".

In addition to announcing these principles, the company named four applications it will not pursue: those that cause or are likely to cause harm; weapons; technologies that gather or use information for surveillance in violation of internationally accepted norms; and technologies whose purpose contravenes widely accepted principles of international law and human rights.

More than 4,000 Google employees signed a petition protesting Google's contract, and some staffers resigned over it.

Google will continue some work for the military, Pichai said. Peter Highnam, deputy director of the Defense Advanced Research Projects Agency (DARPA), the Pentagon agency that was not involved in Project Maven but is credited with helping invent the Internet, said there are "hundreds if not thousands of schools and companies that bid aggressively" on DARPA's research programs in technologies such as AI.

For instance, the company's artificial-intelligence systems will be built and tested for safety and designed with privacy in mind, an apparent nod to the controversy surrounding Google Duplex, an upcoming feature of the company's voice assistant that can potentially trick people into thinking it is human.

While he said Google's AI will not be used in weapons or surveillance, Pichai clarified that the company will continue to work with governments and the military in other critical areas, including training, cybersecurity, and military recruitment, among other things.

But the potential of AI systems to pinpoint drone strikes better than military specialists or identify dissidents from mass collection of online communications has sparked concerns among academic ethicists and Google employees.

He said Google would strive to make high-quality and accurate information readily available using AI, while "continuing to respect cultural, social, and legal norms in the countries where it operates".
