In April 2017, it was reported that Google was partnering with the Pentagon to help analyze video data collected by drones during surveillance. Drone footage is voluminous, and detailed manual analysis of it is mind-numbingly tedious. The Pentagon's Project Maven therefore recruited several tech firms to supply image recognition technology that could identify objects of interest, and Google, as one of the leading AI firms, signed on. By coupling its AI to drone footage, Google could identify as many as 38 classes of objects of military interest.

Maven was Google’s first military contract and was seen by executives as a golden opportunity to win more lucrative military and intelligence projects. Internal emails reviewed by Gizmodo showed that the initial contract was worth at least $15 million and that the budget for the project was expected to grow as high as $250 million.

However, the three-year contract did not sit well with many of Google’s employees. They argued that machine learning designed for surveillance purposes might, in due time, be applied to lethal or other military tasks. Other employees raised broader objections about the ethics of AI technology. Even though Google’s leadership assured employees that the technology would be used for non-offensive purposes only, this did not quell the conflict inside the company. Almost 4,000 employees signed an internal petition in protest, and some threatened to quit their jobs.

After all this internal conflict, Project Maven’s rebels apparently won out in early June 2018. Google Cloud CEO Diane Greene told employees during a weekly Google Cloud meeting that the Maven contract would not be renewed after it expired in 2019. According to Gizmodo: “Google would not choose to pursue Maven today because the backlash has been terrible for the company, Greene said, adding that the decision was made at a time when Google was more aggressively pursuing military work. The company plans to unveil new ethical principles about its use of AI next week.”

Days later, CEO Sundar Pichai published a list of ethical principles laying out objectives for AI applications to guide Google’s work going forward. Google’s decision to outline its ethical stance on AI development came after years of worry over the risks posed by automated systems and the development of artificial general intelligence, that is, AI with human-level intelligence. According to Pichai, Google will not pursue AI applications in the following areas:

  1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Google made it clear that while it is not developing AI for use in weapons, it will continue working with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important to the company, and Google says it will actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

The company says that its main focus for AI research is for it to be “socially beneficial.” This means avoiding unfair bias, remaining accountable to humans, being built and tested for safety, upholding “high standards of scientific excellence,” and incorporating privacy safeguards.
