Don't Be Evil: DeepMind is a British-American research lab focused on artificial intelligence and machine learning. The company eventually became part of Google with a promise that its AI research would not be used for military purposes, but it now seems that Mountain View may be going against that pledge.

Around 200 employees of DeepMind signed a letter urging the company to terminate its contracts with military customers. According to a report by Time, the letter was sent to Google's higher-ups earlier this year. However, executives have not addressed the employees' concerns about the potential misuse of AI for harmful or warfare-related applications.

The letter was signed by about five percent of DeepMind's workforce, Time notes, a small fraction of the team working on Google's most advanced AI projects within the research lab. Though a minority, these employees reflect a growing concern among Google workers about the military use of intelligent algorithms, an area that should be off-limits according to Google's own AI development principles.

Google's AI principles emphasize optimism about the technology's potential while explicitly stating that the company will not pursue applications that cause, or are likely to cause, harm. The principles also forbid the development of weapons or other technologies designed to directly or indirectly injure people, as well as AI for surveillance that violates internationally accepted norms.

The letter from DeepMind employees, dated May 16, voices concerns about Google's recent involvement with military organizations. While it does not directly address current geopolitics or any particular conflict, it highlights reports of Google's business dealings with the Israeli military through a contract known as Project Nimbus.

Under Project Nimbus, Google is reportedly providing cloud computing capabilities and AI services to Israel's military. The letter also references allegations that AI is being used for mass surveillance and to select targets in the ongoing bombing campaign in Gaza. The letter states, "Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles."

In recent years, Demis Hassabis and other technology figures, including Stephen Hawking and Steve Wozniak, have expressed concerns about the weaponization of AI and advocated for an outright ban on AI weapons. The latest letter from DeepMind employees calls for a thorough internal investigation into the use of Google services by militaries and weapons manufacturers, as well as a halt to the use of DeepMind-developed technology for military applications.

Three months after the letter was sent, sources familiar with the matter told Time that Google has done nothing in response. Officially, the company has stated that Project Nimbus has no connection to military workloads, weapons, or intelligence services. However, a source told Time that Google's statement is "so specifically unspecific that we are all none the wiser on what it actually means."