Google has made a dangerous U-turn on military AI
Google’s “don’t be evil” era is well and truly dead.

Having replaced that motto with “do the right thing” in 2018, leadership at parent company Alphabet Inc. has now rolled back one of the firm’s most important ethical stances: its position on the use of its artificial intelligence by the military.

This week, the company removed its pledge not to use AI for weapons or surveillance, a promise that had stood since 2018. Its “Responsible AI” principles no longer include the commitment, and the company’s AI chief, Demis Hassabis, published a blog post explaining the change, framing it as inevitable progress rather than any kind of compromise.

“[AI] is becoming as pervasive as mobile phones,” Hassabis wrote. It has “evolved rapidly.”

Yet the notion that ethical principles must also “evolve” with the market is mistaken. Yes, we are living in an increasingly complex geopolitical landscape, as Hassabis describes it, but abandoning a code of ethics for war could yield consequences that spin out of control.

Bring AI to the battlefield and you could get automated systems responding to one another at machine speed, with no time for diplomacy. Warfare could become more lethal, as conflicts escalate before humans have time to intervene. And the idea of “clean” automated combat could push more military leaders toward action, even though AI systems make plenty of mistakes and could cause civilian casualties too.

Automated decision-making is the real problem here. Unlike previous technology that made militaries more efficient or powerful, AI systems can fundamentally change who (or what) makes the decision to take human life.

It is also troubling that Hassabis, of all people, has put his name to Google’s carefully worded justification. He sang a very different tune back in 2018, when the company established its AI principles and he joined more than 2,400 people in AI in signing a pledge not to work on autonomous weapons.

Less than a decade later, that promise has not counted for much. William Fitzgerald, a former member of Google’s policy team and co-founder of a policy and communications firm, says Google had been under intense pressure for years to take on military contracts.

He recalled former US Deputy Defense Secretary Patrick Shanahan visiting the Sunnyvale, California, headquarters of Google’s cloud business in 2017, while staff at the unit were building the infrastructure needed to work on top-secret military projects with the Pentagon. Hopes for contracts were running high.

Fitzgerald helped stop that. He co-organized company protests against Project Maven, a deal with the Defense Department to develop AI for analyzing drone footage, which Googlers feared could lead to automated targeting. Some 4,000 employees signed a petition stating that “Google should not be in the business of war,” and about a dozen resigned in protest. Google eventually relented and did not renew the contract.

Looking back, Fitzgerald sees that episode as a blip. “It was an anomaly in Silicon Valley’s trajectory,” he said.

Since then, for instance, OpenAI has partnered with defense contractor Anduril Industries Inc. and has been pitching its products to the US military. (Just last year, OpenAI had banned anyone from using its models for “weapons development.”) Anthropic, which bills itself as a safety-first AI lab, partnered with Palantir Technologies Inc. in November 2024 to sell its AI service Claude to defense contractors.

Google has spent years struggling to create proper oversight for its work. It dissolved a controversial ethics board in 2019, then fired two of its most prominent AI ethics directors a year later. The company has strayed so far from its original objectives that it can no longer see them. So too have its Silicon Valley peers, who should never have been left to regulate themselves.

Still, with any luck, Google’s U-turn will put greater pressure on government leaders meeting in Paris next week to establish legally binding rules for military AI development, before race dynamics and political pressure make them harder to set up.

The rules can be simple. Mandate a human in the loop to oversee all military AI systems. Ban fully autonomous weapons that can select targets without human approval. And ensure that such AI systems can be audited.

One decent policy proposal comes from the Future of Life Institute, a think tank once funded by Elon Musk and currently led by Massachusetts Institute of Technology physicist Max Tegmark. It calls for a tiered system under which military AI systems are treated like nuclear facilities, with unambiguous evidence required of their safety margins.

The governments convening in Paris should also consider establishing an international body to enforce those safety standards, similar to the International Atomic Energy Agency’s oversight of nuclear technology. They should be able to sanction companies (and countries) that violate those standards.

Google’s reversal is a warning. Even the strongest corporate values can buckle under the pressure of an ultra-hot market and an administration that you simply do not say “no” to. The don’t-be-evil era of self-regulation is over, but there is still a chance to put binding rules in place to stave off AI’s darkest risks. And automated warfare is surely one of them.