New Delhi:
In a significant departure from its earlier commitment not to use artificial intelligence for weapons or surveillance, Google has updated its ethical guidelines on the technology.
The company's original 2018 AI principles explicitly prohibited AI applications in four areas: weapons, surveillance, technologies that could cause overall harm, and uses that violate international law and human rights.
Now, in a blog post, Demis Hassabis, head of AI at Google, and James Manyika, Senior Vice President for Technology and Society, explained the change. They pointed to the growing presence of AI and the need for companies to work with the governments of democratic countries on national security.
Mr. Hassabis and Mr. Manyika wrote, "There is a global competition for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values such as freedom, equality and respect for human rights."
The updated principles focus on human oversight and feedback to ensure that AI complies with international law and human rights standards. Google also promises to test its AI systems to mitigate any unintended harmful effects.
This marks a major shift from Google's earlier stance, which drew attention in 2018 when the company faced internal opposition to its Pentagon contract. Known as Project Maven, the contract used Google's AI to analyze drone footage. Thousands of employees signed an open letter urging Google not to become involved in military projects, saying, "We believe that Google should not be in the business of war." As a result, Google chose not to renew the contract.
Since OpenAI launched ChatGPT in 2022, AI has advanced rapidly, but regulation has struggled to keep pace. This shift has prompted Google to loosen its self-imposed restrictions. James Manyika and Demis Hassabis said that AI frameworks developed by democratic nations have helped shape Google's understanding of AI's risks and capabilities.