Google updated its Artificial Intelligence (AI) principles, a document outlining the company's vision for the technology, on Tuesday. The Mountain View-based tech veteran had previously listed four application areas in which it would not design or deploy AI. These included weapons and surveillance, as well as technologies likely to cause overall harm or violate international law and human rights. However, the new version of the AI principles removes this entire section, indicating that the tech giant could enter these previously prohibited areas in the future.
Google updates its AI principles
The company first published its AI principles in 2018, at a time when the technology was not yet a mainstream phenomenon. Since then, the company has regularly updated the document, but over the years it had not changed the section listing the areas it considered too harmful to build AI-powered technologies for. On Tuesday, however, that section was removed from the page entirely.
An archived copy of the web page from last week, still viewable on the Wayback Machine, shows the section under the title "AI applications we will not pursue". Under it, Google listed four items. The first was technologies that "cause or are likely to cause overall harm", and the second was weapons or similar technologies designed to directly facilitate injury to people.
Additionally, the tech giant had committed not to use AI for surveillance technologies that violate internationally accepted norms, or for technologies that contravene international law and human rights. The removal of these restrictions has raised concern that Google may consider entering these areas.
In a separate blog post, Google DeepMind co-founder and CEO Demis Hassabis and the company's senior vice-president of technology and society, James Manyika, explained the reasoning behind the change.
The executives cited the rapid growth of the AI sector, increasing competition, and an increasingly "complex geopolitical landscape" as some of the reasons behind updating the AI principles.
"We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," the post said.