
Pentagon decides how to use artificial intelligence in war

UNITED STATES (OBSERVATORY NEWS) — The U.S. Department of Defense has adopted ethical principles for the use of artificial intelligence (AI) in military systems. According to the Pentagon, the military was guided by the U.S. Constitution, Title 10 of the U.S. Code, the laws and customs of war, and international treaties, as well as “established norms and values.”

The new principles urge personnel to “exercise appropriate levels of judgment and care” when deploying and using AI systems, and they require that decisions made by automated systems be “traceable” and “governable,” meaning that there must be a way to disengage or deactivate such systems if they demonstrate unintended behavior, said Air Force Lt. Gen. Jack Shanahan, director of the Pentagon’s Joint Artificial Intelligence Center.

The key ethical standards also include responsibility (sound judgment and accountability in the development and use of AI), equitability (deliberate steps to minimize unintended bias in AI capabilities), and traceability (transparent, auditable methodologies and documentation).

The Pentagon also made the Joint Artificial Intelligence Center responsible for implementing the principles. In total, the US Department of Defense spent 15 months developing them. Some experts noted the vagueness of the adopted wording and the absence of a clear set of rules about what the military should and should not do when developing and using AI systems.

The Pentagon’s push to accelerate its adoption of artificial intelligence has also set off a fight among tech companies over a $10 billion cloud computing contract known as JEDI (Joint Enterprise Defense Infrastructure). Microsoft won the contract in October, but Amazon sued the Pentagon, claiming that President Donald Trump’s antipathy toward Amazon and its CEO, Jeff Bezos, hurt the company’s chances of winning the tender.

An existing 2012 military directive requires that humans remain in control of automated weapons, but it does not cover broader uses of AI. The new US principles are intended to guide both combat and non-combat applications, from intelligence gathering and surveillance to predicting maintenance problems in aircraft and ships.

The principles draw on recommendations made last year by the Defense Innovation Board, a group led by former Google CEO Eric Schmidt. While the Pentagon acknowledged that AI “raises new ethical ambiguities and risks,” the new principles fall short of the stronger restrictions favored by arms control advocates.

In 2018, the Pentagon met resistance from Google employees, who pressured the company to abandon its participation in Project Maven, a military program that uses algorithms to interpret aerial imagery from conflict zones. Other companies have since stepped in to fill the gap.

Most of the Pentagon’s AI projects are developed by the Defense Advanced Research Projects Agency (DARPA), which operates under the Department of Defense. Its staff work on improving artificial intelligence systems and integrating them into various weapons, and DARPA’s research seeks to give machines the ability to communicate and reason much as people do.

In addition, US authorities are supporting the development of AI systems that allow machines to quickly identify tracked and potentially dangerous content in online videos, images and audio recordings.

It is highly likely that the military will want to enlist high-tech companies to develop suitable AI systems more quickly and efficiently. The tech giants themselves, however, are not happy with this prospect. In July 2018, about 160 companies, along with more than 2,400 scientists and entrepreneurs, signed a pledge at the International Joint Conference on Artificial Intelligence (IJCAI) in Stockholm renouncing the development of weapons based on artificial intelligence.

The signatories include DeepMind co-founder Demis Hassabis, entrepreneur Elon Musk, and Skype co-founder Jaan Tallinn. The pledge’s initiators urge governments to agree on rules, laws and regulations that would permanently prohibit the development of killer robots based on artificial intelligence. Until such measures are taken, the signatories agree to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”

The document argues that a weapon capable of independently selecting a target and engaging it threatens humanity on both moral and pragmatic grounds: morally, the decision to take a human life should never be delegated to a machine; pragmatically, lethal autonomous weapons, “selecting and engaging targets without human intervention, would be dangerously destabilizing for every country and individual.” The entrepreneurs and scientists also called on the United Nations (UN) to enact laws restricting the development of combat robots.

Max Tegmark, a professor of physics at the Massachusetts Institute of Technology who also signed the document, said the agreement marks a shift by leaders in artificial intelligence “from words to deeds.” In his view, weapons driven by algorithms are as repugnant as biological weapons and should be just as strictly regulated.

Nonetheless, US Department of Defense funding underpins much of the modern high-tech economy, and the technologies involved are dual-use. Companies such as Amazon, Apple, Facebook, Google, Microsoft and PayPal are indirectly, and sometimes directly, tied to the US military-intelligence complex.

For example, according to a report by Open the Government, much of the US government is so tightly tied to Amazon infrastructure that the tech giant has opened a branch near Washington. Amazon’s government services include cloud contracts, machine learning, and biometric data systems.

Amazon’s services to military and security agencies also include facial recognition and face-tracking software for police and the FBI, as well as racial and gender identification tools for the Department of Homeland Security. Amazon Web Services and its subsidiaries derive about 10% of their revenue from government clients, including the State Department, NASA, the Food and Drug Administration, and the Centers for Disease Control and Prevention.

But Amazon is just the tip of the iceberg. According to Dr. T.J. Coles, director of the Plymouth Institute for Peace Research (PIPR), in the mid-1990s the future Google founders Larry Page and Sergey Brin used grants from Pentagon-linked funds and other government sources to develop web crawlers and page-ranking applications. Around the same time, the CIA and the National Security Agency funded the Massive Digital Data Systems (MDDS) program.

Brin received funding from the MDDS program. According to Professor Bhavani Thuraisingham, who worked on the project, “the intelligence community … essentially provided Brin with seed funding that was supplemented by many other sources, including the private sector.” Part of Google’s patented PageRank system was developed under the MDDS program. Two entrepreneurs, Andreas Bechtolsheim (a co-founder of Sun Microsystems) and David Cheriton, both of whom had previously received Pentagon money, were early investors in Google.
