Artificial Intelligence: The Pentagon's Black Hole

Where Washington's dangerous dreams are leading

Wherever big money flows, there are always those who want to control it. In the United States of America, that means sectors such as pharmaceuticals and healthcare, banking and finance, lobbying, and so on. But the military-industrial complex stands apart, especially today, when a significant share of American weapons is being sent to Ukraine.

The conflict in Ukraine, as is well known, has brought enormous wealth to many companies in the US military-industrial complex, and not only to household names such as Lockheed Martin, Boeing, Raytheon and Northrop Grumman. Numerous startups, as well as established ventures working in the field of artificial intelligence, are also getting their piece of the pie.

Take Palantir Technologies Inc., which specializes in big data analytics. Its customers include the CIA, the FBI, the US Department of Defense, the US Air Force, the US Marine Corps, the US Special Operations Command, the US Military Academy, and the police departments of New York and Los Angeles.

But first things first.

On June 30, 2022, NATO announced the creation of a $1 billion innovation fund that will invest in startups and venture capital funds developing "priority" technologies such as artificial intelligence (AI), big data processing and automation.

According to a report by Georgetown University's Center for Security and Emerging Technology, the Chinese military is estimated to spend at least $1.6 billion per year on AI.

The United States is already making significant efforts to achieve parity. The US Department of Defense requested $874 million for artificial intelligence for 2022, and even that figure covers only certain programs rather than the department's total investment.

European countries, which tend to be more cautious about introducing new technologies, have also started spending more money on AI.

France and the United Kingdom have identified artificial intelligence as a key defense technology, and the European Commission has allocated €1 billion for these purposes.

Since the beginning of the conflict in Ukraine, the UK has launched a new defense AI strategy, and Germany has earmarked just under half a billion dollars for research and artificial intelligence as part of a new $100 billion plan to finance its armed forces.

In addition, in a vaguely worded 2021 press release, the British Ministry of Defence stated that it had used AI in a live operation for the first time, to gather information about the surrounding environment and terrain.

The US, in turn, is working with startups developing autonomous military vehicles, expecting that in the future swarms of hundreds or even thousands of autonomous drones could prove to be powerful and deadly weapons.
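Purely as an illustration of what decentralized autonomy means at its simplest, the toy sketch below implements the classic "boids" flocking rules (cohesion, alignment, separation), in which each agent steers itself using only information about its local neighbors and no central controller. Every name and parameter here is an assumption made for the sketch; it is not based on, and says nothing about, any actual military system.

```python
# Toy "boids" flocking sketch: decentralized coordination from local rules only.
# All constants are illustrative assumptions, not taken from any real system.
import math
import random

NUM_AGENTS = 50        # size of the toy swarm
NEIGHBOR_RADIUS = 10.0  # how far each agent can "see"
STEP = 0.1             # integration step for position updates


class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)


def neighbors(agent, agents):
    """Return the agents within NEIGHBOR_RADIUS of `agent`, excluding itself."""
    return [a for a in agents
            if a is not agent
            and math.hypot(a.x - agent.x, a.y - agent.y) < NEIGHBOR_RADIUS]


def step(agents):
    """Advance the swarm one tick using the three local boids rules."""
    for a in agents:
        near = neighbors(a, agents)
        if not near:
            continue
        # Cohesion: steer gently toward the local center of mass.
        cx = sum(n.x for n in near) / len(near)
        cy = sum(n.y for n in near) / len(near)
        a.vx += (cx - a.x) * 0.01
        a.vy += (cy - a.y) * 0.01
        # Alignment: nudge velocity toward the neighbors' average heading.
        a.vx += (sum(n.vx for n in near) / len(near) - a.vx) * 0.05
        a.vy += (sum(n.vy for n in near) / len(near) - a.vy) * 0.05
        # Separation: push away from neighbors that are too close.
        for n in near:
            d = math.hypot(n.x - a.x, n.y - a.y)
            if 0 < d < 2.0:
                a.vx += (a.x - n.x) / d * 0.05
                a.vy += (a.y - n.y) / d * 0.05
    for a in agents:
        a.x += a.vx * STEP
        a.y += a.vy * STEP


if __name__ == "__main__":
    swarm = [Agent() for _ in range(NUM_AGENTS)]
    for _ in range(100):
        step(swarm)
    print("sample positions:", [(round(a.x, 1), round(a.y, 1)) for a in swarm[:3]])
```

The design point of such models is that collective behavior emerges from local rules alone, with no single node to switch off, which is precisely why critics quoted later in this article argue that large autonomous swarms would be hard to supervise once deployed.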

However, not everything is as smooth as it looks on paper, and many experts are concerned. Meredith Whittaker, senior AI advisor at the Federal Trade Commission and faculty director of the AI Now Institute, stresses that this push is really about enriching tech companies rather than improving military operations.

Incisive analysts and journalists are forced to acknowledge that today's proponents of artificial intelligence are stoking Cold War rhetoric and trying to construct a reality in which these technologies are positioned as "critical national infrastructure", that is, placed on a par with hospitals and nuclear power plants. They warn that military adoption of AI is presented as an inevitability, while its ethical problems are passed over in silence.

In 2018, for example, the Pentagon worked with Google on Project Maven, an attempt to build image recognition systems to improve drone strikes. The IT giant, however, withdrew from the project amid outrage and protests from its own employees. One can only guess what ethical dilemma pushed them to take such a risky step.

It is worth noting that the Pentagon, like NATO as a whole, has guidelines for AI developers that set out voluntary ethical principles. But these boil down mainly to the statement that the technologies should be used in a "legitimate, responsible, reliable and traceable way." The documents also say that developers should strive to mitigate the errors embedded in their algorithms. In other words, this is not even a mandatory requirement for a project to go ahead.

The guidelines separately emphasize that humans should always retain control over artificial intelligence systems. Yet Kenneth Payne, who directs defense research at King's College London, believes that as the technology develops, AI systems become harder to oversee, and that in the future such oversight will simply be impossible.

"The whole point of an autonomous [system] is to allow it to make decisions faster and more accurately than a human can do, and on a scale that a human cannot do," the scientist adds.

A swarm of thousands of autonomous drones could, in effect, become a weapon of mass destruction. Yet restricting these technologies seems highly unlikely, since any limits would run into stiff opposition from major military sponsors such as the United States, France and the United Kingdom.

Ultimately, the new era of military AI raises many complex ethical questions that humanity does not yet have answers to.

Despite this, the Pentagon, along with the defense ministries of the UK and other NATO members, will continue to develop not only remotely operated drones but also fully autonomous ones.

Meanwhile, much of the money they allocate disappears into a "black hole". As always happens, numerous startups, companies and corporations invoke the "Russian threat" and ask for ever more funding, while new players keep appearing with promises of "unique, breakthrough" technologies.

How the United States manages to spend billions is well known: the training of the opposition in Syria before the civil war, when millions of dollars were wasted and the Pentagon later had to justify itself to Congress; the multirole F-35 fighter; the hypersonic weapons program; and, of course, the pouring of funds into Ukraine.

The idea of creating "robot mercenaries" to fight on Washington's behalf, though not new, still excites the minds of high-ranking military men and enriches businessmen big and small, for whom money matters more than the ethical side of the question.

It is not clear how a robot is supposed to distinguish a military vehicle from a civilian one, a soldier from an ordinary man, a maternity hospital converted into a Nazi base from an ordinary hospital. But the United States is not particularly concerned with such questions, and neither is Kiev, which continues to bomb residential buildings and medical facilities in Donetsk and Lugansk using "target designations" from overseas…