Tech News

Eric Schmidt believes countries can mutually assure AI malfunction (MAIM)

Former Google CEO Eric Schmidt and Scale AI founder Alexandr Wang are co-authors of a new paper called “Superintelligence Strategy,” which warns the U.S. government against launching a Manhattan Project-style push for so-called Artificial General Intelligence (AGI) because it could quickly spiral out of control around the world. The gist of the argument is that if one state races to field the most powerful AI capabilities on the battlefield, its rivals will not sit idle; they will respond with retaliation or sabotage. Instead, the United States should focus on developing methods, such as cyberattacks, that could quietly disable threatening AI projects.

Schmidt and Wang are prominent boosters of AI’s potential to advance society through applications such as drug development and workplace efficiency. Governments, meanwhile, see it as the next frontier of defense, and the two industry leaders are essentially worried that countries will end up locked in an arms race to create weapons with dangerous potential. Much as international agreements reined in the development of nuclear weapons, Schmidt and Wang believe nation-states should slow down AI development rather than race one another to build AI-powered killing machines.

At the same time, however, Schmidt and Wang are building AI products for the defense sector. The former’s White Stork is developing autonomous drone technology, while Wang’s Scale AI this week signed a contract with the Department of Defense to create AI “agents” that can assist with military planning and operations. After years of shying away from selling technology that would be used in war, Silicon Valley is now lining up patriotically to collect lucrative defense contracts.

All military defense contractors have a conflict of interest: it pays to promote kinetic warfare, even when it is not morally justified. The thinking goes that other countries have their own military-industrial complexes, so the United States needs to maintain one too. But in the end, innocent people suffer and die while the powerful play chess.

Palmer Luckey, founder of defense tech darling Anduril, has argued that AI-powered targeted drone strikes are safer than launching nuclear weapons, which have a far larger impact zone, or planting land mines that cannot distinguish their targets. And if other countries are going to keep building AI weapons, the argument goes, we should have the same capabilities as a deterrent. Anduril has been supplying Ukraine with drones that can target and attack Russian military equipment.

Anduril recently launched an advertising campaign displaying the plain text “Work at Anduril.com” with the word “Don’t” spray-painted over it in huge graffiti-style letters, the implication being that working for the military-industrial complex is now the counterculture.

https://www.youtube.com/watch?v=gxqrci3wff8

Schmidt and Wang believe that humans should always remain in the loop for any AI-assisted decision-making. But, as recent reporting suggests, the Israeli military has already relied on error-prone AI programs to make lethal decisions. Drones have long been a divisive topic, with critics arguing that soldiers become more complacent when they are not directly in the line of fire or do not witness the consequences of their actions. Image-recognition AI is notorious for making mistakes, and we are quickly heading toward a world in which killer drones fly around striking imprecise targets.

Schmidt and Wang’s paper makes many assumptions, chief among them that AI will soon be “superintelligent,” able to perform most tasks as well as humans. That is a big assumption, as cutting-edge “reasoning” models continue to produce significant hallucinations and companies are being flooded with shoddy AI-assisted work. These models are rough imitations of humans that often behave in unpredictable and strange ways.

In a sense, Schmidt and Wang are selling the world on their vision of the problem and their solutions to it. If AI is going to be all-powerful and dangerous, governments should go to them and buy their products, because they are the responsible actors. Similarly, OpenAI’s Sam Altman has been criticized for making lofty claims about the risks of AI, which some say is an attempt to influence policy in Washington and consolidate power. It amounts to saying, “AI is so powerful it could destroy the world, but we have a safe version we’re happy to sell you.”

Schmidt’s warning is unlikely to have much impact, as President Trump has rolled back the Biden-era guardrails around AI safety and is pushing for the United States to become a dominant force in AI. Last November, a congressional commission proposed the very kind of Manhattan Project that Schmidt warns against, and it is easy to see the idea gaining traction as the likes of Sam Altman and Elon Musk gain greater influence in Washington. The paper warns that if this continues, countries such as China may retaliate by intentionally degrading models or attacking physical infrastructure. The threat is not unheard of: China has infiltrated major U.S. technology companies like Microsoft, and others, including Russia, have reportedly used freighters to damage undersea fiber-optic cables. Of course, we would do the same to them. It is all mutual.

It is not clear how the world would ever agree to stop building these weapons. In that sense, the idea that countries could sabotage one another’s AI projects in the name of defense might actually be a good thing.
