Eric Schmidt awards $10 million to AI researchers

Investing in the future of AI is nothing new for Eric Schmidt, who has backed dozens of startups like Stability AI, Inflection AI and Mistral AI over the past few years. But with a new $10 million venture aimed at tackling the safety challenges posed by the emerging technology, the former Google (GOOGL) CEO is taking a different approach.
The funding will launch the AI Safety Science program at Schmidt Sciences, a nonprofit founded by Schmidt and his wife Wendy to accelerate scientific breakthroughs. Michael Belinsky, who heads the program, said it emphasizes not just the risks of AI but the fundamental science of safety itself. “That’s what we want to do – academic research to figure out why certain systems are not safe,” he told Observer.
More than twenty researchers have been selected to receive grants of up to $500,000 each from the program, which will also provide participants with computing support and access to AI models. Subsequent funding will track the latest developments in the fast-moving industry. “We don’t want to solve the problems of GPT-2,” said Belinsky, referring to the OpenAI model released in 2019. “We want to solve the problems of the current systems people are actually using.”
Initial grantees include prominent researchers like Yoshua Bengio, a machine learning expert whose contributions to the field have earned him recognition as one of the “godfathers of AI.” Bengio’s project will focus on building risk-mitigation technology into AI systems. Other recipients, such as Zico Kolter, an OpenAI board member who heads Carnegie Mellon University’s machine learning department, will explore phenomena like adversarial transferability, which occurs when attacks developed against one AI model can be effectively applied to other AI models.
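To make the idea concrete: the minimal Python sketch below (a toy illustration, not code from any grantee’s project) shows adversarial transferability on synthetic data. Two small classifiers are trained independently on the same task; a perturbation computed from only the first model’s gradients also degrades the second model’s accuracy. The models, data, and epsilon value are all illustrative assumptions.

```python
# Toy demonstration of adversarial transferability: an input perturbed
# to fool one model often fools an independently trained second model.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_model():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

# Two independently initialized models trained on the same synthetic task.
model_a, model_b = make_model(), make_model()
x = torch.randn(256, 20)
y = (x[:, 0] > 0).long()  # labels depend only on the first feature

for model in (model_a, model_b):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(200):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

# Craft a fast-gradient-sign (FGSM) perturbation against model A only.
x_adv = x.clone().requires_grad_(True)
nn.functional.cross_entropy(model_a(x_adv), y).backward()
x_adv = (x_adv + 0.5 * x_adv.grad.sign()).detach()  # epsilon = 0.5

def accuracy(model, inputs):
    return (model(inputs).argmax(dim=1) == y).float().mean().item()

# If the attack transfers, model B's accuracy also drops, even though the
# perturbation was computed from model A's gradients alone.
print(f"model A: clean {accuracy(model_a, x):.2f} -> adv {accuracy(model_a, x_adv):.2f}")
print(f"model B: clean {accuracy(model_b, x):.2f} -> adv {accuracy(model_b, x_adv):.2f}")
```

Because both models learn roughly the same decision rule, the perturbation that fools one tends to fool the other, which is why transferability makes real-world attacks possible even without access to the target model.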
Another recipient under Schmidt Sciences’ new program is Daniel Kang, an assistant professor at the University of Illinois Urbana-Champaign, who will use the grant to study whether AI agents can carry out cybersecurity attacks. AI’s capabilities in this area have implications beyond malicious use by bad actors, the researcher says. “If AI can perform cyberattacks autonomously, you can also imagine that being the first step toward AI escaping a lab’s control and replicating itself across the wider internet,” Kang told Observer.
Is AI safety falling behind?
Amid Silicon Valley’s craze for AI, some fear that safety has taken a back seat. Earlier this month, the global AI summit in Paris notably dropped safety from its title as tech CEOs and world leaders gathered to tout the technology’s economic potential. “I’m worried that competitive pressures will leave safety behind,” Belinsky said.
Schmidt Sciences’ new program hopes to address some of the barriers slowing down the AI safety research community, such as a lack of quality safety benchmarks, insufficient philanthropic and government funding, and limited academic access to frontier AI models. To bridge the gap between academia and industry, researchers like Kang hope leading AI companies will incorporate safety research breakthroughs as they continue to expand the technology’s capabilities.
“I completely understand the need for speed in this fascinating field,” Kang said. But for makers of frontier models, open communication and accurate reporting should be the bare minimum. “I really hope that the major labs take their responsibilities very seriously and use some of the work that has come out of my lab and other labs to accurately test their models and transparently report what those tests actually say.”