A group of researchers at Harvard University aims to unravel the mystery of how AI learns

AI has been considered a “black box” technology for years: its internal operations remain mysterious even to its creators, since the systems are meant to “learn” on their own. But a group of Harvard researchers is looking to understand exactly how AI learns. At a conference held in San Francisco last week by the Japanese IT services giant NTT, the NTT Group launched the Physics of Artificial Intelligence (PAI) Group, based at the Harvard University Center for Brain Science. Just as physics reveals the laws that govern motion and energy, the group aims to identify the fundamental principles that drive AI learning and reasoning.
“To truly evaluate and solve the black-box paradox of AI, we need to understand it on a psychological and architectural level – how it sees, how it decides, and why,” Hidenori Tanaka, leader of the PAI team, told Observer at last week’s event. Tanaka is also an AI researcher at the Harvard Center for Brain Science.
Currently, AI capabilities are measured through benchmarking, which typically involves testing models against standardized sets of tasks or questions (such as answering scientific questions, identifying images, or playing games). Tanaka believes this method is limited. “We need to go beyond benchmarking. It is reductive to judge AI models purely on raw computing power and how well they solve a handful of tricky problems. It completely misses whatever cognitive depth the models might have,” he said.
More importantly, understanding how AI models work can help humans improve them. “Revealing the root causes of an AI model’s reasoning behavior can play a key role in minimizing bias and hallucinations in upcoming systems,” Kazu Gomi, president and CEO of NTT Research, told Observer at last week’s event.
To achieve this, PAI is building a “model experiment system,” a controlled digital environment in which developers can observe how AI models’ learning and reasoning develop over time. By creating numerous multimodal datasets of images and text spanning physics, chemistry, biology, mathematics, and language, the team is exploring how AI interprets previously unseen concepts and retains information. The PAI program works with AI developers around the world to refine these datasets by gathering insights from real-world experiments.
“The purpose is to be a structured playground for AI systems. Unlike internet data, these are intentionally produced datasets, and each data point serves a unique, predefined function,” Tanaka explained. “Just as drugs target specific neurons in the human body, we are looking at how information triggers responses in AI models at the neuron, or node, level.”
Tanaka holds a Ph.D. in theoretical physics from Harvard. His research focuses on the brain’s computational principles and how they align with the laws of physics.
PAI is a spinoff of NTT Research’s Physics and Informatics Laboratory, founded in 2019, where Tanaka previously led a research group working on machine learning and sustainable data center solutions. The group has published more than 150 papers on AI research. An earlier PAI paper on a neural network pruning algorithm has been cited more than 750 times.
PAI’s core team includes multiple Harvard researchers. It also collaborates with Harvard neuroscientist Venkatesh Murthy; Gautam Reddy, a Princeton professor and former NTT Research scientist; and Surya Ganguli of Stanford. Ganguli is a senior fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), led by AI pioneer Fei-Fei Li, and has co-authored several papers with Tanaka.