Groundbreaking Study Shows AI Neural Network Capturing Key Aspect of Human Intelligence
ICARO Media Group
In a groundbreaking development, scientists have announced that neural networks can now emulate human thought processes more closely than ever before. The finding, revealed in a study published in the journal Nature on October 25, marks a turning point in a long-standing debate within cognitive science over how best to model the human mind with computers.
For decades, a group of cognitive scientists argued that neural networks, a form of artificial intelligence (AI), could not accurately model the human mind due to their architectural limitations. However, the latest research demonstrates that with training, neural networks can indeed acquire a critical aspect of human intelligence that was previously thought to be unattainable by this form of AI.
According to Brenden Lake, co-author of the study and an assistant professor of psychology and data science at New York University, "Our work here suggests that this critical aspect of human intelligence... can be acquired through practice using a model that's been dismissed for lacking those abilities."
Neural networks loosely mimic the structure of the human brain, with interconnected information-processing nodes arranged in hierarchical layers. Historically, however, they failed to behave like the human mind because they lacked "systematic compositionality": the ability to combine known concepts in novel ways.
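To make that architecture concrete, here is a minimal sketch of a network of nodes arranged in hierarchical layers. It is an illustrative, untrained two-layer feedforward net in Python, not the model from the study; the layer sizes and the ReLU activation are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """One layer of nodes: a connection weight for every input-output pair."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

# Hierarchical layers: each layer's outputs become the next layer's inputs.
W1, b1 = layer(4, 8)   # 4 input nodes -> 8 hidden nodes
W2, b2 = layer(8, 3)   # 8 hidden nodes -> 3 output nodes

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)  # each node sums weighted inputs, then fires (ReLU)
    return hidden @ W2 + b2              # outputs are weighted sums of the hidden nodes

print(forward(rng.normal(size=4)))       # three output activations from random weights
```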
The study focused on training AI models and human volunteers to understand a made-up language comprising words such as "dax" and "wif." These words either corresponded to colored dots or dictated the order in which dots appeared. Participants were required to figure out the underlying grammar rules that determined the dot sequences.
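To give a feel for the task, here is a toy interpreter for a language of that kind. It is a guess at the flavor of the grammar, not the paper's actual rules: "dax" and "wif" come from the article, while the color assignments and the hypothetical function word "fep" are invented for illustration.

```python
# Toy made-up language: some pseudowords name a single colored dot (primitives),
# while others act as functions over the dot sequence produced so far.
PRIMITIVES = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}

def interpret(utterance):
    """Map a sequence of pseudowords to the dot sequence it produces."""
    dots = []
    for word in utterance.split():
        if word in PRIMITIVES:
            dots.append(PRIMITIVES[word])  # word stands for one colored dot
        elif word == "fep":
            dots.reverse()                 # word is a function over the whole sequence
        else:
            raise ValueError(f"unknown word: {word}")
    return dots

print(interpret("dax wif fep"))  # ['GREEN', 'RED']
```

In this toy version, the error the human participants kept making corresponds to treating "fep" as just another entry in PRIMITIVES instead of an operation on the whole sequence.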
The human participants achieved approximately 80% accuracy in producing the correct dot sequences. When they made errors, they consistently misinterpreted a word as representing a single dot rather than a function that shuffled the entire dot sequence.
After testing several AI models, researchers identified a method called meta-learning for compositionality (MLC), which allowed a neural network to practice applying different sets of rules to newly learned words while receiving feedback on its correctness. The MLC-trained neural network performed at or above the level of humans in these tests, even making similar errors when data on human mistakes was incorporated.
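MLC's training is episodic: on each practice round the model faces a freshly randomized set of word meanings, must apply them on the spot, and is told whether it was right. The sketch below shows only that episode structure, with a trivial lookup standing in for the neural network; the extra words, colors, and helper names are illustrative assumptions, not details from the paper.

```python
import random

random.seed(0)
COLORS = ["RED", "GREEN", "BLUE"]
WORDS = ["dax", "wif", "lug", "zup"]

def sample_episode():
    """One episode: a freshly randomized word->dot mapping, i.e. a new 'grammar'."""
    meanings = {w: random.choice(COLORS) for w in WORDS}
    study = [(w, meanings[w]) for w in random.sample(WORDS, 3)]  # worked examples
    query = random.choice(WORDS)                                 # word to generalize to
    return study, query, meanings[query]

def model_predict(study, query):
    """Stand-in for the network: infer the query's meaning from the study
    examples alone. The real MLC model is a trained sequence-to-sequence net."""
    lookup = dict(study)
    return lookup.get(query, random.choice(COLORS))

correct = 0
for _ in range(1000):
    study, query, answer = sample_episode()
    correct += model_predict(study, query) == answer
    # In the real method, this right/wrong feedback would drive a gradient
    # update, teaching the network *how to learn* new words from examples
    # rather than memorizing any single fixed grammar.

print(f"episode accuracy: {correct / 1000:.2%}")
```

Because the word meanings change on every episode, memorizing a single grammar cannot succeed; generalizing from the current episode's examples is the only winning strategy, and that is the habit meta-learning is meant to instill.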
By comparison, AI models from OpenAI, the organization behind ChatGPT, fell significantly behind both MLC and human performance on the dot sequence test. MLC also excelled at additional tasks involving the interpretation of written instructions and sentence meaning.
Although the MLC model has demonstrated impressive success in computing sentence meanings, it still has trouble generalizing to new sentence types. Even so, Paul Smolensky, a professor of cognitive science at Johns Hopkins University and a senior principal researcher at Microsoft Research, acknowledged the study's progress: "Until this paper, we really haven't succeeded in training a network to be fully compositional. That's where I think their paper moves things forward."
While the AI neural network has taken a significant step toward capturing a critical aspect of human intelligence, there is still room for improvement. "That is the central property that makes us intelligent, so we need to nail that," said Smolensky. "This work takes us in that direction but doesn't nail it" - at least not yet.
The study's findings have immense implications for the field of cognitive science and highlight the potential for further advancements in AI technology. As researchers continue to refine neural network models and improve their ability to replicate human cognition, the boundary between machine and human intelligence becomes increasingly blurred.