AI Capable of Understanding Related Concepts from a Single Learning Example


Humans have the ability to learn a new concept and then immediately use it to understand related uses of that concept: once children learn to "skip," they understand what it means to "skip twice around the room" or "skip with your hands up."

But are machines capable of this type of thinking? In the late 1980s, the philosophers and cognitive scientists Jerry Fodor and Zenon Pylyshyn argued that artificial neural networks, the engines that drive artificial intelligence and machine learning, are not capable of making these connections, a skill known as "compositional generalization." In the decades since, scientists have developed ways of instilling this ability in neural networks and related technologies, with mixed success, keeping the decades-old debate alive.

Researchers from New York University and Spain's Pompeu Fabra University have now developed a technique, reported in the journal Nature, that advances the ability of tools such as ChatGPT to make compositional generalizations. The technique, Meta-learning for Compositionality (MLC), outperforms existing approaches and matches, and in some cases exceeds, human performance. MLC centers on training the neural networks that drive ChatGPT and related technologies for speech recognition and natural language processing to become better at compositional generalization through practice.

Developers of existing systems, including large language models, have hoped that compositional generalization would emerge from standard training methods, or have designed special-purpose architectures to achieve it. In contrast, MLC shows that explicitly practicing these skills allows such systems to unlock new capabilities, the authors note.

"For 35 years, researchers in cognitive science, artificial intelligence, linguistics, and philosophy have been debating whether neural networks can achieve human-like systematic generalization," says Brendan Lake, an assistant professor in NYU's Center for Data Science and Department of Psychology and one of the authors of the paper. "We have shown for the first time that a simple neural network can mimic or surpass human systematic generalization in a head-to-head comparison."

To explore the possibility of promoting compositional learning in neural networks, the researchers created MLC, a novel learning procedure in which a neural network is continuously updated to improve its skills over a series of episodes. In each episode, MLC receives a new word and is asked to use it compositionally, for example, to take the word "jump" and then create new word combinations, such as "jump twice" or "jump twice to the right." MLC then receives a new episode featuring a different word, and so on, with the network's compositional skills improving each time.
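The episodic setup described above can be illustrated with a small sketch. This is an illustrative toy, not the authors' actual code: the nonsense words, the color meanings, and the modifier grammar below are assumptions chosen for clarity. Each episode introduces fresh primitive words in a study set, then poses queries that require composing them with known modifiers.

```python
import random

# Toy grammar in the spirit of an MLC-style episode (all names and rules
# here are illustrative assumptions, not the paper's training setup).
PRIMITIVE_MEANINGS = ["RED", "GREEN", "BLUE", "YELLOW"]
MODIFIERS = {"twice": 2, "thrice": 3}  # repeat the base meaning n times

def make_episode(rng):
    """Sample one episode: a study set that defines new primitive words,
    plus query items that require composing those words with modifiers."""
    words = rng.sample(["dax", "zup", "wif", "lug"], k=2)
    meanings = rng.sample(PRIMITIVE_MEANINGS, k=2)
    lexicon = dict(zip(words, meanings))
    # Study examples present each new word in isolation.
    study = [(word, [meaning]) for word, meaning in lexicon.items()]
    # Query examples demand compositional use of the new words.
    queries = []
    for word in words:
        for mod, n in MODIFIERS.items():
            queries.append((f"{word} {mod}", [lexicon[word]] * n))
    return study, queries

def interpret(command, lexicon):
    """Ground-truth interpreter used to score a learner's answers."""
    tokens = command.split()
    output = [lexicon[tokens[0]]]
    if len(tokens) > 1:
        output = output * MODIFIERS[tokens[1]]
    return output

# One episode: a learner that infers the lexicon from the study set
# should answer every compositional query correctly.
rng = random.Random(0)
study, queries = make_episode(rng)
inferred_lexicon = {word: meaning[0] for word, meaning in study}
for command, target in queries:
    assert interpret(command, inferred_lexicon) == target
```

In the actual method, a neural network (rather than the rule-based interpreter above) produces the query outputs, and its weights are updated across many such episodes so that composing novel words becomes a practiced skill.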

To test the effectiveness of MLC, Lake, co-director of NYU's Brain, Mind, and Machine Initiative, and Marco Baroni, a researcher at the Catalan Institute for Research and Advanced Studies and a professor in the Department of Translation and Linguistics at Pompeu Fabra University, conducted a series of experiments with human participants that mirrored the tasks given to MLC.

Moreover, rather than learning the meanings of real words, which humans would already know, participants also had to learn the meanings of nonsense words defined by the researchers (for example, "zoop" and "dax") and how to apply them in different ways. MLC performed as well as the human participants, and in some cases outperformed them. MLC and humans also outperformed ChatGPT and GPT-4, which, despite their striking general capabilities, struggled with this learning task.

"Large language models like ChatGPT still struggle with compositional generalization, although they have improved in recent years," says Baroni, a member of the Computational Linguistics and Linguistic Theory research group at Pompeu Fabra University. "But we think MLC can further improve the compositional skills of large language models."

/Public release. This content from the original organization/author(s) may be of a point-in-time nature and may be edited for clarity, style, and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed here are solely those of the author(s).
