Many AIs can only become good at one task, forgetting everything they know if they learn another. A form of artificial sleep could help stop this from happening
Jeremy Hsu in New Scientist: Artificial intelligence can learn and remember how to do multiple tasks by mimicking the way sleep helps us cement what we learned during waking hours.
“There is a huge trend now to bring ideas from neuroscience and biology to improve existing machine learning – and sleep is one of them,” says Maxim Bazhenov at the University of California, San Diego.
Many AIs can only master one set of well-defined tasks – they can’t acquire additional knowledge later on without losing everything they had previously learned. “The issue pops up if you want to develop systems which are capable of so-called lifelong learning,” says Pavel Sanda at the Czech Academy of Sciences. Lifelong learning is how humans accumulate knowledge to adapt to and solve future challenges.
Bazhenov, Sanda and their colleagues trained a spiking neural network – a connected grid of artificial neurons resembling the human brain’s structure – to learn two different tasks without overwriting connections learned from the first task. They accomplished this by interspersing focused training periods with sleep-like periods.
The researchers simulated sleep in the neural network by activating the network’s artificial neurons in a noisy pattern. They also ensured that the sleep-inspired noise roughly matched the pattern of neuron firing during the training sessions – a way of replaying and strengthening the connections learned from both tasks.
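To make the idea concrete, here is a minimal sketch of what such a sleep-like phase could look like. It is not the authors’ code: it stands in a simple one-layer binary network with a Hebbian update for the paper’s spiking model, and the function name `sleep_phase`, the `activity_mean` statistics and all parameter values are illustrative assumptions.

```
# Illustrative sketch only, not the published spiking-network code.
# During "sleep" the network is driven by noisy input whose statistics
# roughly match activity recorded during training, and a Hebbian rule
# strengthens connections between units that fire together.
import numpy as np

rng = np.random.default_rng(0)

def sleep_phase(W, activity_mean, steps=200, lr=1e-3, threshold=0.5):
    """Noisy, unsupervised reactivation of a one-layer network.

    W             -- (n_inputs, n_outputs) weight matrix learned so far
    activity_mean -- per-input firing rates recorded during training,
                     so the sleep noise resembles waking activity
    """
    for _ in range(steps):
        # Sample a binary "spike" pattern with training-like statistics.
        x = (rng.random(W.shape[0]) < activity_mean).astype(float)
        # Thresholded response of the output units.
        y = (x @ W > threshold).astype(float)
        # Hebbian update: strengthen weights between co-active pairs.
        W += lr * np.outer(x, y)
        # Mild decay keeps the weights bounded.
        W *= 1.0 - 1e-4
    return W
```

Because the noise is shaped by activity from both tasks, this replay tends to reinforce whichever connections were used during training rather than favouring only the most recent task.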
The team first tried training the neural network on the first task, then on the second, and finally adding a sleep period at the end. But they quickly realized that this sequence still erased the connections learned from the first task.
Instead, follow-up experiments showed that it was important to “have rapidly alternating sessions of training and sleep” while the AI was learning the second task, says Erik Delanois at the University of California, San Diego. This helped consolidate the connections from the first task that would have otherwise been forgotten.
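In pseudocode terms, the schedule looks roughly like the sketch below: ordinary training on the first task, then short bursts of second-task training alternating with sleep phases. Again this is a hedged illustration under the same assumptions as above (`train_step`, `interleaved_training` and the batch format are hypothetical stand-ins, and `sleep_phase` is the toy function sketched earlier), not the authors’ implementation.

```
# Illustrative schedule: train task 1 fully, then rapidly alternate
# short task-2 training sessions with sleep-like replay phases.
import numpy as np

def train_step(W, batch, lr=1e-2):
    """Placeholder supervised update for one (input, target) pair."""
    x, target = batch
    y = x @ W
    W += lr * np.outer(x, target - y)   # nudge outputs toward the target
    return W

def interleaved_training(W, task1_batches, task2_batches, activity_mean):
    # Phase 1: ordinary training on the first task.
    for batch in task1_batches:
        W = train_step(W, batch)
    # Phase 2: alternate brief task-2 training with sleep, so connections
    # from task 1 keep being replayed instead of being overwritten.
    for batch in task2_batches:
        W = train_step(W, batch)
        W = sleep_phase(W, activity_mean, steps=50)
    return W
```

The key design point is the interleaving itself: a single sleep period tacked on at the end comes too late, whereas frequent short sleep phases repeatedly rehearse the older connections while the new task is still being learned.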
Experiments showed how a spiking neural network trained in this way could enable an AI agent to learn two different foraging patterns while searching for simulated food particles and avoiding poisonous ones. More here.