Anil Ananthaswamy in Quanta: For researchers interested in the intersection of animal and machine intelligence, “supervised learning” might be limited in what it can reveal about biological brains. Animals — including humans — don’t use labeled data sets to learn. For the most part, they explore the environment on their own, and in doing so, they gain a rich and robust understanding of the world.
Now some computational neuroscientists have begun to explore neural networks that have been trained with little or no human-labeled data. These “self-supervised learning” algorithms have proved enormously successful at modeling human language and, more recently, image recognition. In recent work, computational models of the mammalian visual and auditory systems built using self-supervised learning have shown a closer correspondence to brain function than their supervised-learning counterparts. To some neuroscientists, it seems as if the artificial networks are beginning to reveal some of the actual methods our brains use to learn. More here.
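The core idea behind self-supervised learning is that the training signal is derived from the data itself rather than from human annotation. As a minimal illustrative sketch (not the models discussed in the article; the window size and the choice of a linear predictor are assumptions for illustration), a tiny model can learn the structure of an unlabeled signal by being asked to predict each value from the values that precede it:

```python
# Minimal self-supervised learning sketch: the "label" for each training
# example is generated from the data itself -- here, predicting the next
# value of an unlabeled signal from a window of previous values.
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: a noisy sine wave the learner simply "observes".
t = np.linspace(0, 20, 400)
signal = np.sin(t) + 0.05 * rng.standard_normal(t.size)

# Self-supervision: build (context window -> next value) pairs from the
# signal itself. No human ever labels anything.
window = 8
X = np.stack([signal[i : i + window] for i in range(signal.size - window)])
y = signal[window:]  # the pretext "label" comes from the data

# Fit a linear next-step predictor by least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Low error on the pretext task means the weights have captured the
# local structure of the signal, with no labeled data set involved.
pred = X @ w
mse = np.mean((pred - y) ** 2)
print(f"mean squared error: {mse:.4f}")
```

Real self-supervised systems use the same trick at far larger scale, with pretext tasks such as masked-word or masked-patch prediction in place of next-value prediction.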
Honorary contributors to DesPardes: Adil Khan, Ajaz Ahmed, Anwar Abbas, Arif Mirza, Aziz Ahmed, Bawar Tawfik, Dr. Razzak Ladha, Dr. Syed M. Ali, G. R. Baloch, Haseeb Warsi, Hasham Saddique, Jamil Usman, Javed Abbasi, Jawed Ahmed, Ishaq Saqi, Khalid Sharif, Majid Ahmed, Masroor Ali, Md. Ahmed, Md. Najibullah, Mushtaq Siddiqui, Mustafa Jivanjee, Nusrat Jamshed, Shahbaz Ali, Shahid Hamza, Shahid Nayeem, Shareer Alam, Syed Ali Ammaar Jafrey, Syed Hamza Gilani, Shaheer Alam, Syed Hasan Javed, Syed M. Ali, Tahir Sohail, Tariq Chaudhry, Usman Nazir