MCPC learns efficient generative models of sensory inputs.

Bibliographic details
Main author: Gaspard Oliviers (author)
Other authors: Rafal Bogacz (author), Alexander Meulemans (author)
Published: 2024

Abstract: <p><b>a</b>, Distributions learned by MCPC and PC in the linear model of <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1012532#pcbi.1012532.g001" target="_blank">Fig 1a</a> after 375 parameter updates. <b>b</b>,<b>c</b>, Evolution of the weight <i>W</i><sub>0</sub> and the prior mean <i>μ</i> of the linear model during training with MCPC (<b>b</b>) and PC (<b>c</b>). The optimal parameter values are marked with hollow dots. The vector field shows the expected gradient flow of the parameters, and the additional curves show the nullclines along which the update of the weight or of the prior mean is zero (see <a href="http://www.ploscompbiol.org/article/info:doi/10.1371/journal.pcbi.1012532#pcbi.1012532.s001" target="_blank">S1 Appendix</a> for derivations). <b>d</b>, Comparison of samples from models trained with MCPC and PC on MNIST and from a DLGM trained on MNIST. Samples are obtained by ancestral sampling for PC and the DLGM, and by sampling the spontaneous neural activity for MCPC. <b>e</b>, Comparison of masked images reconstructed by MCPC, PC, and a DLGM. Images are reconstructed by computing a maximum a posteriori (MAP) estimate of the missing pixel values.</p>
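As a rough illustration of the two procedures referenced in panels d and e, the NumPy sketch below draws ancestral samples from a small linear Gaussian generative model and fills masked entries of an observation with a MAP estimate obtained by gradient descent on the latent. All parameter values (W0, mu, sigma_z, sigma_x) and the function names ancestral_sample / map_impute are illustrative assumptions for this sketch, not the trained values or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (not trained) parameters of a small linear Gaussian
# generative model: latent z ~ N(mu, sigma_z^2 I), observation
# x = W0 @ z + Gaussian noise with standard deviation sigma_x.
W0 = np.array([[1.0, 0.0],
               [0.5, 0.5],
               [0.0, 1.0],
               [0.2, -0.3]])
mu = np.zeros(2)
sigma_z, sigma_x = 1.0, 0.5


def ancestral_sample(n):
    """Draw n samples by sampling the prior first, then the likelihood
    (the sampling scheme used for PC and the DLGM in panel d)."""
    z = mu + sigma_z * rng.normal(size=(n, 2))
    return z @ W0.T + sigma_x * rng.normal(size=(n, 4))


def map_impute(x, observed, steps=500, lr=0.05):
    """Fill the unobserved entries of x with a MAP estimate (panel e):
    gradient descent on the latent z of the negative log joint density,
    where only the observed entries contribute to the likelihood term."""
    z = mu.copy()
    for _ in range(steps):
        err = np.where(observed, x - W0 @ z, 0.0)          # prediction errors on observed entries
        grad = -W0.T @ err / sigma_x**2 + (z - mu) / sigma_z**2
        z -= lr * grad
    return np.where(observed, x, W0 @ z)                   # keep observed values, predict the rest


x = ancestral_sample(1)[0]
observed = np.array([True, True, False, False])            # mask out the last two entries
print("sample:        ", x)
print("reconstruction:", map_impute(x, observed))
```

For a linear Gaussian model the MAP fill-in also has a closed form; gradient descent on the latent is shown here because it is the kind of iterative inference that predictive-coding-style networks perform.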