Experiment 3.

Bibliographic Details
Main Author: Lalit Pandey (author)
Other Authors: Donsuk Lee (author), Samantha M. W. Wood (author), Justin N. Wood (author)
Published: 2024
Description
Summary: (A) Contrastive Learning Through Time (CLTT) model. Each image is passed through a ResNet backbone, preserving the temporal order of images. Encoded features are aligned in the feature space using a temporal learning window of 3 frames. This window mimics the spike-timing-dependent plasticity learning window of biological visual systems (~300 ms). (B) View-invariant recognition performance of newborn chicks and SimCLR-CLTT models. We evaluated two architecture sizes (4-layer and 10-layer) across the four rearing conditions presented to the chicks. The red horizontal line shows the chicks' performance. The CNNs showed substantial learning gains over untrained CNN performance (untrained 4-layer CNN = 52.5%; untrained 10-layer CNN = 60.1%), indicating that CNNs can leverage time as a teaching signal to learn in impoverished environments. Error bars represent the standard error of model performance across validation folds.
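
To illustrate how a temporal learning window can be turned into a contrastive objective, the sketch below treats frames that fall within a 3-frame window of each other as positive pairs and all other frames in the temporally ordered batch as negatives. The encoder, projection-head shapes, temperature, and exact positive-pair definition are illustrative assumptions, not the authors' released SimCLR-CLTT implementation.

```python
# Minimal sketch of a CLTT-style temporal contrastive loss.
# Assumptions: embeddings arrive in temporal order, one frame per row;
# the window size, temperature, and positive-pair rule are illustrative.
import torch
import torch.nn.functional as F

def cltt_loss(features: torch.Tensor, window: int = 3, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive-through-time loss over a temporally ordered batch.

    features: (T, D) embeddings of T consecutive frames, in temporal order.
    Frames whose temporal distance is less than `window` are treated as
    positives; all other frames in the batch serve as negatives.
    """
    z = F.normalize(features, dim=1)                      # unit-norm embeddings
    sim = z @ z.t() / temperature                          # (T, T) similarity matrix
    T = z.size(0)
    idx = torch.arange(T, device=z.device)
    dist = (idx.unsqueeze(0) - idx.unsqueeze(1)).abs()     # pairwise temporal distance
    pos_mask = (dist > 0) & (dist < window)                # temporal neighbors, excluding self
    # Exclude self-similarity from the softmax denominator.
    self_mask = torch.eye(T, dtype=torch.bool, device=z.device)
    logits = sim.masked_fill(self_mask, float('-inf'))
    log_prob = F.log_softmax(logits, dim=1)
    # Average log-probability of the positives for each anchor frame.
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss.mean()

# Example usage with placeholder embeddings standing in for ResNet-encoded frames.
frames = torch.randn(8, 128)      # 8 consecutive frames, 128-d embeddings
print(cltt_loss(frames).item())
```

The design choice this sketch highlights is that temporal proximity replaces data augmentation as the source of positive pairs, so the network learns view-invariant features simply from the order in which views arrive.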