Results on CIFAR-10 & CIFAR-100.
Saved in:

| Main author: | |
|---|---|
| Other authors: | |
| Published in: | 2025 |
| Subjects: | |
| Abstract: | We introduce PiCCL (Primary Component Contrastive Learning), a self-supervised contrastive learning framework that uses a multiplex Siamese network consisting of many identical branches, rather than two, to maximize learning efficiency. PiCCL is simple and lightweight: it does not rely on asymmetric networks, intricate pretext tasks, hard-to-compute loss functions, or multimodal data, all of which are common in multiview contrastive learning frameworks and can hinder performance, simplicity, generalizability, and explainability. PiCCL obtains multiple positive samples by applying the same image-augmentation paradigm to the same image numerous times; the network loss is computed with a custom-designed loss function named PiCLoss (Primary Component Loss), which exploits PiCCL's unique structure while remaining computationally lightweight. To demonstrate its strength, we benchmarked PiCCL against various state-of-the-art self-supervised algorithms on multiple datasets, including CIFAR-10, CIFAR-100, and STL-10. PiCCL achieved top performance in most of our tests, with top-1 accuracies of 94%, 72%, and 97% on the three datasets, respectively. Where PiCCL particularly excels is in small-batch learning scenarios: when tested on STL-10 with a batch size of 8, PiCCL still achieved 93% accuracy, outperforming the competition by about 3 percentage points. |
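Since the abstract describes the method only at a high level, the following is a minimal, hypothetical PyTorch sketch of the multi-view setup it outlines: N augmented views of each image pass through one shared (Siamese) encoder, and every other view of the same image is treated as a positive. PiCLoss itself is not specified in the abstract, so a generic multi-positive InfoNCE objective stands in for it here; `multi_view_batch`, `num_views`, and `temperature` are illustrative assumptions, not names from the paper.

```python
import torch
import torch.nn.functional as F

def multi_view_batch(images, augment, num_views=4):
    # Apply the same stochastic augmentation paradigm to the same images
    # `num_views` times, yielding N positive views per image (the abstract's
    # "many identical branches rather than 2").
    return torch.cat([augment(images) for _ in range(num_views)], dim=0)

def multi_positive_info_nce(z, num_views, temperature=0.5):
    # Stand-in for PiCLoss (assumption): a multi-positive InfoNCE loss.
    # z: (num_views * batch, dim) embeddings from one shared Siamese encoder.
    z = F.normalize(z, dim=1)
    n = z.size(0)
    batch = n // num_views
    sim = (z @ z.t()) / temperature            # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))          # exclude self-comparisons
    # Views are concatenated batch-by-batch, so row i belongs to image i % batch.
    img_id = torch.arange(n, device=z.device) % batch
    pos_mask = img_id.unsqueeze(0) == img_id.unsqueeze(1)
    pos_mask.fill_diagonal_(False)
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Average log-likelihood over each anchor's (num_views - 1) positives.
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1)
    return loss.mean()
```

With `num_views=2` this reduces to a familiar two-branch SimCLR-style objective; the abstract's premise is that using many identical branches extracts more training signal per image, which may help explain the reported robustness at a batch size of 8.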