Open Access System for Information Sharing

Conference
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field / Value
dc.contributor.author: Lee, Doyup
dc.contributor.author: Kim, Sungwoong
dc.contributor.author: Kim, Ildoo
dc.contributor.author: Cheon, Yeongjae
dc.contributor.author: CHO, MINSU
dc.contributor.author: Han, Wook-Shin
dc.date.accessioned: 2024-05-08T05:54:08Z
dc.date.available: 2024-05-08T05:54:08Z
dc.date.created: 2024-04-16
dc.date.issued: 2022-06-20
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/123226
dc.description.abstract: Consistency regularization on label predictions has become a fundamental technique in semi-supervised learning, but it still requires a large number of training iterations to reach high performance. In this study, we show that consistency regularization restricts the propagation of labeling information because samples with unconfident pseudo-labels are excluded from the model updates. We then propose contrastive regularization, which improves both the efficiency and the accuracy of consistency regularization through well-clustered features of unlabeled data. Specifically, after strongly augmented samples are assigned to clusters by their pseudo-labels, our contrastive regularization updates the model so that features with confident pseudo-labels aggregate features in the same cluster while pushing away features in different clusters. As a result, the information of confident pseudo-labels can be effectively propagated to more unlabeled samples during training through the well-clustered features. On semi-supervised learning benchmarks, our contrastive regularization improves on previous consistency-based methods and achieves state-of-the-art results, especially with fewer training iterations. Our method also shows robust performance on open-set semi-supervised learning, where unlabeled data includes out-of-distribution samples. (An illustrative sketch of this regularizer follows the metadata record below.)
dc.language: English
dc.publisher: IEEE / CVF
dc.relation.isPartOf: International Workshop on Learning with Limited Labelled Data for Image and Video Understanding (L3D-IVU), CVPR 2022
dc.relation.isPartOf: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
dc.title: Contrastive Regularization for Semi-Supervised Learning
dc.type: Conference
dc.type.rims: CONF
dc.identifier.bibliographicCitation: International Workshop on Learning with Limited Labelled Data for Image and Video Understanding (L3D-IVU), CVPR 2022, pp. 3910-3919
dc.citation.conferenceDate: 2022-06-19
dc.citation.conferencePlace: US
dc.citation.endPage: 3919
dc.citation.startPage: 3910
dc.citation.title: International Workshop on Learning with Limited Labelled Data for Image and Video Understanding (L3D-IVU), CVPR 2022
dc.contributor.affiliatedAuthor: Lee, Doyup
dc.contributor.affiliatedAuthor: CHO, MINSU
dc.contributor.affiliatedAuthor: Han, Wook-Shin
dc.description.journalClass: 1
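
The abstract above describes a cluster-level contrastive regularizer driven by confident pseudo-labels: features of strongly augmented samples that share a pseudo-label are pulled together, while features from different pseudo-label clusters are pushed apart. Below is a minimal, hypothetical PyTorch sketch of such a loss, assuming feature vectors from strongly augmented unlabeled samples and class probabilities from a weakly augmented view; the function name contrastive_regularization, the confidence threshold, and the temperature are illustrative assumptions, not the paper's reference implementation.

import torch
import torch.nn.functional as F


def contrastive_regularization(features, pseudo_probs, threshold=0.95, temperature=0.1):
    """Pull together features whose pseudo-labels agree, push apart the rest.

    features:     (N, D) features of strongly augmented unlabeled samples.
    pseudo_probs: (N, C) class probabilities predicted from weakly augmented views.
    Threshold and temperature values are placeholder hyperparameters.
    """
    features = F.normalize(features, dim=1)            # work in cosine-similarity space
    conf, pseudo_labels = pseudo_probs.max(dim=1)       # confidence and hard pseudo-label
    confident = conf >= threshold                       # only confident samples act as anchors

    if confident.sum() < 2:
        return features.new_zeros(())                   # nothing to regularize yet

    sim = features @ features.t() / temperature          # (N, N) pairwise similarities
    # Exclude self-similarity so a sample is never its own positive.
    not_self = ~torch.eye(len(features), dtype=torch.bool, device=features.device)
    # Positives: pairs assigned to the same pseudo-label cluster.
    pos_mask = (pseudo_labels[:, None] == pseudo_labels[None, :]) & not_self

    # Log-softmax over each row, with the self-pair removed from the denominator.
    log_prob = sim - torch.logsumexp(sim.masked_fill(~not_self, float('-inf')), dim=1, keepdim=True)

    pos_count = pos_mask.sum(dim=1)
    valid = confident & (pos_count > 0)                  # confident anchors with at least one positive
    if not valid.any():
        return features.new_zeros(())
    mean_log_prob_pos = (log_prob * pos_mask.float()).sum(dim=1)[valid] / pos_count[valid]
    return -mean_log_prob_pos.mean()

In a consistency-based training loop, a term like this would typically be added to the usual supervised and consistency losses with a weighting coefficient; the threshold and temperature above are placeholders to be tuned, not values taken from the paper.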

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

HAN, Wook-Shin (한욱신)
Grad. School of AI
