Open Access System for Information Sharing


Article
Cited 22 times in Web of Science and 24 times in Scopus
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field: Value
dc.contributor.author: Kim, Beomseok
dc.contributor.author: Son, Hyeongseok
dc.contributor.author: Park, Seong-Jin
dc.contributor.author: Cho, Sunghyun
dc.contributor.author: Lee, Seungyong
dc.date.accessioned: 2018-12-13T07:42:21Z
dc.date.available: 2018-12-13T07:42:21Z
dc.date.created: 2018-11-12
dc.date.issued: 2018-10
dc.identifier.issn: 0167-7055
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/94494
dc.description.abstract: We propose a novel approach for detecting two kinds of partial blur, defocus and motion blur, by training a deep convolutional neural network. Existing blur detection methods concentrate on designing low-level features, but those features have difficulty in detecting blur in homogeneous regions without enough textures or edges. To handle such regions, we propose a deep encoder-decoder network with long residual skip-connections and multi-scale reconstruction loss functions to exploit high-level contextual features as well as low-level structural features. Another difficulty in partial blur detection is that there are no available datasets with images having both defocus and motion blur together, as most existing approaches concentrate only on either defocus or motion blur. To resolve this issue, we construct a synthetic dataset that consists of complex scenes with both types of blur. Experimental results show that our approach effectively detects and classifies blur, outperforming other state-of-the-art methods. Our method can be used for various applications, such as photo editing, blur magnification, and deblurring.
dc.language: English
dc.publisher: WILEY
dc.relation.isPartOf: COMPUTER GRAPHICS FORUM
dc.title: Defocus and Motion Blur Detection with Deep Contextual Features
dc.type: Article
dc.identifier.doi: 10.1111/cgf.13567
dc.type.rims: ART
dc.identifier.bibliographicCitation: COMPUTER GRAPHICS FORUM, v.37, no.7, pp.277 - 288
dc.identifier.wosid: 000448166700026
dc.citation.endPage: 288
dc.citation.number: 7
dc.citation.startPage: 277
dc.citation.title: COMPUTER GRAPHICS FORUM
dc.citation.volume: 37
dc.contributor.affiliatedAuthor: Park, Seong-Jin
dc.contributor.affiliatedAuthor: Cho, Sunghyun
dc.contributor.affiliatedAuthor: Lee, Seungyong
dc.identifier.scopusid: 2-s2.0-85055429571
dc.description.journalClass: 1
dc.description.wostc: 0
dc.description.isOpenAccess: N
dc.type.docType: Article; Proceedings Paper
dc.subject.keywordPlus: SINGLE IMAGE
dc.subject.keywordPlus: SHAKEN
dc.relation.journalWebOfScienceCategory: Computer Science, Software Engineering
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
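The abstract mentions multi-scale reconstruction loss functions attached to the decoder. As a rough illustration only (not the paper's implementation: the average-pooling, uniform per-scale weights, and squared-error term below are assumptions, and the paper likely trains with a classification loss over blur types), such a loss compares each intermediate decoder output against the ground-truth blur map downsampled to that output's resolution:

```python
def downsample(mask, factor):
    """Average-pool a 2D list by an integer factor (dimensions assumed divisible)."""
    h, w = len(mask), len(mask[0])
    out = []
    for i in range(0, h, factor):
        row = []
        for j in range(0, w, factor):
            block = [mask[i + di][j + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def mse(a, b):
    """Mean squared error between two 2D lists of equal shape."""
    n = len(a) * len(a[0])
    return sum((x - y) ** 2
               for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / n

def multiscale_reconstruction_loss(preds, target, weights=None):
    """Weighted sum of per-scale reconstruction errors.

    `preds` lists decoder output maps at several resolutions; the
    ground-truth map `target` is average-pooled down to match each one.
    """
    weights = weights or [1.0] * len(preds)
    total = 0.0
    for pred, w in zip(preds, weights):
        factor = len(target) // len(pred)
        total += w * mse(pred, downsample(target, factor))
    return total
```

Supervising every decoder scale this way, rather than only the final output, is a common trick for pushing high-level contextual information into coarse layers while the fine layers refine structure, which matches the abstract's stated motivation.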


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
