Open Access System for Information Sharing

Article
Cited 20 times in Web of Science; cited 24 times in Scopus
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field / Value

dc.contributor.author: Choi, Inchang
dc.contributor.author: Baek, Seung-Hwan
dc.contributor.author: Kim, Min H.
dc.date.accessioned: 2022-02-23T12:40:45Z
dc.date.available: 2022-02-23T12:40:45Z
dc.date.created: 2022-02-20
dc.date.issued: 2017-11
dc.identifier.issn: 1057-7149
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/109480
dc.description.abstract: For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.
dc.language: English
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.isPartOf: IEEE Transactions on Image Processing
dc.title: Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning
dc.type: Article
dc.identifier.doi: 10.1109/TIP.2017.2731211
dc.type.rims: ART
dc.identifier.bibliographicCitation: IEEE Transactions on Image Processing, v.26, no.11, pp.5353 - 5366
dc.identifier.wosid: 000407969200020
dc.citation.endPage: 5366
dc.citation.number: 11
dc.citation.startPage: 5353
dc.citation.title: IEEE Transactions on Image Processing
dc.citation.volume: 26
dc.contributor.affiliatedAuthor: Baek, Seung-Hwan
dc.identifier.scopusid: 2-s2.0-85028882898
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.type.docType: Article
dc.subject.keywordAuthor: deinterlacing
dc.subject.keywordAuthor: denoising
dc.subject.keywordAuthor: high-dynamic-range video
dc.subject.keywordAuthor: Image reconstruction
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
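The abstract's deinterlacing step reconstructs missing rows by sparse coding over a jointly learned dictionary pair: the observed interlaced patch is coded over one dictionary, and the coupled dictionary synthesizes the full patch from the shared code. The paper does not specify its sparse solver in this record, so the sketch below uses orthogonal matching pursuit as a stand-in; the dictionaries `D_obs`/`D_full` and the helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with at most k
    atoms (columns) of dictionary D and return the sparse code."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms jointly, then update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha = np.zeros(D.shape[1])
    alpha[support] = coef
    return alpha

def reconstruct_patch(D_obs, D_full, y_obs, sparsity=3):
    """Code the observed interlaced-row patch over D_obs, then synthesize
    the full patch from the coupled dictionary D_full using the shared
    sparse code (the joint-dictionary idea described in the abstract)."""
    alpha = omp(D_obs, y_obs, sparsity)
    return D_full @ alpha
```

In the paper's setting the coupled dictionaries would be trained jointly on corresponding interlaced/full patch pairs, so that one sparse code explains both views; the toy solver above only illustrates the inference half of that pipeline.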

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

BAEK, SEUNG HWAN (백승환)
Dept. of Computer Science & Engineering
