Open Access System for Information Sharing

Article
Cited 24 times in Web of Science; cited 42 times in Scopus
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field | Value | Language
dc.contributor.author | Lee, S | -
dc.contributor.author | Kim, GJ | -
dc.contributor.author | Choi, S | -
dc.date.accessioned | 2016-04-01T08:37:34Z | -
dc.date.available | 2016-04-01T08:37:34Z | -
dc.date.created | 2009-08-24 | -
dc.date.issued | 2009-01 | -
dc.identifier.issn | 1077-2626 | -
dc.identifier.other | 2009-OAK-0000018227 | -
dc.identifier.uri | https://oasis.postech.ac.kr/handle/2014.oak/28399 | -
dc.description.abstract | This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments (VEs). In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors and identifies the most plausibly attended objects among candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting high computational performance adequate for interactive VEs. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing objects regarded as visually attended by the framework to actual human gaze collected with an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in VEs, without any hardware for head or eye tracking. | -
dc.description.statementofresponsibility | X | -
dc.language | English | -
dc.publisher | IEEE COMPUTER SOC | -
dc.relation.isPartOf | IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS | -
dc.subject | Visual attention | -
dc.subject | saliency map | -
dc.subject | bottom-up feature | -
dc.subject | top-down context | -
dc.subject | virtual environment | -
dc.subject | level of detail | -
dc.subject | GAZE CONTROL | -
dc.subject | ATTENTION | -
dc.subject | MODEL | -
dc.title | Real-Time Tracking of Visually Attended Objects in Virtual Environments and Its Application to LOD | -
dc.type | Article | -
dc.contributor.college | Department of Computer Science and Engineering | -
dc.identifier.doi | 10.1109/TVCG.2008.82 | -
dc.author.google | Lee, S | -
dc.author.google | Kim, GJ | -
dc.author.google | Choi, S | -
dc.relation.volume | 15 | -
dc.relation.issue | 1 | -
dc.relation.startpage | 6 | -
dc.relation.lastpage | 19 | -
dc.contributor.id | 10127373 | -
dc.relation.journal | IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS | -
dc.relation.index | SCI-level, Scopus-indexed paper | -
dc.relation.sci | SCI | -
dc.collections.name | Journal Papers | -
dc.type.rims | ART | -
dc.identifier.bibliographicCitation | IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, v.15, no.1, pp.6-19 | -
dc.identifier.wosid | 000265437500003 | -
dc.date.tcdate | 2019-02-01 | -
dc.citation.endPage | 19 | -
dc.citation.number | 1 | -
dc.citation.startPage | 6 | -
dc.citation.title | IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS | -
dc.citation.volume | 15 | -
dc.contributor.affiliatedAuthor | Choi, S | -
dc.identifier.scopusid | 2-s2.0-59449087750 | -
dc.description.journalClass | 1 | -
dc.description.wostc | 16 | -
dc.description.scptc | 29 | *
dc.date.scptcdate | 2018-05-12 | *
dc.type.docType | Article; Proceedings Paper | -
dc.subject.keywordPlus | GAZE CONTROL | -
dc.subject.keywordPlus | ATTENTION | -
dc.subject.keywordPlus | MODEL | -
dc.subject.keywordAuthor | Visual attention | -
dc.subject.keywordAuthor | saliency map | -
dc.subject.keywordAuthor | bottom-up feature | -
dc.subject.keywordAuthor | top-down context | -
dc.subject.keywordAuthor | virtual environment | -
dc.subject.keywordAuthor | level of detail | -
dc.relation.journalWebOfScienceCategory | Computer Science, Software Engineering | -
dc.description.journalRegisteredClass | scie | -
dc.description.journalRegisteredClass | scopus | -
dc.relation.journalResearchArea | Computer Science | -
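
The core idea summarized in the abstract above, combining a bottom-up object saliency score with a top-down (goal-directed) context score to pick the most plausibly attended object, can be conveyed with a minimal sketch. The object names, score values, and the linear combination rule below are illustrative assumptions, not the paper's implementation:

from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    bottom_up_saliency: float  # stimulus-driven score from the object saliency map, in [0, 1]
    goal_relevance: float      # top-down score inferred from the user's spatial/temporal behavior, in [0, 1]

def attended_object(objects, top_down_weight=0.5):
    # Blend the two cues linearly (an illustrative choice, not the paper's
    # exact formulation) and return the highest-scoring object.
    def score(obj):
        return (1.0 - top_down_weight) * obj.bottom_up_saliency + top_down_weight * obj.goal_relevance
    return max(objects, key=score)

if __name__ == "__main__":
    scene = [
        SceneObject("lamp", bottom_up_saliency=0.9, goal_relevance=0.1),
        SceneObject("door", bottom_up_saliency=0.4, goal_relevance=0.8),
        SceneObject("poster", bottom_up_saliency=0.6, goal_relevance=0.3),
    ]
    # Purely bottom-up, the bright lamp would win; adding top-down context
    # shifts the predicted focus to the task-relevant door.
    print(attended_object(scene).name)  # -> door

In the paper itself this kind of scoring runs in real time on the GPU and drives level-of-detail management in the virtual environment; the sketch only illustrates the scoring idea.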
