Open Access System for Information Sharing


Article
Cited 15 times in Web of Science · Cited 18 times in Scopus
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field | Value | Language
dc.contributor.author | Kwon S. | -
dc.contributor.author | Go B.-H. | -
dc.contributor.author | Lee J.-H. | -
dc.date.accessioned | 2021-12-03T09:21:30Z | -
dc.date.available | 2021-12-03T09:21:30Z | -
dc.date.created | 2020-07-23 | -
dc.date.issued | 2020-08 | -
dc.identifier.issn | 0167-8655 | -
dc.identifier.uri | https://oasis.postech.ac.kr/handle/2014.oak/107860 | -
dc.description.abstract | We introduce a novel multimodal machine translation model that integrates image features modulated by their captions. Generally, images contain far more information than just their descriptions. Furthermore, in the multimodal machine translation task, feature maps are commonly extracted from a network pre-trained for object recognition, so it is not appropriate to use these feature maps directly. To extract the visual features associated with the text, we design a modulation network based on the textual information from the encoder and the visual information from the pre-trained CNN. However, because multimodal translation data are scarce, overly complicated models could yield poor performance. For simplicity, we apply a feature-wise multiplicative transformation. Our model is thus a trainable module that can be embedded in the architecture of existing multimodal translation models. We verified our model by conducting experiments with the Transformer model on the Multi30k dataset, evaluating translation quality with the BLEU and METEOR metrics. In general, our model improved over a text-based model and other existing models. (C) 2020 Elsevier B.V. All rights reserved. | -
dc.language | English | -
dc.publisher | ELSEVIER | -
dc.relation.isPartOf | PATTERN RECOGNITION LETTERS | -
dc.title | A text-based visual context modulation neural model for multimodal machine translation | -
dc.type | Article | -
dc.identifier.doi | 10.1016/j.patrec.2020.06.010 | -
dc.type.rims | ART | -
dc.identifier.bibliographicCitation | PATTERN RECOGNITION LETTERS, v.136, pp.212 - 218 | -
dc.identifier.wosid | 000553824800003 | -
dc.citation.endPage | 218 | -
dc.citation.startPage | 212 | -
dc.citation.title | PATTERN RECOGNITION LETTERS | -
dc.citation.volume | 136 | -
dc.contributor.affiliatedAuthor | Kwon S. | -
dc.contributor.affiliatedAuthor | Go B.-H. | -
dc.contributor.affiliatedAuthor | Lee J.-H. | -
dc.identifier.scopusid | 2-s2.0-85086641687 | -
dc.description.journalClass | 1 | -
dc.description.journalClass | 1 | -
dc.description.isOpenAccess | N | -
dc.type.docType | Article | -
dc.subject.keywordAuthor | Deep learning | -
dc.subject.keywordAuthor | Machine translation | -
dc.subject.keywordAuthor | Multimodality | -
dc.relation.journalWebOfScienceCategory | Computer Science, Artificial Intelligence | -
dc.description.journalRegisteredClass | scie | -
dc.description.journalRegisteredClass | scopus | -
dc.relation.journalResearchArea | Computer Science | -
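The abstract describes a feature-wise multiplicative transformation: text information from the encoder produces per-channel scales that modulate visual feature maps from a pre-trained CNN. The following is a minimal PyTorch sketch of that idea only; the class name, dimensions, and pooling choice are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class TextModulation(nn.Module):
    """Feature-wise multiplicative modulation of visual features by text.

    Hypothetical sketch: a linear layer maps a text summary vector to one
    multiplicative scale per visual channel, which is broadcast over the
    spatial dimensions of the CNN feature map.
    """

    def __init__(self, text_dim: int, vis_channels: int):
        super().__init__()
        # Project the encoder's text summary to one scale per channel.
        self.scale = nn.Linear(text_dim, vis_channels)

    def forward(self, text_summary: torch.Tensor, vis_feats: torch.Tensor) -> torch.Tensor:
        # text_summary: (batch, text_dim), e.g. mean-pooled encoder states
        # vis_feats:    (batch, channels, H, W) from a pre-trained CNN
        gamma = self.scale(text_summary)           # (batch, channels)
        gamma = gamma.unsqueeze(-1).unsqueeze(-1)  # broadcast over H, W
        return vis_feats * gamma                   # feature-wise product


# Usage with assumed dimensions (512-d text, 2048-channel 7x7 feature map):
mod = TextModulation(text_dim=512, vis_channels=2048)
text = torch.randn(4, 512)
vis = torch.randn(4, 2048, 7, 7)
out = mod(text, vis)
print(out.shape)  # torch.Size([4, 2048, 7, 7])
```

A purely multiplicative transform like this adds few parameters, which matches the abstract's motivation: with scarce multimodal translation data, an overly complicated modulation network could hurt performance.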


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher


이종혁 (LEE, JONG HYEOK)
Grad. School of AI