Open Access System for Information Sharing

Article
Cited 19 times in Web of Science; cited 35 times in Scopus
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field: Value
dc.contributor.author: Kim, B.
dc.contributor.author: Ryu, S.
dc.contributor.author: Lee, G.G.
dc.date.accessioned: 2018-07-17T10:45:55Z
dc.date.available: 2018-07-17T10:45:55Z
dc.date.created: 2017-12-21
dc.date.issued: 2017-05
dc.identifier.issn: 1380-7501
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/92105
dc.description.abstract: This paper presents a system to detect multiple intents (MIs) in an input sentence when only single-intent (SI)-labeled training data are available. To solve the problem, this paper categorizes input sentences into three types and uses a two-stage approach in which each stage attempts to detect MIs in different types of sentences. In the first stage, the system generates MI hypotheses based on conjunctions in the input sentence, then evaluates the hypotheses and selects the best one that satisfies specified conditions. In the second stage, the system applies sequence labeling to mark intents on the input sentence. The sequence labeling model is trained on SI-labeled training data. In experiments, the proposed two-stage MI detection method reduced errors for written and spoken input by 20.54 % and 17.34 %, respectively. © 2016, Springer Science+Business Media New York.
dc.language: English
dc.publisher: SPRINGER
dc.relation.isPartOf: MULTIMEDIA TOOLS AND APPLICATIONS
dc.subject: Computational linguistics
dc.subject: Detection methods
dc.subject: Intent detection
dc.subject: Labeled training data
dc.subject: Sequence Labeling
dc.subject: Spoken dialog systems
dc.subject: Spoken language understanding
dc.subject: Two stage approach
dc.subject: Speech recognition
dc.title: Two-stage multi-intent detection for spoken language understanding
dc.type: Article
dc.identifier.doi: 10.1007/s11042-016-3724-4
dc.type.rims: ART
dc.identifier.bibliographicCitation: MULTIMEDIA TOOLS AND APPLICATIONS, v.76, no.9, pp.11377-11390
dc.identifier.wosid: 000400845000015
dc.citation.endPage: 11390
dc.citation.number: 9
dc.citation.startPage: 11377
dc.citation.title: MULTIMEDIA TOOLS AND APPLICATIONS
dc.citation.volume: 76
dc.contributor.affiliatedAuthor: Ryu, S.
dc.contributor.affiliatedAuthor: Lee, G.G.
dc.identifier.scopusid: 2-s2.0-84978112111
dc.description.journalClass: 1
dc.type.docType: Article
dc.subject.keywordPlus: DIALOG MANAGEMENT
dc.subject.keywordAuthor: Spoken dialog system
dc.subject.keywordAuthor: Spoken language understanding
dc.subject.keywordAuthor: Multi-intent detection
dc.relation.journalWebOfScienceCategory: Computer Science, Information Systems
dc.relation.journalWebOfScienceCategory: Computer Science, Software Engineering
dc.relation.journalWebOfScienceCategory: Computer Science, Theory & Methods
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Engineering
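
The abstract above describes a two-stage multi-intent (MI) detection pipeline built from single-intent (SI)-trained components: stage one splits the sentence at conjunctions to generate MI hypotheses, scores each segment with an SI classifier, and keeps the best hypothesis that satisfies the specified confidence conditions; stage two falls back to a sequence labeler trained on SI-labeled data that marks intents directly on the tokens. The following is a minimal Python sketch of that control flow only; the conjunction lexicon, the threshold, and the toy classify_intent / label_intents helpers are illustrative assumptions, not the authors' models.

```python
# Minimal illustrative sketch (not the authors' code): toy stand-ins for the
# single-intent (SI) classifier and the SI-trained sequence labeler
# described in the abstract.

CONJUNCTIONS = {"and", "then"}   # assumed conjunction lexicon
THRESHOLD = 0.5                  # assumed acceptance condition

# Toy SI classifier: keyword lookup with a fixed confidence score.
KEYWORD_INTENTS = {"weather": "ask_weather", "alarm": "set_alarm", "music": "play_music"}

def classify_intent(text):
    """Return (intent, confidence) for a single-intent segment (toy stand-in)."""
    for word, intent in KEYWORD_INTENTS.items():
        if word in text.lower():
            return intent, 0.9
    return "unknown", 0.1

def label_intents(tokens):
    """Toy SI-trained sequence labeler: tag each token with an intent or 'O'."""
    return [classify_intent(tok)[0] if classify_intent(tok)[1] >= THRESHOLD else "O"
            for tok in tokens]

def stage_one(tokens):
    """Stage 1: split at conjunctions to build MI hypotheses, score each segment
    with the SI classifier, and keep the best hypothesis whose segments all
    satisfy the confidence condition."""
    best = None
    for i, tok in enumerate(tokens):
        if tok.lower() not in CONJUNCTIONS:
            continue
        segments = [" ".join(tokens[:i]), " ".join(tokens[i + 1:])]
        scored = [classify_intent(seg) for seg in segments]
        if all(conf >= THRESHOLD for _, conf in scored):
            total = sum(conf for _, conf in scored)
            if best is None or total > best[1]:
                best = ([intent for intent, _ in scored], total)
    return best[0] if best else None

def detect_intents(sentence):
    """Two-stage multi-intent detection over single-intent-trained components."""
    tokens = sentence.split()
    intents = stage_one(tokens)                  # stage 1: conjunction hypotheses
    if intents:
        return intents
    tags = [t for t in label_intents(tokens) if t != "O"]   # stage 2: sequence labeling
    return tags or [classify_intent(sentence)[0]]

print(detect_intents("check the weather and play some music"))
# -> ['ask_weather', 'play_music']
```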

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
