Open Access System for Information Sharing

Article
Cited 8 times in Web of Science; cited 11 times in Scopus
Full metadata record
Files in This Item:
There are no files associated with this item.
dc.contributor.author: Kim, Sehyeon
dc.contributor.author: Shin, Dae Youp
dc.contributor.author: Kim, Taekyung
dc.contributor.author: Lee, Sangsook
dc.contributor.author: Hyun, Jung Keun
dc.contributor.author: Park, Sung-Min
dc.date.accessioned: 2022-01-28T05:00:14Z
dc.date.available: 2022-01-28T05:00:14Z
dc.date.created: 2022-01-26
dc.date.issued: 2022-01
dc.identifier.issn: 1424-8220
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/109228
dc.description.abstract: Motion classification can be performed using biometric signals recorded by electroencephalography (EEG) or electromyography (EMG) with noninvasive surface electrodes for the control of prosthetic arms. However, current single-modal EEG- and EMG-based motion classification techniques are limited by the complexity and noise of EEG signals, and by the electrode placement bias and low resolution of EMG signals. We herein propose a novel system of two-dimensional (2D) input-image feature multimodal fusion based on an EEG/EMG-signal transfer learning (TL) paradigm for the detection of hand movements in transforearm amputees. A feature extraction method in the frequency domain of the EEG and EMG signals was adopted to establish a 2D image. The input images were used to train a model based on the convolutional neural network (CNN) algorithm and TL, which requires 2D images as input data. For data acquisition, five transforearm amputees and nine healthy controls were recruited. Compared with conventional models trained on single-modal EEG signals, the proposed multimodal fusion method significantly improved classification accuracy in both the control and patient groups. When the two signals were combined and used in the pretrained model for EEG TL, classification accuracy increased by 4.18–4.35% in the control group and by 2.51–3.00% in the patient group. (A schematic code sketch of this pipeline appears after the metadata record below.)
dc.language: English
dc.publisher: Multidisciplinary Digital Publishing Institute (MDPI)
dc.relation.isPartOf: Sensors
dc.title: Enhanced Recognition of Amputated Wrist and Hand Movements by Deep Learning Method Using Multimodal Fusion of Electromyography and Electroencephalography
dc.type: Article
dc.identifier.doi: 10.3390/s22020680
dc.type.rims: ART
dc.identifier.bibliographicCitation: Sensors, v.22, no.2
dc.identifier.wosid: 000748231200001
dc.citation.number: 2
dc.citation.title: Sensors
dc.citation.volume: 22
dc.contributor.affiliatedAuthor: Kim, Sehyeon
dc.contributor.affiliatedAuthor: Park, Sung-Min
dc.identifier.scopusid: 2-s2.0-85122886087
dc.description.journalClass: 1
dc.description.isOpenAccess: Y
dc.type.docType: Article
dc.subject.keywordPlus: NEURAL-NETWORKS
dc.subject.keywordPlus: MOTOR IMAGERY
dc.subject.keywordPlus: CLASSIFICATION
dc.subject.keywordPlus: PROSTHESIS
dc.subject.keywordPlus: SELECTION
dc.subject.keywordPlus: SIGNALS
dc.subject.keywordAuthor: brain-computer interface (BCI)
dc.subject.keywordAuthor: convolutional neural network (CNN)
dc.subject.keywordAuthor: electroencephalography (EEG)
dc.subject.keywordAuthor: electromyography (EMG)
dc.subject.keywordAuthor: transforearm amputees
dc.subject.keywordAuthor: transfer learning (TL)
dc.relation.journalWebOfScienceCategory: Chemistry, Analytical
dc.relation.journalWebOfScienceCategory: Engineering, Electrical & Electronic
dc.relation.journalWebOfScienceCategory: Instruments & Instrumentation
dc.description.journalRegisteredClass: scie
dc.description.journalRegisteredClass: scopus
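
As a reading aid, the following is a minimal Python sketch of the pipeline the abstract describes: per-channel frequency-domain features of EEG and EMG windows are stacked into a single 2D image, and an ImageNet-pretrained CNN is fine-tuned on those images (transfer learning). Every concrete detail here is an assumption rather than the authors' code: the channel counts, sampling rate, STFT parameters, the signals_to_image helper, and the choice of ResNet-18 are illustrative only.

```python
import numpy as np
from scipy.signal import spectrogram
import torch
import torch.nn as nn
from torchvision import models

def signals_to_image(eeg, emg, fs=1000):
    """Stack per-channel log-power spectrograms of EEG and EMG windows
    into one 2D 'image' (hypothetical layout; the paper's exact
    frequency-domain feature map is not reproduced here)."""
    rows = []
    for ch in np.vstack([eeg, emg]):                  # (channels, samples)
        _, _, sxx = spectrogram(ch, fs=fs, nperseg=128, noverlap=64)
        rows.append(np.log1p(sxx))                    # log power: (freq, time)
    img = np.concatenate(rows, axis=0)                # channels stacked along the freq axis
    img = (img - img.min()) / (np.ptp(img) + 1e-8)    # scale to [0, 1]
    return img.astype(np.float32)

# Transfer learning: start from an ImageNet-pretrained CNN, swap the input
# layer for 1-channel images and the head for the movement classes.
n_classes = 6                                         # assumed number of wrist/hand movements
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
net.fc = nn.Linear(net.fc.in_features, n_classes)

# Forward pass on one fused EEG (8 ch) + EMG (4 ch) window of 2 s at 1 kHz.
img = signals_to_image(np.random.randn(8, 2000), np.random.randn(4, 2000))
x = torch.from_numpy(img)[None, None]                 # (batch, channel, H, W)
net.eval()
with torch.no_grad():
    logits = net(x)                                   # (1, n_classes) movement scores
```

In this reading, multimodal fusion amounts to concatenating the EEG and EMG spectrogram rows before the network sees them, which is one plausible way a pretrained 2D-image model can consume both signals at once; fine-tuning only the swapped layers and head would be the usual TL regime.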

Related Researcher

Park, Sung-Min (박성민)
Dept. of Convergence IT Engineering
