Open Access System for Information Sharing

Thesis
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field | Value | Language
dc.contributor.author | 이호진 | -
dc.date.accessioned | 2022-03-29T03:30:52Z | -
dc.date.available | 2022-03-29T03:30:52Z | -
dc.date.issued | 2020 | -
dc.identifier.other | OAK-2015-08982 | -
dc.identifier.uri | http://postech.dcollection.net/common/orgView/200000334382 | ko_KR
dc.identifier.uri | https://oasis.postech.ac.kr/handle/2014.oak/111787 | -
dc.description | Master | -
dc.description.abstract | To reduce the model complexity of DNNs, pruning has been proposed to remove less important weights. Pruned networks are difficult to accelerate on GPUs because of their high irregularity, so structured pruning was proposed, but its low degree of freedom leads to accuracy loss. Sparsity-aware accelerators, which can exploit fine-grained pruning, have therefore been proposed, along with accelerator-aware pruning to improve their performance. However, current accelerator-aware pruning cannot consider both input and weight sparsity. In this thesis, we propose a group pruning algorithm that operates on the Cartesian product of input and weight sparsity and achieves fine-grained-level accuracy with a high degree of freedom. When we applied our algorithm to DNNs, there was little difference in accuracy compared to fine-grained pruning, and an accelerator using our algorithm achieved state-of-the-art speed-up. | -
dc.language | eng | -
dc.publisher | 포항공과대학교 (Pohang University of Science and Technology) | -
dc.title | A Study on Group Pruning for Sparsity-Aware DNN Accelerator | -
dc.title.alternative | 딥러닝 가속을 위한 그룹 프루닝 (Group Pruning for Deep Learning Acceleration) | -
dc.type | Thesis | -
dc.contributor.college | 일반대학원 전자전기공학과 (Graduate School, Department of Electrical Engineering) | -
dc.date.degree | 2020-08 | -
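
The abstract above centers on one idea: pruning weights in groups so that the surviving sparsity pattern stays regular enough for a sparsity-aware accelerator to exploit. The thesis's actual algorithm, which operates on the Cartesian product of input and weight sparsity, is not reproduced in this record, so the following is only a minimal NumPy sketch of generic balanced group magnitude pruning; the function name group_prune and the group_size and sparsity parameters are illustrative assumptions, not the author's method.

    import numpy as np

    def group_prune(weights, group_size=4, sparsity=0.75):
        # Hypothetical sketch: balanced group magnitude pruning.
        # Every group of `group_size` consecutive weights keeps the same
        # number of largest-magnitude entries, so the surviving pattern
        # is regular enough for a sparsity-aware accelerator to exploit.
        assert weights.size % group_size == 0
        w = weights.reshape(-1, group_size).copy()
        keep = max(1, round(group_size * (1.0 - sparsity)))
        # Ascending sort by magnitude; the first (group_size - keep)
        # indices in each row are the weights to zero out.
        drop = np.argsort(np.abs(w), axis=1)[:, : group_size - keep]
        np.put_along_axis(w, drop, 0.0, axis=1)
        return w.reshape(weights.shape)

    # Example: prune a 2x8 weight matrix to 75% sparsity in groups of 4.
    w = np.arange(-8.0, 8.0).reshape(2, 8)
    print(group_prune(w))

Because each group retains a fixed number of nonzeros, hardware can fetch surviving weights at a constant rate per group; the abstract's claim is that this kind of constrained pruning can come close to unstructured fine-grained pruning in accuracy while still enabling accelerator speed-up.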
