DC Field | Value | Language |
---|---|---|
dc.contributor.author | 이호진 | - |
dc.date.accessioned | 2022-03-29T03:30:52Z | - |
dc.date.available | 2022-03-29T03:30:52Z | - |
dc.date.issued | 2020 | - |
dc.identifier.other | OAK-2015-08982 | - |
dc.identifier.uri | http://postech.dcollection.net/common/orgView/200000334382 | ko_KR |
dc.identifier.uri | https://oasis.postech.ac.kr/handle/2014.oak/111787 | - |
dc.description | Master | - |
dc.description.abstract | To reduce the model complexity of DNNs, pruning has been proposed to remove less important weights. Pruned networks are difficult to accelerate on GPUs due to their high irregularity, so structured pruning was proposed, but its low degree of freedom leads to accuracy loss. Thus, sparsity-aware accelerators, which can exploit fine-grained pruning, have been proposed, and accelerator-aware pruning has also been proposed to improve the performance of these accelerators. However, current accelerator-aware pruning cannot consider both input and weight sparsity. In this thesis, we propose a group pruning algorithm that operates on the Cartesian product of input and weight sparsity, achieving fine-grained-level accuracy with a high degree of freedom. When we applied our algorithm to DNNs, there was little difference in accuracy compared to fine-grained pruning, and an accelerator using our algorithm achieved state-of-the-art speed-up. | - |
dc.language | eng | - |
dc.publisher | Pohang University of Science and Technology (포항공과대학교) | - |
dc.title | A Study on Group Pruning for Sparsity-Aware DNN Accelerator | - |
dc.title.alternative | 딥러닝 가속을 위한 그룹 프루닝 (Group Pruning for Deep Learning Acceleration) | - |
dc.type | Thesis | - |
dc.contributor.college | Graduate School, Department of Electrical Engineering (일반대학원 전자전기공학과) | - |
dc.date.degree | 2020-08 | - |
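The abstract contrasts fine-grained pruning with group pruning, where whole groups of weights are kept or zeroed together so the resulting sparsity pattern stays regular enough for hardware. Below is a minimal, hypothetical sketch of magnitude-based group pruning in NumPy; the grouping scheme, scoring rule, and `keep_ratio` are illustrative assumptions, not the Cartesian-product algorithm the thesis actually proposes.

```python
import numpy as np

def group_prune(weights, group_size=4, keep_ratio=0.5):
    """Zero out entire weight groups with the smallest L2 norms.

    Illustrative sketch only: partitions the flattened weight tensor into
    contiguous groups of `group_size`, scores each group by its L2 norm,
    and keeps the top `keep_ratio` fraction of groups. The thesis's actual
    input/weight-sparsity-aware grouping is not reproduced here.
    """
    flat = weights.reshape(-1, group_size)       # contiguous groups
    norms = np.linalg.norm(flat, axis=1)         # one score per group
    k = max(1, int(len(norms) * keep_ratio))     # number of groups to keep
    threshold = np.sort(norms)[::-1][k - 1]      # k-th largest group norm
    mask = (norms >= threshold)[:, None]         # keep top-k groups whole
    return (flat * mask).reshape(weights.shape)

# Example: 16 weights in 4 groups; the 2 smallest-norm groups are zeroed.
w = np.arange(1.0, 17.0).reshape(4, 4)
pruned = group_prune(w, group_size=4, keep_ratio=0.5)
```

Because pruning decisions are made per group rather than per weight, the surviving nonzeros fall on aligned boundaries, which is what lets a sparsity-aware accelerator skip whole groups of multiply-accumulates.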