DC Field | Value | Language |
---|---|---|
dc.contributor.author | 권혁준 | - |
dc.date.accessioned | 2022-03-29T02:48:57Z | - |
dc.date.available | 2022-03-29T02:48:57Z | - |
dc.date.issued | 2021 | - |
dc.identifier.other | OAK-2015-08247 | - |
dc.identifier.uri | http://postech.dcollection.net/common/orgView/200000366536 | ko_KR |
dc.identifier.uri | https://oasis.postech.ac.kr/handle/2014.oak/111052 | - |
dc.description | Master | - |
dc.description.abstract | To improve weight-scheduling efficiency in neural networks with low weight density, this thesis 1) builds a denser network via weight channel-merging to raise scheduling efficiency, and 2) proposes and evaluates a hardware accelerator capable of handling the merged weight channels. | - |
dc.description.abstract | In this thesis, a channel-merging offline scheduling scheme is presented to improve the efficiency of a previous offline scheduler on highly pruned convolutional neural networks (CNNs). In the channel-merging step, two channels in the same layer are merged lane-wise to increase the network's channel-level density. A modified hardware architecture is also presented to handle the merged and scheduled weights. Combined with the zero-skip and outlier-aware scheduling schemes of the previous accelerator, the proposed merging and scheduling method achieves higher lane utilization and speedup. Despite a small area overhead in the proposed hardware, faster computation and reduced memory access make its energy consumption lower than that of the previous hardware. | - |
dc.language | eng | - |
dc.publisher | Pohang University of Science and Technology (POSTECH) | - |
dc.title | A Channel Merging Approach to Control Sparsity in Neural Networks | - |
dc.title.alternative | 채널 병합을 통한 신경망 밀집도 제어 (Controlling Neural Network Density via Channel Merging) | - |
dc.type | Thesis | - |
dc.contributor.college | Graduate School, Department of Electrical Engineering | - |
dc.date.degree | 2021-02 | - |
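The lane-wise channel merge described in the abstract can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the thesis's actual implementation: the function name, the rule that the first channel wins on a lane conflict, and the origin bookkeeping are all assumptions made for the example.

```python
import numpy as np

def merge_channels(ch_a, ch_b):
    """Illustrative lane-wise merge of two pruned weight channels.

    Wherever ch_a holds a zero (pruned) weight, the weight from ch_b is
    taken instead, so the merged channel is denser than either input.
    Lanes where both channels are nonzero would need conflict handling
    in a real scheduler; here ch_a simply takes precedence. An origin
    map records which source channel supplied each lane, which the
    hardware would need to route partial sums back correctly.
    """
    ch_a = np.asarray(ch_a, dtype=float)
    ch_b = np.asarray(ch_b, dtype=float)
    merged = np.where(ch_a != 0, ch_a, ch_b)
    origin = np.where(ch_a != 0, 0, 1)  # 0 = from ch_a, 1 = from ch_b
    return merged, origin

# Two sparse channels from the same layer; the merge fills ch_a's
# pruned lanes with ch_b's surviving weights.
a = [0.5, 0.0, 0.0, 1.2]
b = [0.0, 0.3, 0.0, 0.7]
merged, origin = merge_channels(a, b)
```

With these inputs, `merged` is `[0.5, 0.3, 0.0, 1.2]`: three of four lanes are now occupied, versus two in each input channel, which is the density increase the scheduler exploits.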