DC Field | Value | Language |
---|---|---|
dc.contributor.author | Min, Jung Gyu | - |
dc.contributor.author | Kam, Dongyun | - |
dc.contributor.author | Byun, Younghoon | - |
dc.contributor.author | Park, Gunho | - |
dc.contributor.author | Lee, Youngjoo | - |
dc.date.accessioned | 2024-03-06T01:05:34Z | - |
dc.date.available | 2024-03-06T01:05:34Z | - |
dc.date.created | 2024-02-21 | - |
dc.date.issued | 2023-08-07 | - |
dc.identifier.uri | https://oasis.postech.ac.kr/handle/2014.oak/121287 | - |
dc.description.abstract | Based on recent RISC-V designs, we present in this paper a low-power vector processor architecture for efficiently deploying vision transformer (ViT) models. To fairly measure the processing efficiency of different processor designs with instruction/data cache memories, we first develop an evaluation framework that uses numerous design tools to jointly consider algorithm, architecture, and circuit performance, numerically revealing that the previous CSR-based data compression cannot accelerate pruned transformer models at all due to under-utilization of the vector-extended processing units. We then introduce a series of algorithm-hardware co-optimization approaches that greatly reduce cache misses by applying 1) accuracy-preserving structured ViT pruning, 2) the vertical-CSR (vCSR) data storage format, and 3) vCSR-aware custom memory-access instructions. Experimental results show that the proposed optimization schemes ultimately improve the processing efficiency of pruned transformers on resource-limited computing platforms, e.g., achieving 11 times lower energy consumption when handling the 0.7-pruned ViT model. | - |
dc.language | English | - |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | - |
dc.relation.isPartOf | 2023 IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2023 | - |
dc.relation.isPartOf | Proceedings of the International Symposium on Low Power Electronics and Design | - |
dc.title | Energy-Efficient RISC-V-Based Vector Processor for Cache-Aware Structurally-Pruned Transformers | - |
dc.type | Conference | - |
dc.type.rims | CONF | - |
dc.identifier.bibliographicCitation | 2023 IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2023 | - |
dc.citation.conferenceDate | 2023-08-07 | - |
dc.citation.conferencePlace | AU | - |
dc.citation.title | 2023 IEEE/ACM International Symposium on Low Power Electronics and Design, ISLPED 2023 | - |
dc.contributor.affiliatedAuthor | Min, Jung Gyu | - |
dc.contributor.affiliatedAuthor | Kam, Dongyun | - |
dc.contributor.affiliatedAuthor | Byun, Younghoon | - |
dc.contributor.affiliatedAuthor | Park, Gunho | - |
dc.contributor.affiliatedAuthor | Lee, Youngjoo | - |
dc.description.journalClass | 1 | - |
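The abstract above contrasts conventional CSR compression, which under-utilizes vector lanes on pruned models, with the paper's vertical-CSR (vCSR) storage format. The paper's exact vCSR layout is not given in this record; the sketch below is an illustrative assumption that "vertical" means nonzeros are stored column-wise (CSC-like), so a vector unit can stream consecutive nonzeros of one surviving column instead of gathering scattered per-row entries. The function names and the example matrix are hypothetical.

```python
def to_csr(w):
    """Standard row-wise CSR: (values, column indices, row pointers)."""
    vals, col_idx, row_ptr = [], [], [0]
    for row in w:
        for j, v in enumerate(row):
            if v != 0:
                vals.append(v)
                col_idx.append(j)
        row_ptr.append(len(vals))  # one pointer per finished row
    return vals, col_idx, row_ptr

def to_vcsr(w):
    """Assumed 'vertical' CSR: the same scheme applied column-wise,
    so nonzeros of each surviving column are contiguous in memory."""
    vals, row_idx, col_ptr = [], [], [0]
    rows, cols = len(w), len(w[0])
    for j in range(cols):
        for i in range(rows):
            if w[i][j] != 0:
                vals.append(w[i][j])
                row_idx.append(i)
        col_ptr.append(len(vals))  # one pointer per finished column
    return vals, row_idx, col_ptr

# A small structurally pruned weight matrix (columns 1 and 3 zeroed out).
W = [[1, 0, 2, 0],
     [3, 0, 4, 0],
     [5, 0, 6, 0],
     [7, 0, 8, 0]]
print(to_csr(W))   # nonzeros interleaved row by row
print(to_vcsr(W))  # nonzeros contiguous per surviving column
```

In the column-wise layout, each pruned-away column contributes an empty pointer range and the remaining columns yield long contiguous runs of nonzeros, which is the kind of access pattern a vector load can exploit; row-wise CSR instead interleaves short fragments from every row.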
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.