DC Field | Value | Language |
---|---|---|
dc.contributor.author | KIM, SEYOUNG | - |
dc.date.accessioned | 2020-04-14T01:53:00Z | - |
dc.date.available | 2020-04-14T01:53:00Z | - |
dc.date.created | 2020-04-13 | - |
dc.date.issued | 2018-06-19 | - |
dc.identifier.uri | https://oasis.postech.ac.kr/handle/2014.oak/103417 | - |
dc.description.abstract | We report a capacitor-based cross-point array that can be used to train analog-based Deep Neural Networks (DNNs), fabricated with trench capacitors in 14nm technology. The fundamental DNN functionalities of multiply-accumulate and weight-update are demonstrated. We also demonstrate the best symmetry and linearity ever reported for an analog cross-point array system. For DNNs, the capacitor leakage does not impact learning accuracy even without any refresh cycle, as the weights are continuously updated during training. This makes capacitors an ideal candidate for neural network training. We also discuss the scalability of this array using optimized low-leakage DRAM technology. | - |
dc.publisher | Institute of Electrical and Electronics Engineers | - |
dc.relation.isPartOf | 2018 IEEE Symposium on VLSI Technology | - |
dc.title | Capacitor-based Cross-point Array for Analog Neural Network with Record Symmetry and Linearity | - |
dc.type | Conference | - |
dc.type.rims | CONF | - |
dc.identifier.bibliographicCitation | 2018 IEEE Symposium on VLSI Technology | - |
dc.citation.conferenceDate | 2018-06-18 | - |
dc.citation.conferencePlace | US | - |
dc.citation.title | 2018 IEEE Symposium on VLSI Technology | - |
dc.contributor.affiliatedAuthor | KIM, SEYOUNG | - |
dc.description.journalClass | 1 | - |