Open Access System for Information Sharing

Full metadata record
Files in This Item:
There are no files associated with this item.
dc.contributor.author: KIM, SEYOUNG
dc.date.accessioned: 2020-04-14T01:53:00Z
dc.date.available: 2020-04-14T01:53:00Z
dc.date.created: 2020-04-13
dc.date.issued: 2018-06-19
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/103417
dc.description.abstract: We report a capacitor-based cross-point array that can be used to train analog-based Deep Neural Networks (DNNs), fabricated with trench capacitors in 14nm technology. The fundamental DNN functionalities of multiply-accumulate and weight-update are demonstrated. We also demonstrate the best symmetry and linearity ever reported for an analog cross-point array system. For DNNs, the capacitor leakage does not impact learning accuracy even without any refresh cycle, as the weights are continuously updated during training. This makes the capacitor an ideal candidate for neural network training. We also discuss the scalability of this array using optimized low-leakage DRAM technology.
dc.publisher: Institute of Electrical and Electronics Engineers
dc.relation.isPartOf: 2018 IEEE Symposium on VLSI Technology
dc.title: Capacitor-based Cross-point Array for Analog Neural Network with Record Symmetry and Linearity
dc.type: Conference
dc.type.rims: CONF
dc.identifier.bibliographicCitation: 2018 IEEE Symposium on VLSI Technology
dc.citation.conferenceDate: 2018-06-18
dc.citation.conferencePlace: US
dc.citation.title: 2018 IEEE Symposium on VLSI Technology
dc.contributor.affiliatedAuthor: KIM, SEYOUNG
dc.description.journalClass: 1
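
The abstract describes two operations performed on the capacitor-based cross-point array, multiply-accumulate and outer-product weight update, and notes that capacitor leakage is tolerated because the weights are continuously rewritten during training. The following is a minimal behavioral sketch of that idea in Python, not the paper's implementation; the class and parameter names (CapacitorCrossPointArray, leak_time_constant, and so on) are illustrative assumptions.

```python
import numpy as np

# Behavioral sketch: weights stored as capacitor charges, analog
# multiply-accumulate, outer-product updates, and charge leakage
# modeled as exponential decay between steps. Names are illustrative.

class CapacitorCrossPointArray:
    def __init__(self, n_rows, n_cols, leak_time_constant=50.0):
        self.w = np.zeros((n_rows, n_cols))   # normalized capacitor charges
        self.tau = leak_time_constant         # leakage time constant (a.u.)

    def leak(self, dt):
        """Charge decays between operations (no refresh cycle)."""
        self.w *= np.exp(-dt / self.tau)

    def mac(self, x):
        """Analog multiply-accumulate: column sums of x_i * w_ij."""
        return x @ self.w

    def update(self, x, delta):
        """Outer-product weight update, as in gradient-based training."""
        self.w += np.outer(x, delta)

# Toy training loop: each step rewrites the weights, so slow leakage
# is absorbed by the update itself rather than by a refresh cycle.
array = CapacitorCrossPointArray(4, 3)
x = np.random.rand(4)
for _ in range(10):
    y = array.mac(x)          # forward pass
    delta = -0.1 * y          # stand-in for a backpropagated error
    array.update(x, delta)    # weight update
    array.leak(dt=1.0)        # charge decay until the next step
```

The toy loop illustrates the claim in the abstract: because every training step updates the stored weights, capacitor leakage does not accumulate into an accuracy loss and no separate refresh cycle is needed.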
