Open Access System for Information Sharing

Conference
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field | Value | Language
dc.contributor.author | RHU, MINSOO | -
dc.contributor.author | GIMELSHEIN, NATALIA | -
dc.contributor.author | CLEMONS, JASON | -
dc.contributor.author | ZULFIQAR, ARSLAN | -
dc.contributor.author | KECKLER, STEPHEN | -
dc.date.accessioned | 2018-05-11T04:22:50Z | -
dc.date.available | 2018-05-11T04:22:50Z | -
dc.date.created | 2018-03-29 | -
dc.date.issued | 2016-10-17 | -
dc.identifier.uri | https://oasis.postech.ac.kr/handle/2014.oak/43284 | -
dc.description.abstract | The most widely used machine learning frameworks require users to carefully tune their memory usage so that the deep neural network (DNN) fits into the DRAM capacity of a GPU. This restriction hampers a researcher's flexibility to study different machine learning algorithms, forcing them to either use a less desirable network architecture or parallelize the processing across multiple GPUs. We propose a runtime memory manager that virtualizes the memory usage of DNNs such that both GPU and CPU memory can simultaneously be utilized for training larger DNNs. Our virtualized DNN (vDNN) reduces the average GPU memory usage of AlexNet by up to 89%, OverFeat by 91%, and GoogLeNet by 95%, a significant reduction in the memory requirements of DNNs. Similar experiments on VGG-16, one of the deepest and most memory-hungry DNNs to date, demonstrate the memory efficiency of our proposal. vDNN enables VGG-16 with batch size 256 (requiring 28 GB of memory) to be trained on a single NVIDIA Titan X GPU card containing 12 GB of memory, with an 18% performance loss compared to a hypothetical, oracular GPU with enough memory to hold the entire DNN. | -
dc.publisher | IEEE/ACM | -
dc.relation.isPartOf | IEEE/ACM International Symposium on Microarchitecture | -
dc.relation.isPartOf | Proceedings of the 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) | -
dc.title | vDNN: Virtualized Deep Neural Networks for Scalable, Memory-Efficient Neural Network Design | -
dc.type | Conference | -
dc.type.rims | CONF | -
dc.identifier.bibliographicCitation | IEEE/ACM International Symposium on Microarchitecture | -
dc.citation.conferenceDate | 2016-10-15 | -
dc.citation.conferencePlace | CH | -
dc.citation.title | IEEE/ACM International Symposium on Microarchitecture | -
dc.contributor.affiliatedAuthor | RHU, MINSOO | -
dc.description.journalClass | 1 | -
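
The abstract sketches vDNN's core mechanism: feature maps produced during the forward pass are offloaded from GPU to CPU memory and prefetched back shortly before the backward pass needs them, with transfers overlapped against computation so that both memories are used at once. Below is a minimal CUDA sketch of that offload/prefetch pattern, assuming a toy in-place layer_kernel, a fixed layer count, and double-buffered device memory; it is an illustration of the idea, not the paper's cuDNN-integrated runtime.

// Minimal offload/prefetch sketch of the idea the vDNN abstract describes.
// The layer count, buffer sizes, and layer_kernel are illustrative
// assumptions, not the authors' implementation.
#include <cuda_runtime.h>
#include <cstdio>

constexpr int kLayers = 4;
constexpr int kElems  = 1 << 20;                 // floats per feature map
constexpr size_t kBytes = kElems * sizeof(float);

__global__ void layer_kernel(float* x, int n) {  // stand-in for a layer's work
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] = x[i] * 0.5f + 1.0f;
}

int main() {
    cudaStream_t compute, copy;          // separate streams let the DMA
    cudaStreamCreate(&compute);          // engine move feature maps while
    cudaStreamCreate(&copy);             // the SMs run the next layer

    float* d_buf[2];                     // two GPU buffers, reused in turn
    cudaEvent_t done[2];                 // marks when a buffer's copy ended
    for (int b = 0; b < 2; ++b) {
        cudaMalloc(&d_buf[b], kBytes);
        cudaEventCreate(&done[b]);
    }
    float* h_feat[kLayers];              // pinned host memory: CPU-side
    for (int l = 0; l < kLayers; ++l)    // home for offloaded feature maps
        cudaMallocHost(&h_feat[l], kBytes);

    dim3 grid((kElems + 255) / 256), block(256);

    // Forward: compute layer l, then offload its feature map to the host
    // while layer l+1 computes into the other buffer.
    for (int l = 0; l < kLayers; ++l) {
        int b = l % 2;
        cudaEventSynchronize(done[b]);   // buffer free? (no-op first time)
        layer_kernel<<<grid, block, 0, compute>>>(d_buf[b], kElems);
        cudaStreamSynchronize(compute);  // output ready before offload
        cudaMemcpyAsync(h_feat[l], d_buf[b], kBytes,
                        cudaMemcpyDeviceToHost, copy);
        cudaEventRecord(done[b], copy);
    }
    cudaDeviceSynchronize();

    // Backward: prefetch layer l's map back just before its backward step,
    // overlapping each prefetch with the current layer's computation.
    cudaMemcpyAsync(d_buf[(kLayers - 1) % 2], h_feat[kLayers - 1], kBytes,
                    cudaMemcpyHostToDevice, copy);
    cudaEventRecord(done[(kLayers - 1) % 2], copy);
    for (int l = kLayers - 1; l >= 0; --l) {
        int b = l % 2;
        cudaEventSynchronize(done[b]);   // wait for this layer's prefetch
        if (l > 0) {                     // start the next prefetch early
            cudaMemcpyAsync(d_buf[1 - b], h_feat[l - 1], kBytes,
                            cudaMemcpyHostToDevice, copy);
            cudaEventRecord(done[1 - b], copy);
        }
        layer_kernel<<<grid, block, 0, compute>>>(d_buf[b], kElems);
        cudaStreamSynchronize(compute);  // backward stand-in finishes
    }

    for (int l = 0; l < kLayers; ++l) cudaFreeHost(h_feat[l]);
    for (int b = 0; b < 2; ++b) { cudaFree(d_buf[b]); cudaEventDestroy(done[b]); }
    cudaStreamDestroy(compute);
    cudaStreamDestroy(copy);
    printf("offload/prefetch sketch finished\n");
    return 0;
}

The design point this illustrates is the one the abstract quantifies: by keeping only a small working set of feature maps resident on the GPU and hiding transfer latency behind computation, a 28 GB VGG-16 run (batch size 256) fits on a 12 GB Titan X at an 18% slowdown versus an oracular GPU with unbounded memory.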



Related Researcher

RHU, MINSOO (유민수)
Department of Computer Science & Engineering
