Open Access System for Information Sharing


Conference
Cited 0 times in Web of Science / Cited 0 times in Scopus
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field: Value
dc.contributor.author: Kim, Youngsok
dc.contributor.author: JO, JAEEON
dc.contributor.author: JANG, HANHWI
dc.contributor.author: RHU, MINSOO
dc.contributor.author: KIM, HANJUN
dc.contributor.author: Kim, Jangwoo
dc.date.accessioned: 2018-05-11T00:38:38Z
dc.date.available: 2018-05-11T00:38:38Z
dc.date.created: 2017-09-18
dc.date.issued: 2017-10-18
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/42832
dc.description.abstract: Graphics Processing Unit (GPU) vendors have been scaling single-GPU architectures to satisfy the ever-increasing user demands for faster graphics processing. However, as it gets extremely difficult to further scale single-GPU architectures, the vendors are aiming to achieve the scaled performance by simultaneously using multiple GPUs connected with newly developed, fast inter-GPU networks (e.g., NVIDIA NVLink, AMD XDMA). With fast inter-GPU networks, it is now promising to employ split frame rendering (SFR) which improves both frame rate and single-frame latency by assigning disjoint regions of a frame to different GPUs. Unfortunately, the scalability of current SFR implementations is seriously limited as they suffer from a large amount of redundant computation among GPUs. This paper proposes GPUpd, a novel multi-GPU architecture for fast and scalable SFR. With small hardware extensions, GPUpd introduces a new graphics pipeline stage called Cooperative Projection & Distribution (C-PD) where all GPUs cooperatively project 3D objects to 2D screen and efficiently redistribute the objects to their corresponding GPUs. C-PD not only eliminates the redundant computation among GPUs, but also incurs minimal inter-GPU network traffic by transferring object IDs instead of mid-pipeline outcomes between GPUs. To further reduce the redistribution overheads, GPUpd minimizes inter-GPU synchronizations by implementing batching and runahead-execution of draw commands. Our detailed cycle-level simulations with 8 real-world game traces show that GPUpd achieves a geomean speedup of 4.98X in single-frame latency with 16 GPUs, whereas the current SFR implementations achieve only 3.07X geomean speedup which saturates on 4 or more GPUs.
dc.publisher: IEEE/ACM
dc.relation.isPartOf: International Symposium on Microarchitecture
dc.relation.isPartOf: Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO)
dc.title: GPUpd: A Fast and Scalable Multi-GPU Architecture Using Cooperative Projection and Distribution
dc.type: Conference
dc.type.rims: CONF
dc.identifier.bibliographicCitation: International Symposium on Microarchitecture
dc.citation.conferenceDate: 2017-10-14
dc.citation.conferencePlace: US
dc.citation.title: International Symposium on Microarchitecture
dc.contributor.affiliatedAuthor: JO, JAEEON
dc.contributor.affiliatedAuthor: JANG, HANHWI
dc.contributor.affiliatedAuthor: RHU, MINSOO
dc.contributor.affiliatedAuthor: KIM, HANJUN
dc.identifier.scopusid: 2-s2.0-85034065832
dc.description.journalClass: 1
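The abstract describes the C-PD stage at a high level: projection work is divided among the GPUs, and only object IDs (rather than mid-pipeline geometry) are exchanged so each GPU processes just the objects overlapping its screen region. The toy sketch below illustrates that idea; the strip-based screen split, the round-robin division of projection work, and all names and data are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of the Cooperative Projection & Distribution (C-PD)
# idea from the abstract: projection work is split across GPUs, and only
# object IDs (not mid-pipeline geometry) are exchanged, so each GPU runs
# the rest of the pipeline only for objects in its screen region.
# All names and data below are hypothetical.

NUM_GPUS = 4
SCREEN_W = 1024  # screen divided into NUM_GPUS equal vertical strips

# Hypothetical scene: object ID -> projected x-extent on the 2D screen.
scene = {0: (0, 100), 1: (200, 900), 2: (500, 600), 3: (950, 1023)}

def owner_strip(x):
    """GPU that owns the vertical strip containing screen column x."""
    return min(x * NUM_GPUS // SCREEN_W, NUM_GPUS - 1)

# Phase 1 (C-PD): each GPU projects a disjoint share of the objects
# (round-robin here) and forwards only object IDs to the owning GPUs.
inboxes = [set() for _ in range(NUM_GPUS)]
for projecting_gpu in range(NUM_GPUS):
    for obj_id in (o for o in scene if o % NUM_GPUS == projecting_gpu):
        x0, x1 = scene[obj_id]
        for gpu in range(owner_strip(x0), owner_strip(x1) + 1):
            inboxes[gpu].add(obj_id)  # ID-only transfer: minimal traffic

# Phase 2: each GPU runs the full graphics pipeline only on the objects
# it received, removing classic SFR's redundant per-GPU geometry work.
for gpu, ids in enumerate(inboxes):
    print(f"GPU {gpu} renders objects {sorted(ids)}")
```

With this toy scene, the wide object 1 is assigned to every GPU while the narrow objects land only on the strips they overlap, which is the redundancy-elimination the abstract claims over classic SFR.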

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

KIM, HANJUN (김한준)
Dept. of Convergence IT Engineering
