Open Access System for Information Sharing

Article
Cited 0 times in Web of Science and 0 times in Scopus
Full metadata record
Files in This Item:
There are no files associated with this item.
DC Field / Value
dc.contributor.author: Kim, Innyoung
dc.contributor.author: You, Donghyun
dc.date.accessioned: 2024-02-28T02:20:08Z
dc.date.available: 2024-02-28T02:20:08Z
dc.date.created: 2024-02-27
dc.date.issued: 2024-01
dc.identifier.issn: 2524-7905
dc.identifier.uri: https://oasis.postech.ac.kr/handle/2014.oak/120451
dc.description.abstract: A method using deep reinforcement learning (DRL) to non-iteratively generate an optimal mesh for an arbitrary blade passage is developed. Despite automation in mesh generation using either an empirical approach or an optimization algorithm, repeated tuning of meshing parameters is still required for a new geometry. The method developed herein employs a DRL-based multi-condition optimization technique to define optimal meshing parameters as a function of the blade geometry, attaining automation, minimal human intervention, and computational efficiency. The meshing parameters are optimized by training an elliptic mesh generator that generates a structured mesh for a blade passage with an arbitrary blade geometry. During each episode of the DRL process, the mesh generator is trained to produce an optimal mesh for a randomly selected blade passage by updating the meshing parameters until the mesh quality, as measured by the ratio of determinants of the Jacobian matrices and the skewness, reaches the highest level. Once training is completed, the mesh generator creates an optimal mesh for a new arbitrary blade passage in a single attempt, without a repetitive process of parameter tuning for mesh generation from scratch. The effectiveness and robustness of the proposed method are demonstrated through the generation of meshes for various blade passages. (A minimal sketch of these mesh-quality measures is given after the metadata record.)
dc.language: English
dc.publisher: Springer Science and Business Media LLC
dc.relation.isPartOf: JMST Advances
dc.title: Fluid dynamic control and optimization using deep reinforcement learning
dc.type: Article
dc.identifier.doi: 10.1007/s42791-024-00067-z
dc.type.rims: ART
dc.identifier.bibliographicCitation: JMST Advances, v.294
dc.identifier.wosid: 001097312500001
dc.citation.title: JMST Advances
dc.citation.volume: 294
dc.contributor.affiliatedAuthor: Kim, Innyoung
dc.contributor.affiliatedAuthor: You, Donghyun
dc.description.journalClass: 1
dc.description.journalClass: 1
dc.description.isOpenAccess: N
dc.type.docType: Article
dc.subject.keywordPlus: GRID GENERATION
dc.subject.keywordPlus: QUALITY METRICS
dc.subject.keywordPlus: OPTIMIZATION
dc.subject.keywordPlus: BOUNDARY
dc.subject.keywordPlus: ERROR
dc.subject.keywordPlus: FLOW
dc.subject.keywordAuthor: Mesh generation
dc.subject.keywordAuthor: Multi-condition optimization
dc.subject.keywordAuthor: Deep reinforcement learning
dc.subject.keywordAuthor: Structured mesh generation
dc.subject.keywordAuthor: Blade passage
dc.relation.journalWebOfScienceCategory: Computer Science, Interdisciplinary Applications
dc.relation.journalWebOfScienceCategory: Physics, Mathematical
dc.description.journalRegisteredClass: scie
dc.relation.journalResearchArea: Computer Science
dc.relation.journalResearchArea: Physics
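
The abstract evaluates mesh quality by the ratio of determinants of the Jacobian matrices and by the skewness, and the generator is trained until these measures are as good as possible. The sketch below shows, for a 2D structured mesh stored as node-coordinate arrays X and Y, one common way such measures can be computed and folded into a single reward signal. It is a minimal illustration under those assumptions; the array layout, the function names, and the simple reward combination are not taken from the paper.

```python
import numpy as np


def jacobian_determinant_ratio(X, Y):
    """Per-cell ratio of the smallest to the largest corner Jacobian determinant.

    X, Y are node coordinates of shape (ni, nj); the mesh is assumed to be
    positively oriented and non-degenerate. Returns an (ni-1, nj-1) array;
    values near 1 indicate nearly parallelogram cells, values <= 0 indicate
    folded (tangled) cells.
    """
    def cross_z(ux, uy, vx, vy):
        # z-component of the cross product of the two edge vectors at a corner
        return ux * vy - uy * vx

    # Coordinates of the four corners of every cell (counter-clockwise).
    x00, y00 = X[:-1, :-1], Y[:-1, :-1]
    x10, y10 = X[1:, :-1], Y[1:, :-1]
    x11, y11 = X[1:, 1:], Y[1:, 1:]
    x01, y01 = X[:-1, 1:], Y[:-1, 1:]

    dets = np.stack([
        cross_z(x10 - x00, y10 - y00, x01 - x00, y01 - y00),
        cross_z(x11 - x10, y11 - y10, x00 - x10, y00 - y10),
        cross_z(x01 - x11, y01 - y11, x10 - x11, y10 - y11),
        cross_z(x00 - x01, y00 - y01, x11 - x01, y11 - y01),
    ])
    return dets.min(axis=0) / dets.max(axis=0)


def equiangular_skewness(X, Y):
    """Per-cell equiangular skewness: 0 for perfect right angles,
    approaching 1 as a cell degenerates."""
    corners = [
        (X[:-1, :-1], Y[:-1, :-1]),
        (X[1:, :-1], Y[1:, :-1]),
        (X[1:, 1:], Y[1:, 1:]),
        (X[:-1, 1:], Y[:-1, 1:]),
    ]
    angles = []
    for k in range(4):
        px, py = corners[k]
        ax, ay = corners[(k + 1) % 4]   # next corner (counter-clockwise)
        bx, by = corners[(k - 1) % 4]   # previous corner
        u = np.stack([ax - px, ay - py])
        v = np.stack([bx - px, by - py])
        cos_a = (u * v).sum(axis=0) / (np.linalg.norm(u, axis=0) *
                                       np.linalg.norm(v, axis=0))
        angles.append(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))
    theta = np.stack(angles)
    # Deviation of the worst corner angle from the ideal 90 degrees of a quad.
    return np.maximum((theta.max(axis=0) - 90.0) / 90.0,
                      (90.0 - theta.min(axis=0)) / 90.0)


def mesh_quality_reward(X, Y, w_skew=1.0):
    """Illustrative scalar reward: large when even the worst cell is well shaped."""
    return float(jacobian_determinant_ratio(X, Y).min()
                 - w_skew * equiangular_skewness(X, Y).max())
```

In a training loop of the kind the abstract describes, such a scalar could serve as the reward a DRL agent maximizes while proposing meshing parameters for the elliptic generator on a randomly sampled blade passage; the generator itself and the agent architecture are outside the scope of this sketch.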

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

YOU, Donghyun (유동현)
Department of Mechanical Engineering
