Open Access System for Information Sharing


Article
Cited 4 times in Web of Science · Cited 6 times in Scopus

Prior preference learning from experts: Designing a reward with active inference (SCIE, SCOPUS)

Title
Prior preference learning from experts: Designing a reward with active inference
Authors
Shin, Jin Young; Kim, Cheolhyeong; Hwang, Hyung Ju
Date Issued
2022-07
Publisher
ELSEVIER
Abstract
Active inference may be defined as Bayesian modeling of the brain with a biologically plausible model of the agent. Its central idea rests on the free energy principle and the agent's prior preference: an agent chooses actions that lead toward its preferred future observations. In this paper, we claim that active inference can be interpreted through reinforcement learning (RL) algorithms and establish a theoretical connection between the two. We extend the concept of expected free energy (EFE), a core quantity in active inference, and claim that EFE can be treated as a negative value function. Motivated by the concept of prior preference and this theoretical connection, we propose a simple but novel method for learning a prior preference from experts, illustrating that the inverse RL problem can be approached from the new perspective of active inference. Experimental results on prior preference learning demonstrate the feasibility of active inference with EFE-based rewards and its application to an inverse RL problem. (c) 2021 Elsevier B.V. All rights reserved.
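
A minimal illustrative sketch of the abstract's idea, assuming a discrete observation space: the prior preference over observations is estimated from expert trajectories, and the negative risk (KL) term of EFE is used as a reward, mirroring the claim that EFE can be treated as a negative value function. The function names and the restriction to the risk term alone (omitting the ambiguity term of the full EFE) are assumptions for illustration, not the authors' implementation.

import numpy as np

# Hypothetical sketch of prior preference learning from experts.
# learn_prior_preference and efe_reward are illustrative names, not the
# authors' code; only the risk (KL) term of expected free energy is modeled.

def learn_prior_preference(expert_observations, n_obs, smoothing=1.0):
    """Estimate the prior preference p~(o) as the smoothed empirical
    distribution of observations visited by expert trajectories."""
    counts = np.full(n_obs, smoothing)
    for o in expert_observations:
        counts[o] += 1.0
    return counts / counts.sum()

def efe_reward(prior_pref, predicted_obs_dist):
    """Negative risk term of EFE: -KL(q(o) || p~(o)).
    Minimizing EFE then corresponds to maximizing this reward,
    i.e., EFE acts as a negative value function."""
    q = np.asarray(predicted_obs_dist, dtype=float)
    kl = np.sum(q * (np.log(q + 1e-12) - np.log(prior_pref + 1e-12)))
    return -kl

# Toy usage: four possible observations; experts mostly see observation 3.
expert_obs = [3, 3, 2, 3, 3, 3, 1, 3]
p_pref = learn_prior_preference(expert_obs, n_obs=4)
print(efe_reward(p_pref, [0.05, 0.05, 0.10, 0.80]))  # near expert behavior: higher reward
print(efe_reward(p_pref, [0.70, 0.10, 0.10, 0.10]))  # far from expert: lower reward

Policies whose predicted observations match the learned preference receive higher EFE-based reward, which is how the inverse RL problem is recast in active inference terms.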
URI
https://oasis.postech.ac.kr/handle/2014.oak/117906
DOI
10.1016/j.neucom.2021.12.042
ISSN
0925-2312
Article Type
Article
Citation
NEUROCOMPUTING, vol. 492, pp. 508-515, 2022-07
Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.

Related Researcher

Hwang, Hyung Ju (황형주)
Department of Mathematics
