Open Access System for Information Sharing


Thesis

Confidence Calibration for Recommender Systems and Its Applications

Title
Confidence Calibration for Recommender Systems and Its Applications
Authors
권원빈 (Wonbin Kweon)
Date Issued
2024
Publisher
포항공과대학교 (Pohang University of Science and Technology, POSTECH)
Abstract
Personalized recommendations have a significant impact on daily activities such as shopping, news, search, videos, and music. However, most recommender systems only display the top-scored items to the user, without providing any indication of confidence in the recommendation results. The semantics of the same ranking position differ across users: one user might like their third item with a probability of 30%, whereas another user may like their third item with a probability of 90%. Despite the importance of having a measure of confidence in recommendation results, it has been surprisingly overlooked in the literature compared to recommendation accuracy. In this dissertation, I propose a model calibration framework for recommender systems that estimates accurate confidence in recommendation results based on the learned ranking scores. I then introduce two real-world applications of confidence in recommendations: (1) training a small student model by treating the confidence of a large teacher model as additional learning guidance, and (2) adjusting the number of presented items based on the expected user utility estimated with calibrated probabilities.

Obtaining Calibrated Confidence. I investigate various parametric distributions and propose two parametric calibration methods, namely Gaussian calibration and Gamma calibration. Each proposed method can be seen as a post-processing function that maps the ranking scores of a pre-trained model to well-calibrated preference probabilities, without affecting recommendation performance.

Bidirectional Distillation. I propose the Bidirectional Distillation (BD) framework, in which the teacher and the student collaboratively improve with each other. Specifically, each model is trained with a distillation loss that makes it follow the other's prediction confidence, along with its original loss function.
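As a schematic sketch of this bidirectional objective (ignoring details of the dissertation's method, such as how distillation targets are sampled), two logistic models below are trained on the same synthetic interactions, each with a gradient that combines its original cross-entropy loss and a distillation term pulling it toward the other model's predicted confidence. The data, model sizes, and the weight `lam` are all illustrative assumptions, not the thesis setup.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

# Synthetic implicit-feedback data: features X, binary labels y.
X = rng.normal(size=(500, 8))
y = (sigmoid(X @ rng.normal(size=8)) > rng.random(500)).astype(float)

w_t = rng.normal(size=8) * 0.1   # "teacher" parameters
w_s = np.zeros(8)                # "student" parameters
lam, lr = 0.5, 0.1               # distillation weight, learning rate

def grad(w, target):
    """Gradient of binary cross-entropy toward (possibly soft) targets."""
    return X.T @ (sigmoid(X @ w) - target) / len(X)

for _ in range(500):
    # Soft targets are snapshots: no gradient flows through them.
    p_t, p_s = sigmoid(X @ w_t), sigmoid(X @ w_s)
    # Each model follows the ground truth plus the other's confidence.
    w_t -= lr * (grad(w_t, y) + lam * grad(w_t, p_s))
    w_s -= lr * (grad(w_s, y) + lam * grad(w_s, p_t))
```

Because the distillation term is symmetric, both models are pulled toward a shared, data-consistent solution rather than one model being frozen as in one-way distillation.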
Trained in this bidirectional way, both the teacher and the student are significantly improved compared to when they are trained separately.

Top-Personalized-K Recommendation. I introduce Top-Personalized-K Recommendation (PerK), a new recommendation task aimed at generating a personalized-sized ranking list to maximize individual user satisfaction. PerK estimates the expected user utility by leveraging calibrated interaction probabilities, and then selects the recommendation size that maximizes this expected utility. I expect Top-Personalized-K recommendation to offer enhanced solutions for various real-world recommendation scenarios, given its strong compatibility with existing models.
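A toy sketch of how the two ideas compose, not the dissertation's implementation: ranking scores are mapped to probabilities through a quadratic-logistic calibration map of the kind associated with Gaussian calibration, and the list size is then chosen to maximize a deliberately simple expected utility, here expected number of liked items minus a per-item inspection cost. The coefficients `a`, `b`, `c` (normally fitted on held-out interactions) and the `cost` value are illustrative assumptions; the thesis works with richer utility measures.

```python
import math

def calibrate(scores, a=-0.1, b=1.5, c=-1.0):
    """Map ranking scores to preference probabilities via a
    quadratic-logistic form, sigma(a*s^2 + b*s + c).
    The coefficients here are illustrative, not fitted."""
    return [1 / (1 + math.exp(-(a * s * s + b * s + c))) for s in scores]

def personalized_k(probs, cost=0.3):
    """Choose the list size k maximizing a toy expected utility:
    (expected number of liked items in the top-k) - cost * k."""
    probs = sorted(probs, reverse=True)
    best_k, best_u, u = 0, 0.0, 0.0
    for k, p in enumerate(probs, start=1):
        u += p - cost
        if u > best_u:
            best_k, best_u = k, u
    return best_k

scores = [2.1, 1.7, 0.4, -0.3, -1.2]   # descending ranking scores
probs = calibrate(scores)
k = personalized_k(probs)               # personalized list size
```

For this additive utility the per-item gain is decreasing once items are sorted by probability, so the search could stop at the first negative increment; the exhaustive scan is kept for clarity.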
URI
http://postech.dcollection.net/common/orgView/200000733001
https://oasis.postech.ac.kr/handle/2014.oak/123274
Article Type
Thesis
Files in This Item:
There are no files associated with this item.


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
