Fighting Boredom in Recommender Systems with Linear Reinforcement Learning

Romain Warlop 1, 2 Alessandro Lazaric 3, 4 Jérémie Mary 5, 6
1 SEQUEL - Sequential Learning
Inria Lille - Nord Europe, CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille (CRIStAL) - UMR 9189
3 SEQUEL - Sequential Learning
LIFL - Laboratoire d'Informatique Fondamentale de Lille, LAGIS - Laboratoire d'Automatique, Génie Informatique et Signal, Inria Lille - Nord Europe
Abstract: A common assumption in recommender systems (RS) is the existence of a best fixed recommendation strategy. Such a strategy may be simple and work at the item level (e.g., in multi-armed bandits, one best fixed arm/item is assumed to exist) or implement a more sophisticated RS (e.g., the objective of A/B testing is to find the best fixed RS and execute it thereafter). We argue that this assumption is rarely verified in practice, as the recommendation process itself may impact the user's preferences. For instance, a user may get bored by a strategy, but may regain interest if enough time has passed since the last time that strategy was used. In this case, a better approach consists in alternating different solutions at the right frequency to fully exploit their potential. In this paper, we first cast the problem as a Markov decision process, where the rewards are a linear function of the recent history of actions, and we show that a policy considering the long-term influence of the recommendations may outperform both fixed-action and contextual greedy policies. We then introduce an extension of the UCRL algorithm (LINUCRL) to effectively balance exploration and exploitation in an unknown environment, and we derive a regret bound that is independent of the number of states. Finally, we empirically validate the model assumptions and the algorithm in a number of realistic scenarios.
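The boredom effect the abstract describes can be illustrated with a minimal simulation. This is a hedged sketch, not the paper's exact model: the sliding-window recency feature, the `theta` parameters, and the two-action setup are all illustrative assumptions. It shows why alternating actions can beat any fixed action when rewards are linear in recent usage.

```python
# Minimal sketch of the boredom effect (illustrative parameters, not the
# paper's exact model): each action's reward is a linear function of how
# often it was played in a recent sliding window.
WINDOW = 5
# theta[a] = (base reward, boredom slope); purely hypothetical values.
theta = {0: (1.0, -0.8), 1: (0.7, -0.2)}

def reward(action, history):
    """Linear reward in the recency feature (fraction of recent plays)."""
    freq = history[-WINDOW:].count(action) / WINDOW
    base, slope = theta[action]
    return base + slope * freq

def run(policy, horizon=1000):
    """Total reward collected by policy(t, history) over `horizon` steps."""
    history, total = [], 0.0
    for t in range(horizon):
        a = policy(t, history)
        total += reward(a, history)
        history.append(a)
    return total

# Always playing the action with the best base reward induces boredom;
# alternating at the right frequency exploits both actions' potential.
print(run(lambda t, h: 0))       # fixed best-base action
print(run(lambda t, h: t % 2))   # alternating policy
```

Here the fixed policy saturates the window and earns only the bored-out reward, while the alternating policy keeps each action's recency feature low; this is the gap that a long-term (MDP) policy can exploit over greedy ones.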
Document type:
Journal article
Neural Information Processing Systems, 2018

https://hal.inria.fr/hal-01915468
Contributor: Romain Warlop
Submitted on: Wednesday, November 7, 2018 - 16:11:55
Last modified on: Thursday, November 8, 2018 - 01:17:12

File

WARLOP-NIPS18.pdf
Files produced by the author(s)

Identifiers

  • HAL Id : hal-01915468, version 1

Citation

Romain Warlop, Alessandro Lazaric, Jérémie Mary. Fighting Boredom in Recommender Systems with Linear Reinforcement Learning. Neural Information Processing Systems, 2018. 〈hal-01915468〉
