Research Post

No Regrets for Learning the Prior in Bandits


We propose AdaTS, a Thompson sampling algorithm that adapts sequentially to the bandit tasks it interacts with. The key idea in AdaTS is to adapt to an unknown task prior distribution by maintaining a distribution over its parameters. When solving a bandit task, this uncertainty is marginalized out and properly accounted for. AdaTS is a fully Bayesian algorithm that can be implemented efficiently in several classes of bandit problems. We derive upper bounds on its Bayes regret that quantify the loss due to not knowing the task prior, and show that this loss is small. Our theory is supported by experiments in which AdaTS outperforms prior algorithms and performs well even on challenging real-world problems.
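To make the idea concrete, the sketch below shows hierarchical Thompson sampling in a Gaussian bandit: each task's arm means are drawn from a per-arm prior whose mean is unknown, the agent keeps a hyper-posterior over those prior means, and within each task it folds that uncertainty into its sampling distribution. This is a minimal illustration of the marginalization idea under simplifying assumptions (known noise and prior widths, conjugate Gaussian updates), not the exact AdaTS algorithm; all variable names and the problem setup are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

K, n_tasks, horizon = 3, 20, 200
sigma = 0.5       # known reward noise (assumption)
sigma_0 = 0.3     # known width of the task prior (assumption)
true_mu = rng.normal(0.0, 1.0, size=K)  # unknown prior means the agent must learn

# Hyper-posterior over the prior mean of each arm: N(q_mean[a], q_var[a]).
q_mean = np.zeros(K)
q_var = np.ones(K)

for task in range(n_tasks):
    theta = true_mu + sigma_0 * rng.normal(size=K)  # this task's arm means
    # Per-task posterior starts from the *marginal* prior: uncertainty in the
    # prior mean (q_var) is added to the task-prior width, not ignored.
    m = q_mean.copy()
    v = q_var + sigma_0 ** 2
    sum_r = np.zeros(K)
    cnt = np.zeros(K)
    for t in range(horizon):
        sample = m + np.sqrt(v) * rng.normal(size=K)  # posterior sampling
        a = int(np.argmax(sample))
        r = theta[a] + sigma * rng.normal()
        sum_r[a] += r
        cnt[a] += 1
        # Conjugate Gaussian update of the pulled arm's task posterior.
        prior_var = q_var[a] + sigma_0 ** 2
        v[a] = 1.0 / (1.0 / prior_var + cnt[a] / sigma ** 2)
        m[a] = v[a] * (q_mean[a] / prior_var + sum_r[a] / sigma ** 2)
    # After the task, update the hyper-posterior from each arm's sample mean,
    # which is distributed N(mu[a], sigma_0^2 + sigma^2 / cnt[a]).
    for a in range(K):
        if cnt[a] > 0:
            obs_var = sigma_0 ** 2 + sigma ** 2 / cnt[a]
            new_var = 1.0 / (1.0 / q_var[a] + 1.0 / obs_var)
            q_mean[a] = new_var * (q_mean[a] / q_var[a] + (sum_r[a] / cnt[a]) / obs_var)
            q_var[a] = new_var
```

As tasks accumulate, `q_var` shrinks and the agent's per-task prior approaches the true one, which is the mechanism behind the "loss due to not knowing the task prior" being small.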
