I am an assistant professor in the Computer Science Department at the University of Arizona. My research focuses on sequential decision-making in feedback loops (i.e., the multi-armed bandit problem). I also work on online optimization, and I have had some fun in the past with machine learning applied to psychology.

I was previously a postdoc with Francesco Orabona at Boston University as a “prebatical”. Before that, I spent nine years at UW-Madison: a postdoc at the Wisconsin Institute for Discovery with Robert Nowak, Rebecca Willett, and Stephen Wright, and a PhD with Xiaojin (Jerry) Zhu. I obtained a B.S. in computer science with a minor in mathematics from the School of Computing, Soongsil University, South Korea.

  • E-mail: kjun å† cs ∂ø† arizona ∂ø† edu

  • Address: 746 Gould-Simpson, 1040 E. 4th St., Tucson, AZ 85721, USA.

Multi-armed bandit

The multi-armed bandit problem is a stateless version of reinforcement learning (RL). Bandits usually enjoy stronger theoretical guarantees, however, and have abundant real-world applications. Informally speaking, a bandit algorithm learns to make better decisions over time in a feedback loop. Its decisions necessarily affect the feedback it receives, so the data collected so far are no longer i.i.d., and most traditional learning guarantees do not apply.

Bandits are actively studied in both theory and applications, including deployable web services. The New Yorker's cartoon caption contest is even using a multi-armed bandit algorithm to crowdsource caption evaluations efficiently (this article)!
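To make the feedback loop above concrete, here is a minimal sketch of the classic UCB1 algorithm on simulated Bernoulli arms. This is an illustrative toy, not code from my research; the function name, arm means, and horizon are all made up for the example.

```python
import math
import random

def ucb1(means, horizon, seed=0):
    """Simulate UCB1 on Bernoulli arms with the given success probabilities.

    Returns (total reward collected, pull counts per arm).
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k          # number of pulls per arm
    sums = [0.0] * k          # cumulative reward per arm
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1       # pull each arm once to initialize
        else:
            # pick the arm maximizing empirical mean + confidence bonus;
            # the bonus shrinks as an arm is pulled more often
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return total, counts

total, counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
# over 2000 rounds, the best arm (mean 0.8) should receive most of the pulls
```

Note how each round's choice determines which arm's reward is observed: the collected data depend on the algorithm's own past decisions, which is exactly why the samples are not i.i.d.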


  • 10/17: At SILO, “Scalable Generalized Linear Bandits: Online Computation and Hashing”. [abstract]

  • 04/17: At AISTATS, “Improved Strongly Adaptive Online Learning using Coin Betting”.

  • 06/16: At ICML, “Anytime Exploration for Multi-armed Bandits using Confidence Information”. [video]

  • 06/16: At CPCP Annual Retreat, “Multi-Armed Bandit Algorithms and Applications to Experiment Selection”. [abstract & video]

  • 03/16: At SILO, “Top Arm Identification in Multi-Armed Bandits with Batch Arm Pulls”. [abstract & video]

  • 03/16: At Soongsil University, two talks on human memory search.

  • 11/15: At HAMLET (interdisciplinary seminar series at UW-Madison), “Measuring semantic structure from verbal fluency data with the initial-visit-emitting (INVITE) random walk”.

  • 03/15: At TTIC, “Learning from Human-Generated Lists”.

  • 06/13: At ICML, “Learning from Human-Generated Lists”. [video]


  • Program Committee / Reviewer: AAAI’20 (Area Chair), AISTATS’20, NeurIPS’19, ICML’19, COLT’19 (subreviewer), IJCAI’19, AISTATS’19, NeurIPS’18, AISTATS’18, AAAI’18, NeurIPS’17, ICML’17, COLT’17 (subreviewer), AISTATS’17, ICML’16, IEEE Transactions on Signal Processing.