Computer Studies Bldg. Room 209
In a multi-armed bandit problem, an online algorithm chooses
from a set of strategies in a sequence of trials so as to maximize the
total payoff of the chosen strategies. While the performance of bandit
algorithms with a small finite strategy set is quite well understood,
bandit problems with large strategy sets are still a topic of very
active investigation, motivated by practical applications such as online
auctions and web advertising. The goal of such research is to
identify broad and natural classes of strategy sets and payoff functions
which enable the design of efficient solutions. In this work we study a
very general setting for the multi-armed bandit problem in which the
strategies form a metric space, and the payoff function satisfies a
Lipschitz condition with respect to the metric.
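Concretely, the Lipschitz condition says the expected payoff u cannot change
faster than the metric d: |u(x) - u(y)| <= L * d(x, y) for some constant L.
As a rough illustration of the setting (not the algorithm from the papers
above), the sketch below discretizes a one-dimensional strategy space [0,1]
into a uniform grid and runs standard UCB1 over the grid points; the names
reward, n_arms, and horizon are placeholders chosen for this example.

    import math, random

    def lipschitz_ucb(reward, n_arms=16, horizon=1000):
        # Discretize [0,1] into n_arms evenly spaced strategies and run UCB1.
        # reward(x) is assumed to return a stochastic payoff in [0,1] whose
        # mean is Lipschitz in x (assumption for this sketch).
        arms = [i / (n_arms - 1) for i in range(n_arms)]
        counts = [0] * n_arms
        sums = [0.0] * n_arms
        total = 0.0
        for t in range(1, horizon + 1):
            if t <= n_arms:
                i = t - 1  # play each arm once before using confidence bounds
            else:
                i = max(range(n_arms), key=lambda j:
                        sums[j] / counts[j]
                        + math.sqrt(2 * math.log(t) / counts[j]))
            r = reward(arms[i])
            counts[i] += 1
            sums[i] += r
            total += r
        return total

    # Example run with a Lipschitz mean-payoff peaked at x = 0.3:
    # lipschitz_ucb(lambda x: float(random.random() < 1 - abs(x - 0.3)))

Uniform discretization is only a baseline: it ignores where the good
strategies actually lie, which is the kind of limitation the talk's setting
is designed to address.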
Joint work with Bobby Kleinberg and Eli Upfal.
Based on papers in STOC'08, SODA'10 and some recent work.
BIO: I am a researcher at Microsoft Research Silicon Valley. I
received my PhD from the Cornell CS department, advised by Jon Kleinberg.
Then I was a postdoc at Brown, working with Eli Upfal. My research area
is algorithms and theory. Specific areas include: large networks,
metric spaces, learning theory, and algorithmic mechanism design.
Refreshments will be provided at 9:45 AM.