Monday, October 01, 2001
11:00 AM
CSB 209
Junling Hu
U. Rochester Simon School
Multiagent Reinforcement Learning
In decentralized multiagent systems where each agent pursues its own goal, learning plays an important role in helping an agent adapt to a constantly changing environment. The environment changes because other agents are learning at the same time. My research studies a particular learning method, reinforcement learning, that helps an agent find an optimal policy when information about the environment is incomplete.

Reinforcement learning was originally designed for single-agent systems. I extend this learning method, in theory and algorithm design, to multiagent systems. The theoretical foundation of reinforcement learning is the Markov decision process, while the theoretical foundation for multiagent reinforcement learning is stochastic games (also called Markov games). The optimal policy in a stochastic game is no longer one that merely maximizes an agent's own payoff; instead, it is a policy that belongs to an equilibrium solution, in which each agent's policy is a best response to the others' policies. Such an equilibrium is called a Nash equilibrium. We have designed a reinforcement learning algorithm that provably converges to a Nash equilibrium solution. The convergence proof imposes restrictions on the stage games encountered during learning. In implementing the algorithm, we found that these restrictions can be significantly relaxed.
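
In update-rule terms, an algorithm of this kind replaces the single-agent max operator of Q-learning with an equilibrium value of the next-state stage game. A sketch of this style of update, in notation common to the Nash-Q learning literature (the learning rate \alpha_t and discount factor \gamma here are generic symbols, not values quoted from the talk):

    Q^i_{t+1}(s, a^1, \dots, a^n) = (1 - \alpha_t)\, Q^i_t(s, a^1, \dots, a^n) + \alpha_t \bigl[ r^i_t + \gamma\, \mathrm{Nash}Q^i_t(s') \bigr]

where \mathrm{Nash}Q^i_t(s') denotes agent i's expected payoff under a Nash equilibrium of the stage game defined by all agents' current Q-values at the next state s'.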

We have applied this multiagent reinforcement learning method to two domains. The first is grid-world games, in which agents move on a grid and try to reach their own distinct goals; this domain serves as a benchmark because it is simple to model and theoretical solutions can be derived (a sketch follows below). The second is an e-commerce domain, in which two online companies engage in price competition: each company monitors the other's prices and decides the best price to charge. In both domains, I show that our multiagent reinforcement learning method gives an agent considerably better performance than simple single-agent reinforcement learning.
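
To make the grid-world benchmark concrete, below is a minimal Python sketch of a two-agent game of this general kind; the grid size, start and goal cells, reward numbers, and collision rule are illustrative assumptions, not details from the talk. The point it illustrates is that both agents move simultaneously, so each agent's reward depends on the joint action:

    # Minimal two-agent grid-world sketch. All specifics (3x3 grid, start
    # and goal cells, reward values, collision rule) are assumptions made
    # for illustration only.

    MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

    class GridGame:
        def __init__(self, size=3, starts=((0, 0), (2, 0)),
                     goals=((2, 2), (0, 2))):
            self.size, self.starts, self.goals = size, starts, goals
            self.pos = list(starts)

        def reset(self):
            self.pos = list(self.starts)
            return tuple(self.pos)

        def step(self, actions):
            """Apply one joint action; return (next_state, rewards, done)."""
            proposed = []
            for (x, y), a in zip(self.pos, actions):
                dx, dy = MOVES[a]
                # Clamp each move to the grid boundary.
                proposed.append((min(max(x + dx, 0), self.size - 1),
                                 min(max(y + dy, 0), self.size - 1)))
            if proposed[0] == proposed[1]:
                # Assumed collision rule: neither agent moves, both pay a penalty.
                rewards = [-1.0, -1.0]
            else:
                self.pos = proposed
                rewards = [100.0 if p == g else -0.1  # assumed goal/step rewards
                           for p, g in zip(self.pos, self.goals)]
            done = any(p == g for p, g in zip(self.pos, self.goals))
            return tuple(self.pos), rewards, done

    # Usage: one joint step in which both agents move up (no collision here).
    env = GridGame()
    state = env.reset()
    state, rewards, done = env.step(("up", "up"))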