Cooperation usually enables agents to receive a higher payoff than they would obtain by acting non-cooperatively. This research explores opportunities for cooperation in unknown games. We propose a hill-climbing exploration approach that lets agents take their opponents' responses into account and maximize their payoffs by gradually adapting their strategies to their opponents' behavior in iterated games. Simulations show that the agents efficiently learn to cooperate with or compete against each other as the situation demands; they also tolerate environmental noise and exploit weak opponents.
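The text does not specify the game or the strategy parameterization, so the following is only a minimal sketch of the hill-climbing idea: an agent in an assumed iterated prisoner's dilemma represents its strategy by a single cooperation probability, perturbs it, and keeps the perturbation only if the average payoff against the observed opponent improves. The payoff values, step size, and fixed-opponent simplification are all illustrative assumptions.

```python
import random

# Hypothetical prisoner's dilemma payoffs to the row player,
# indexed by (my action, opponent action); 1 = cooperate, 0 = defect.
PAYOFF = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}

def play_round(p_me, p_opp):
    """Sample one simultaneous round given each side's cooperation probability."""
    a_me = 1 if random.random() < p_me else 0
    a_opp = 1 if random.random() < p_opp else 0
    return PAYOFF[(a_me, a_opp)]

def hill_climb(p_opp, steps=2000, delta=0.05, trials=50):
    """Gradually adapt the cooperation probability toward higher average payoff."""
    p = 0.5
    for _ in range(steps):
        candidate = min(1.0, max(0.0, p + random.choice([-delta, delta])))
        avg_now = sum(play_round(p, p_opp) for _ in range(trials)) / trials
        avg_new = sum(play_round(candidate, p_opp) for _ in range(trials)) / trials
        if avg_new > avg_now:  # keep the perturbation only if it pays off
            p = candidate
    return p

# Against an unconditional cooperator or defector, the climber drifts
# toward the best response (defection in this payoff matrix).
print(hill_climb(p_opp=1.0), hill_climb(p_opp=0.0))
```

In the setting described above, the opponent would itself be adapting rather than fixed; this sketch only shows the basic climb-and-keep loop against a stationary opponent.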
Assuming that the utility of each state-action pair evolves as a stochastic process, we model the exploration-exploitation trade-off as a Brownian bandit problem and thereby formalize a recency-based exploration bonus for non-stationary environments. To demonstrate the performance of this exploration bonus, we build agents using a Q-learning algorithm with smoothed best-response dynamics. Simulations show that these agents adapt efficiently to changes in their opponents' behavior, whereas the same algorithm with Boltzmann exploration fails to adapt. This work focuses on typical simultaneous games that capture competition or cooperation in multi-agent environments, such as the work-and-shirk game.
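The exact form of the recency-based bonus is not given here; as a rough illustration of the idea, the sketch below uses a stateless Q-learner on a non-stationary two-armed bandit (a stand-in for an opponent whose behavior changes) and inflates each action's value by a bonus that grows with the time since the action was last tried, in the style of Dyna-Q+. The class name, bonus form, and parameters are assumptions, not the paper's formulation.

```python
import math
from collections import defaultdict

class BonusQLearner:
    """Q-learning where action values are inflated by a recency-based bonus,
    so actions not tried recently get re-explored. The bonus form used here,
    kappa * sqrt(time since last tried), is an assumed illustrative choice."""

    def __init__(self, actions, alpha=0.1, kappa=0.05):
        self.actions = actions
        self.alpha = alpha            # learning rate
        self.kappa = kappa            # weight of the recency bonus
        self.q = defaultdict(float)   # estimated utility of each action
        self.last_tried = defaultdict(int)
        self.t = 0

    def act(self):
        self.t += 1
        def score(a):
            elapsed = self.t - self.last_tried[a]
            return self.q[a] + self.kappa * math.sqrt(elapsed)
        return max(self.actions, key=score)

    def update(self, action, reward):
        self.last_tried[action] = self.t
        self.q[action] += self.alpha * (reward - self.q[action])

# Non-stationary bandit: the rewarding arm switches halfway through the run,
# mimicking an opponent that changes its behavior.
agent = BonusQLearner(actions=[0, 1])
for step in range(2000):
    best = 0 if step < 1000 else 1
    a = agent.act()
    agent.update(a, 1.0 if a == best else 0.0)
print(dict(agent.q))
```

A purely Boltzmann-exploring learner with a decayed temperature would keep favoring the stale arm after the switch, which is the kind of failure to adapt the comparison above refers to.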