Atari 2600: Pong with DQN

In this notebook we solve the PongDeterministic-v4 environment using deep Q-learning (DQN). We use a convolutional neural net without pooling layers as our function approximator for the Q-function (see AtariQ); a sketch of such a network is given below.
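For concreteness, here is a minimal sketch of a pooling-free convolutional Q-network, assuming the usual preprocessing into 84×84 grayscale frames stacked four deep. It is written in PyTorch as an illustrative stand-in, not the library's actual AtariQ implementation; the layer sizes follow the well-known DeepMind DQN architecture, and AtariQSketch is a hypothetical name.

```python
import torch
import torch.nn as nn

class AtariQSketch(nn.Module):
    """Illustrative pooling-free conv net mapping stacked frames to Q-values."""

    def __init__(self, num_actions, num_stacked_frames=4):
        super().__init__()
        self.net = nn.Sequential(
            # Strided convolutions do the downsampling; no pooling layers.
            nn.Conv2d(num_stacked_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 7x7 spatial map from 84x84 input
            nn.Linear(512, num_actions),            # one Q-value per discrete action
        )

    def forward(self, x):
        return self.net(x / 255.0)  # scale raw pixel values into [0, 1]

q = AtariQSketch(num_actions=6)    # PongDeterministic-v4 has 6 discrete actions
obs = torch.zeros(1, 4, 84, 84)    # batch of one preprocessed observation
print(q(obs).shape)                # torch.Size([1, 6])
```

Replacing pooling with strided convolutions preserves spatial detail (such as the ball's exact position), which matters more in Pong than the translation invariance pooling would buy.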

This notebook periodically generates GIFs so that we can inspect how training is progressing.
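One way to implement this, shown purely as a sketch: every so often, roll out an episode with the current greedy policy and save the rendered frames to disk. The generate_gif helper below is a hypothetical name, and the snippet assumes the classic gym step/render API (as used by PongDeterministic-v4) together with imageio v2.

```python
import imageio

def generate_gif(env, policy, filepath, max_steps=10_000):
    """Roll out one episode with the given policy and save the frames as a GIF."""
    frames = []
    s = env.reset()
    for _ in range(max_steps):
        frames.append(env.render(mode='rgb_array'))  # raw RGB frame as ndarray
        a = policy(s)                                # e.g. greedy action from the Q-net
        s, r, done, info = env.step(a)
        if done:
            break
    imageio.mimsave(filepath, frames, fps=30)

# e.g. inside the training loop, once every 10 episodes:
# if episode % 10 == 0:
#     generate_gif(env, policy, f'data/gifs/ep{episode:06d}.gif')
```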

After a few hundred episodes, this is what you can expect:

[GIF: Beating Atari 2600 Pong after a few hundred episodes.]
