Wasserstein GAN (WGAN)

This is an implementation of Wasserstein GAN.

The original GAN loss is based on the Jensen-Shannon (JS) divergence between the real distribution $\mathbb{P}_r$ and the generated distribution $\mathbb{P}_g$. The Wasserstein GAN is based on the Earth Mover distance between these distributions,

$$W(\mathbb{P}_r, \mathbb{P}_g) = \inf_{\gamma \in \Pi(\mathbb{P}_r, \mathbb{P}_g)} \mathbb{E}_{(x,y) \sim \gamma} \Vert x - y \Vert$$

where $\Pi(\mathbb{P}_r, \mathbb{P}_g)$ is the set of all joint distributions $\gamma(x, y)$ whose marginals are $\mathbb{P}_r$ and $\mathbb{P}_g$.

$\mathbb{E}_{(x,y) \sim \gamma} \Vert x - y \Vert$ is the earth mover cost for a given joint distribution $\gamma$ ($x$ and $y$ are samples drawn from $\gamma$).

So $W(\mathbb{P}_r, \mathbb{P}_g)$ is equal to the least earth mover cost over all joint distributions between the real distribution $\mathbb{P}_r$ and the generated distribution $\mathbb{P}_g$.

The paper shows that the Jensen-Shannon (JS) divergence and other common measures of the difference between two probability distributions are not smooth in the parameters of the generated distribution. Therefore, if we do gradient descent on the parameters of one of the distributions, it will not converge.
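To see the difference concretely, here is the parallel-lines example from the WGAN paper, restated informally: let $Z \sim U[0, 1]$, let $\mathbb{P}_0$ be the distribution of $(0, Z)$, and let $\mathbb{P}_\theta$ be the distribution of $(\theta, Z)$. Then

$$W(\mathbb{P}_0, \mathbb{P}_\theta) = \vert \theta \vert \qquad JS(\mathbb{P}_0, \mathbb{P}_\theta) = \begin{cases} \log 2 & \theta \ne 0 \\ 0 & \theta = 0 \end{cases}$$

so the Earth Mover distance changes smoothly with $\theta$ and gives a usable gradient everywhere, while the JS divergence is constant wherever the supports do not overlap.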

Based on Kantorovich-Rubinstein duality,

$$W(\mathbb{P}_r, \mathbb{P}_g) = \sup_{\Vert f \Vert_L \le 1} \mathbb{E}_{x \sim \mathbb{P}_r} [f(x)] - \mathbb{E}_{x \sim \mathbb{P}_g} [f(x)]$$

where the supremum is taken over all 1-Lipschitz functions $f$, i.e. $\Vert f \Vert_L \le 1$.

That is, $W(\mathbb{P}_r, \mathbb{P}_g)$ is equal to the greatest such difference of expectations over all 1-Lipschitz functions.

For $K$-Lipschitz functions,

$$K \cdot W(\mathbb{P}_r, \mathbb{P}_g) = \sup_{\Vert f \Vert_L \le K} \mathbb{E}_{x \sim \mathbb{P}_r} [f(x)] - \mathbb{E}_{x \sim \mathbb{P}_g} [f(x)]$$

If all $K$-Lipschitz functions can be represented as $f_w$, where $f$ is parameterized by $w \in \mathcal{W}$,

$$K \cdot W(\mathbb{P}_r, \mathbb{P}_g) = \max_{w \in \mathcal{W}} \mathbb{E}_{x \sim \mathbb{P}_r} [f_w(x)] - \mathbb{E}_{x \sim \mathbb{P}_g} [f_w(x)]$$

If $\mathbb{P}_g$ is represented by a generator $g_\theta(z)$, where $z$ is from a known distribution $z \sim p(z)$,

$$K \cdot W(\mathbb{P}_r, \mathbb{P}_g) = \max_{w \in \mathcal{W}} \mathbb{E}_{x \sim \mathbb{P}_r} [f_w(x)] - \mathbb{E}_{z \sim p(z)} [f_w(g_\theta(z))]$$

Now, to make $g_\theta$ converge to $\mathbb{P}_r$, we can do gradient descent on $\theta$ to minimize the formula above.
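As a brief note on why this works (restating, informally and without its regularity conditions, a result from the WGAN paper): with $f$ the optimal critic, the objective is differentiable in $\theta$ and

$$\nabla_\theta W(\mathbb{P}_r, \mathbb{P}_g) = - \mathbb{E}_{z \sim p(z)} \big[ \nabla_\theta f(g_\theta(z)) \big]$$

In practice the learned $f_w$ stands in for $f$, which is why the generator loss below only involves $f_w(g_\theta(z))$.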

Similarly, we can find $\max_{w \in \mathcal{W}}$ by gradient ascent on $w$, while keeping $K$ bounded. One way to keep $K$ bounded is to clip all the weights of the neural network that defines $f$ to a small range.
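For illustration, here is a minimal sketch of that clipping step in PyTorch; the clip range `c = 0.01` and the `critic` argument are illustrative assumptions, not values taken from the experiment code below.

```python
import torch

def clip_weights(critic: torch.nn.Module, c: float = 0.01):
    # Clamp every parameter of the critic f_w to [-c, +c] after each
    # optimizer step; this bounds the Lipschitz constant K of f_w.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```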

Here is the code to try this on a simple MNIST generation experiment.


import torch.utils.data
from torch.nn import functional as F

from labml_helpers.module import Module

Discriminator Loss

We want to find $w$ to maximize $\mathbb{E}_{x \sim \mathbb{P}_r} [f_w(x)] - \mathbb{E}_{z \sim p(z)} [f_w(g_\theta(z))]$, so we minimize

$$-\frac{1}{m} \sum_{i=1}^m f_w \big( x^{(i)} \big) + \frac{1}{m} \sum_{i=1}^m f_w \big( g_\theta(z^{(i)}) \big)$$

class DiscriminatorLoss(Module):
  • f_real is $f_w(x)$
  • f_fake is $f_w(g_\theta(z))$

This returns a tuple with the losses for $f_w(x)$ and $f_w(g_\theta(z))$, which are added up later. They are kept separate for logging.

    def __call__(self, f_real: torch.Tensor, f_fake: torch.Tensor):

We use ReLUs to clip the loss, to keep $f$ within the $[-1, +1]$ range.

        return F.relu(1 - f_real).mean(), F.relu(1 + f_fake).mean()
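A sketch of how this loss might be used for a single critic update; the `discriminator`, `generator`, and `optimizer` objects, the `critic_step` helper, and the clip value are hypothetical placeholders, not part of this file.

```python
# Hypothetical critic update step; all names here are illustrative.
# Assumes the imports and DiscriminatorLoss defined above.
discriminator_loss = DiscriminatorLoss()

def critic_step(discriminator, generator, optimizer, x_real, z, clip=0.01):
    f_real = discriminator(x_real)                 # f_w(x)
    f_fake = discriminator(generator(z).detach())  # f_w(g_theta(z)), no generator gradients
    loss_real, loss_fake = discriminator_loss(f_real, f_fake)
    loss = loss_real + loss_fake
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Clip the critic weights to keep f_w K-Lipschitz, as discussed above.
    with torch.no_grad():
        for p in discriminator.parameters():
            p.clamp_(-clip, clip)
    return loss.item()
```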

Generator Loss

We want to find $\theta$ to minimize

$$\mathbb{E}_{x \sim \mathbb{P}_r} [f_w(x)] - \mathbb{E}_{z \sim p(z)} [f_w(g_\theta(z))]$$

The first component is independent of $\theta$, so we minimize

$$-\frac{1}{m} \sum_{i=1}^m f_w \big( g_\theta(z^{(i)}) \big)$$

class GeneratorLoss(Module):
  • f_fake is $f_w(g_\theta(z))$
    def __call__(self, f_fake: torch.Tensor):
        return -f_fake.mean()
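And a matching sketch of a generator update using this loss; again, `generator`, `discriminator`, and `optimizer` are illustrative placeholders rather than the experiment's actual objects.

```python
# Hypothetical generator update step; all names here are illustrative.
# Assumes the imports and GeneratorLoss defined above.
generator_loss = GeneratorLoss()

def generator_step(generator, discriminator, optimizer, z):
    f_fake = discriminator(generator(z))  # f_w(g_theta(z)); gradients flow into theta
    loss = generator_loss(f_fake)         # negative mean of f_w(g_theta(z))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```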