Until I read a recent ICML 2017 paper, I hadn’t heard of the Gumbel trick. There is surprisingly little online about the Gumbel trick (a relative of the more popular Gumbel-max trick), so here we go.
We often want to characterize probabilistic models in discrete settings. The Gumbel trick allows us to estimate the associated partition function $ Z$ with relative ease. At a high level, finding $ Z$ or even $ \log Z$ is very difficult; however, we can add some noise and compute the maximum a posteriori (MAP) value more easily through approximation methods. If we repeat this process enough times, we get a reliable estimate of $ Z$.
In complexity theory, we know that finding the MAP is NP-hard but can be approximated quickly in practice. Note that computing the partition function is harder still: in general it is a #P-hard problem.
Let’s formalize. For a finite sample space $ \mathcal{X}$ of size $ N$, we define an unnormalized mass function $ \tilde{p} : \mathcal{X} \to [0, \infty)$ and let $ Z := \sum_{x \in \mathcal{X}} \tilde{p}(x)$ be the normalizing partition function. We then define $ \phi(x) := \ln \tilde{p}(x)$ as the log-unnormalized probabilities, or the potential function.
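To make the notation concrete, here is a tiny sketch (the mass-function values are made up for illustration):

```python
import numpy as np

# A made-up unnormalized mass function over a sample space of size N = 4.
p_tilde = np.array([3.0, 1.0, 0.5, 2.5])  # \tilde{p}(x) for each x in X
Z = p_tilde.sum()                         # partition function; here Z = 7.0
phi = np.log(p_tilde)                     # potentials \phi(x) = ln \tilde{p}(x)
```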
Our algorithm is then:

1. For each $ x \in \mathcal{X}$, sample i.i.d. noise $ \gamma(x) \sim \text{Gumbel}(-c)$, where $ c$ is the Euler-Mascheroni constant.
2. Find the MAP value of the perturbed potentials, $ \max_{x \in \mathcal{X}} \left( \phi(x) + \gamma(x) \right)$. Call this value $ z_i$.
3. Repeat steps 1 and 2 many times; averaging the $ z_i$ and exponentiating gives $ \hat{Z} \approx Z$ (see the sketch below).
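Here is a minimal NumPy sketch of these three steps. The function name `estimate_Z` and the brute-force maximization over a fully enumerated $ \mathcal{X}$ are illustrative choices; in a real model, the exact `max` would be replaced by an approximate MAP solver.

```python
import numpy as np

c = np.euler_gamma  # Euler-Mascheroni constant

def estimate_Z(phi, num_samples=10_000, seed=0):
    """Gumbel-trick estimate of Z from potentials phi over a small, enumerable X."""
    rng = np.random.default_rng(seed)
    # Step 1: perturb every potential with i.i.d. Gumbel(-c) noise.
    gamma = rng.gumbel(loc=-c, scale=1.0, size=(num_samples, len(phi)))
    # Step 2: brute-force MAP value of the perturbed potentials, one z_i per repeat.
    z = (phi + gamma).max(axis=1)
    # Step 3: E[z_i] = ln Z, so average and exponentiate.
    return np.exp(z.mean())

phi = np.log(np.array([3.0, 1.0, 0.5, 2.5]))  # toy potentials; true Z = 7.0
print(estimate_Z(phi))  # prints roughly 7 (exp of a noisy ln-Z estimate)
```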
We now want to prove this supposedly useful Gumbel trick using the Perturb-and-MAP method; specifically, we want to show
$$ \max_{x \in \mathcal{X}} \left( \phi(x) + \gamma(x) \right) \sim \text{Gumbel}(-c + \ln Z)$$
where $ \phi(x)$ is the potential function defined above and the $ \gamma(x) \sim \text{Gumbel}(-c)$ are drawn i.i.d., with $ c$ the Euler-Mascheroni constant.
Because the mean of $ \text{Gumbel}(\mu)$ is $ \mu + c$, we can show that $ \ln Z$, and then $ Z$, are recoverable.
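Concretely, taking the expectation of the perturbed maximum:

$$ \mathbb{E}\left[\max_{x \in \mathcal{X}} \left( \phi(x) + \gamma(x) \right)\right] = (-c + \ln Z) + c = \ln Z,$$

so averaging the MAP values estimates $ \ln Z$, and exponentiating recovers $ Z$.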
The Gumbel distribution is traditionally used to model the maxima of already extreme events. For example: given measurements of the worst earthquakes in San Francisco over the past 10 years, how bad will the worst earthquake be next year?
A variable $ X$ drawn from $ \text{Gumbel}(\mu)$ has the probability density function
$$ f(x) = e^{-(z + e^{-z})}$$
where $ z = x - \mu$. For this problem, we fix the scale parameter $ \beta = 1$, while the location parameter $ \mu$ remains free.
Additionally, the cumulative distribution function of a $ \text{Gumbel}(\mu)$ variable is
$$ F(x) = e^{-e^{-(x-\mu)}}$$
This will become useful!
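One way the CDF is useful right away: inverting it gives a simple sampler. A quick sketch (the helper name `sample_gumbel` is mine), with a sanity check of the mean property $ \mu + c$ mentioned earlier:

```python
import numpy as np

def sample_gumbel(mu, size, seed=0):
    """Draw Gumbel(mu) samples by inverting the CDF F(x) = exp(-exp(-(x - mu)))."""
    u = np.random.default_rng(seed).uniform(size=size)  # u ~ Uniform(0, 1)
    return mu - np.log(-np.log(u))                      # x = F^{-1}(u)

# Sanity check: the sample mean should be close to mu + c.
mu = 2.0
x = sample_gumbel(mu, size=100_000)
print(x.mean(), mu + np.euler_gamma)  # the two numbers nearly match
```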
We want to find the value of $ x$ that maximizes $ \phi(x) + \gamma(x)$. Thinking in terms of the CDF, the maximum is at most some value $ t$ exactly when every $ x \in \mathcal{X}$ produces a value at most $ t$, so
$$\begin{align*} P\left(\max_{x \in \mathcal{X}} \left( \phi(x) + \gamma(x) \right) \le t\right) &= \prod_{x \in \mathcal{X}} F(t - \phi(x))\\ &= \exp \left ( - \sum_{x \in \mathcal{X}} \exp \left (-(t - \phi(x) + c) \right ) \right ) \\ &= \exp \left ( - Z \exp \left (-(t +c) \right ) \right ) \\ &= \exp \left ( - \exp \left (-(t +c - \ln Z) \right ) \right ) \\ &= F_{\text{Gumbel}(-c + \ln Z)}(t) \end{align*}$$
The first equality follows because the maximum is at most $ t$ if and only if every perturbed potential is; by independence, we multiply the Gumbel CDFs of the $ \gamma(x)$, each evaluated at $ t - \phi(x)$. The second equality expands out the Gumbel CDF and pulls the product into a sum inside the outer exponential. The third equality consolidates the potential functions, since $ Z = \sum_{x \in \mathcal{X}} e^{\phi(x)}$. The fourth equality sticks $ \ln Z$ back into the $ \exp$ function. The last equality compresses the probability back into a Gumbel CDF, just set at a different location.
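The result is easy to check numerically. A sketch, reusing the toy potentials from above: draw many perturbed maxima, draw directly from $ \text{Gumbel}(-c + \ln Z)$, and compare quantiles.

```python
import numpy as np

rng = np.random.default_rng(1)
c = np.euler_gamma

p_tilde = np.array([3.0, 1.0, 0.5, 2.5])  # same toy model as before
phi, Z = np.log(p_tilde), p_tilde.sum()

# Left side: realizations of max_x (phi(x) + gamma(x)) with gamma ~ Gumbel(-c).
gamma = rng.gumbel(loc=-c, scale=1.0, size=(200_000, len(phi)))
lhs = (phi + gamma).max(axis=1)

# Right side: direct draws from Gumbel(-c + ln Z).
rhs = rng.gumbel(loc=-c + np.log(Z), scale=1.0, size=200_000)

# If the identity holds, the two samples agree in distribution.
qs = [0.1, 0.5, 0.9]
print(np.quantile(lhs, qs))  # quantiles of the perturbed maxima...
print(np.quantile(rhs, qs))  # ...should closely match these
```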
I enjoyed and modeled this post after the proof from Hazan and Jaakkola 2012; however, Balog et al. 2017 give another proof using a cleverly chosen $ g$ function and competing exponential clocks.