Transformer XL

This is a PyTorch implementation of Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context.

The original Transformer has a limited attention span, equal to the length of the sequence trained in parallel, and all of those positions have a fixed positional encoding. Transformer XL increases this attention span by letting each position pay attention to precalculated past embeddings. For instance, if the context length is l, it keeps the embeddings of all layers for the previous batch of length l and feeds them to the current step. If we used fixed positional encodings, these pre-calculated embeddings would have the same positions as the current context. To avoid this, Transformer XL introduces relative positional encoding, where the positional encodings are applied at the attention calculation.
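As a rough sketch of the idea using nothing but plain tensors (the names below are placeholders, not part of this implementation): the hidden states of the previous segment are cached, detached from gradient computation, and concatenated with the current segment, so that attention keys and values span both while queries come only from the current segment.

import torch

# Illustrative shapes only; these names are not from the implementation below.
seq_len, mem_len, batch_size, d_model = 16, 32, 4, 128

memory = torch.zeros(mem_len, batch_size, d_model)  # cached hidden states of the previous segment
x = torch.randn(seq_len, batch_size, d_model)       # hidden states of the current segment

# Keys and values cover the memory plus the current segment
kv = torch.cat((memory.detach(), x), dim=0)  # [mem_len + seq_len, batch_size, d_model]
# some_attention(query=x, key=kv, value=kv) would then attend over mem_len + seq_len positions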

An annotated implementation of relative multi-headed attention is in relative_mha.py.

Here's the training code and a notebook for training a transformer XL model on the Tiny Shakespeare dataset.


from typing import List, Optional

import torch
import torch.nn as nn

from labml_helpers.module import Module
from labml_nn.utils import clone_module_list
from .relative_mha import RelativeMultiHeadAttention
from ..feed_forward import FeedForward

Transformer XL Layer

The transformer XL model comprises a number of these layers.

class TransformerXLLayer(Module):
  • d_model is the token embedding size
  • self_attn is the self attention module
  • feed_forward is the feed forward module
  • dropout_prob is the probability of dropping out after self attention and FFN
    def __init__(self, *,
                 d_model: int,
                 self_attn: RelativeMultiHeadAttention,
                 feed_forward: FeedForward,
                 dropout_prob: float):
        super().__init__()
        self.size = d_model
        self.self_attn = self_attn
        self.feed_forward = feed_forward
        self.dropout = nn.Dropout(dropout_prob)
        self.norm_self_attn = nn.LayerNorm([d_model])
        self.norm_ff = nn.LayerNorm([d_model])
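For example, a layer might be constructed as below. This is only a sketch: the hyper-parameter values are arbitrary, and it assumes RelativeMultiHeadAttention takes (heads, d_model, dropout_prob) and FeedForward takes (d_model, d_ff, dropout).

# Construction sketch; values are arbitrary and the attention/feed-forward
# constructor arguments are assumed, not taken from this file.
d_model = 128
layer = TransformerXLLayer(d_model=d_model,
                           self_attn=RelativeMultiHeadAttention(4, d_model, 0.1),
                           feed_forward=FeedForward(d_model, 4 * d_model, 0.1),
                           dropout_prob=0.1)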
  • x is a tensor of the token level feature vectors of shape [seq_len, batch_size, d_model]
  • mem is a tensor of the past token level feature vectors of shape [mem_len, batch_size, d_model]
  • mask is a matrix of shape [seq_len, mem_len + seq_len, batch_size] or [seq_len, mem_len + seq_len, 1]. mask[i, j] is true if the token at position i can attend to the token at position j (a sketch of building such a mask follows this layer's code).
    def forward(self, *,
                x: torch.Tensor,
                mem: Optional[torch.Tensor],
                mask: torch.Tensor):

Normalize the vectors before doing self attention

        z = self.norm_self_attn(x)

If there is memory

        if mem is not None:

Normalize it

            mem = self.norm_self_attn(mem)

Concatenate with z

            m_z = torch.cat((mem, z), dim=0)

Ignore if there is no memory

        else:
            m_z = z

Attention

        self_attn = self.self_attn(query=z, key=m_z, value=m_z, mask=mask)

Add the attention results

        x = x + self.dropout(self_attn)

Normalize for feed-forward

        z = self.norm_ff(x)

Pass through the feed-forward network

        ff = self.feed_forward(z)

Add the feed-forward results back

        x = x + self.dropout(ff)

        return x
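Here is the mask-construction sketch referenced above. It builds a boolean mask of shape [seq_len, mem_len + seq_len, 1] in which every query position can see all memory positions and the current positions up to and including itself. The helper name is made up for this example and is not part of this implementation.

def subsequent_mask_with_memory(seq_len: int, mem_len: int) -> torch.Tensor:
    # Every current position can attend to all memory positions...
    mem_mask = torch.ones(seq_len, mem_len).bool()
    # ...and to current positions up to and including itself (lower triangular)
    causal_mask = torch.tril(torch.ones(seq_len, seq_len)).bool()
    # Concatenate along the key dimension and add a broadcast dimension for the batch
    return torch.cat((mem_mask, causal_mask), dim=1).unsqueeze(-1)  # [seq_len, mem_len + seq_len, 1]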

Transformer XL Model

This consists of multiple transformer XL layers

class TransformerXL(Module):
    def __init__(self, layer: TransformerXLLayer, n_layers: int):
        super().__init__()

Make copies of the transformer layer

        self.layers = clone_module_list(layer, n_layers)

Final normalization layer

        self.norm = nn.LayerNorm([layer.size])
  • x is a tensor of the token embedding vectors of shape [seq_len, batch_size, d_model]
  • mem is a list of tensors of the past token level feature vectors of shape [mem_len, batch_size, d_model] for each layer
  • mask is the masking matrix
    def forward(self, x: torch.Tensor, mem: List[torch.Tensor], mask: torch.Tensor):

List to store token level feature vectors, which will become the memories for the next sequential batch.

        new_mem = []

Run through each transformer layer

        for i, layer in enumerate(self.layers):

Add to the list of feature vectors

            new_mem.append(x.detach())

Memory

            m = mem[i] if mem else None

Run through the transformer XL layer

            x = layer(x=x, mem=m, mask=mask)

Finally, normalize the vectors

        return self.norm(x), new_mem
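Finally, a hypothetical usage sketch showing how memories could be carried across sequential batches, building on the layer and mask helper sketched above. The random token data, the embedding layer, and the truncation to mem_len are purely illustrative and not the exact logic of the training code for this repository.

# Hypothetical usage sketch; builds on `layer`, `d_model` and
# `subsequent_mask_with_memory` from the sketches above.
n_tokens, seq_len, batch_size, mem_len = 65, 16, 4, 32
embedding = nn.Embedding(n_tokens, d_model)
model = TransformerXL(layer, n_layers=6)
batches = [torch.randint(0, n_tokens, (seq_len, batch_size)) for _ in range(3)]

mem = []  # no memory before the first segment
for tokens in batches:
    x = embedding(tokens)  # [seq_len, batch_size, d_model]
    mask = subsequent_mask_with_memory(seq_len, mem[0].shape[0] if mem else 0)
    x, new_mem = model(x, mem, mask)
    # Keep only the most recent `mem_len` steps of each layer's memory for the next segment
    mem = ([torch.cat((m, nm), dim=0)[-mem_len:] for m, nm in zip(mem, new_mem)]
           if mem else [nm[-mem_len:] for nm in new_mem])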