Understanding MLP-Mixer as a wide and sparse MLP through random permutation matrix
Tomohiro Hayase. Talk at Non-Commutative Probability Theory, Random Matrix Theory and their Applications (NPRM2023), 2023/11/08--09. Preprint:
Table of Contents
- Effective Expression of MLP-Mixer
- Preliminaries
- Similarity between MLP and MLP-Mixer
- Effective Width
- Monarch Matrices
- Alternative to static sparse weight MLP
- Random Permuted Mixers
- Revisit the similarity in wider cases
- Conclusion and Future Works
1. Effective Expression of MLP-Mixer
Introduction
Research Question: Why does MLP-Mixer achieve higher performance than a usual MLP? Our Answer: Because the layers in MLP-Mixer form an extremely wide MLP.
Preliminaries
MLP
MLP (multilayer perceptron) is a composition of transforms of the form
$$x \mapsto \sigma(Wx),$$
where $W$ is a parameter matrix (the transforms do not share their parameter matrices) and $\sigma$ is an entry-wise activation function.
Static Mask: Consider a mask matrix $M$ whose entries are 0 or 1 and replace $W$ by $M \odot W$:
$$x \mapsto \sigma((M \odot W)\,x).$$
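To make the two definitions concrete, here is a minimal NumPy sketch (the sizes, the ReLU activation, and the Bernoulli(1/2) mask are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

d_in, d_out = 8, 16
x = rng.standard_normal(d_in)
W = rng.standard_normal((d_out, d_in))

# Plain MLP layer: x -> sigma(W x)
h = relu(W @ x)

# Static mask: a fixed 0/1 matrix M applied entry-wise to W,
# i.e. x -> sigma((M * W) x); masked entries stay zero throughout training.
M = rng.integers(0, 2, size=W.shape).astype(W.dtype)
h_masked = relu((M * W) @ x)

print(h.shape, h_masked.shape)  # (16,) (16,)
```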
MLP-Mixer
NeurIPS 2021, Tolstikhin et al.
Its architecture is less structured than Convolutional Neural Networks or Vision Transformers.
Blocks of MLP-Mixer are built from two mixing operations, token-mixing (left multiplication by a weight) and channel-mixing (right multiplication by a weight):
\begin{align}
\text{Token-mixing: } X &\mapsto \sigma(S X), \\
\text{Channel-mixing: } X &\mapsto \sigma(X W),
\end{align}
where $X \in \mathbb{R}^{n \times d}$ is the feature matrix ($n$: number of tokens, $d$: number of channels), $S \in \mathbb{R}^{n \times n}$ and $W \in \mathbb{R}^{d \times d}$ are weight matrices, and $\sigma$ is an entry-wise activation function.
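A minimal NumPy sketch of the two mixing operations in matrix form (all names and sizes, including $n = 4$, $d = 6$ and ReLU, are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
relu = lambda x: np.maximum(x, 0.0)

n, d = 4, 6                        # n tokens, d channels
X = rng.standard_normal((n, d))    # feature matrix
S = rng.standard_normal((n, n))    # token-mixing weight
W = rng.standard_normal((d, d))    # channel-mixing weight

X_token   = relu(S @ X)   # token mixing: mixes the tokens (rows), each channel independently
X_channel = relu(X @ W)   # channel mixing: mixes the channels (columns), each token independently
print(X_token.shape, X_channel.shape)  # (4, 6) (4, 6)
```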
Similarity between MLP-Mixer and MLP via vectorization
Vectorization and effective width
We denote the vectorization of a matrix $X \in \mathbb{R}^{n \times d}$ by $\mathrm{vec}(X) \in \mathbb{R}^{nd}$; more precisely, $\mathrm{vec}(X)$ stacks the columns of $X$, so that $\mathrm{vec}(X)_{(j-1)n + i} = X_{ij}$.
In other words, the map $\mathrm{vec}$ identifies the matrix space $\mathbb{R}^{n \times d}$ with the vector space $\mathbb{R}^{nd}$.
We also define the inverse operation $\mathrm{vec}^{-1}$ to recover the matrix representation.
There is a well-known equation relating the vectorization operation and the tensor (or Kronecker) product, denoted by $\otimes$:
$$\mathrm{vec}(AXB) = (B^\top \otimes A)\,\mathrm{vec}(X)$$
for $A \in \mathbb{R}^{m \times n}$, $X \in \mathbb{R}^{n \times p}$, and $B \in \mathbb{R}^{p \times q}$.
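This identity is easy to verify numerically. A small NumPy sketch using column-stacking order (`order="F"`), which matches the vec convention above (shapes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def vec(X):
    return X.flatten(order="F")    # stack the columns of X

m, n, p, q = 3, 4, 5, 2
A = rng.standard_normal((m, n))
X = rng.standard_normal((n, p))
B = rng.standard_normal((p, q))

# vec(A X B) == (B^T kron A) vec(X)
assert np.allclose(vec(A @ X @ B), np.kron(B.T, A) @ vec(X))
```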
As discussed later, the aforementioned equation corresponds to the vectorization of an MLP-Mixer block with a linear activation function.
Under vectorization of the feature matrix $X \in \mathbb{R}^{n \times d}$, each mixing layer is equivalent to a fully connected layer of width $nd$
with an $nd \times nd$ weight matrix. We refer to $nd$ as the *effective width* of the mixing layers.
Under vectorization of feature matrices:
Channel-Mixing layer is converted into left multiplication by $W^\top \otimes I_n$.
Token-Mixing layer is converted into left multiplication by $I_d \otimes S$.
In MLP-Mixer, when we treat each feature matrix $X$ as an $nd$-dimensional vector $\mathrm{vec}(X)$, the right multiplication by a $d \times d$ weight $W$ and the left multiplication by an $n \times n$ weight $S$ are represented as
\begin{align}
\mathrm{vec}(XW) &= (W^\top \otimes I_n)\,\mathrm{vec}(X), \\
\mathrm{vec}(SX) &= (I_d \otimes S)\,\mathrm{vec}(X).
\end{align}
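A sketch that builds these effective $nd \times nd$ weights explicitly and checks them against the matrix-form mixing layers (toy sizes and column-stacking `vec` as before):

```python
import numpy as np

rng = np.random.default_rng(3)
vec = lambda X: X.flatten(order="F")

n, d = 4, 6
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, d))    # channel-mixing weight
S = rng.standard_normal((n, n))    # token-mixing weight

W_eff = np.kron(W.T, np.eye(n))    # effective weight of channel mixing
S_eff = np.kron(np.eye(d), S)      # effective weight of token mixing

assert np.allclose(vec(X @ W), W_eff @ vec(X))
assert np.allclose(vec(S @ X), S_eff @ vec(X))
print(W_eff.shape, S_eff.shape)    # (24, 24) (24, 24): fully connected layers of width nd
```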
This expression clarifies that the mixing layers work as an MLP whose weight matrices have a special tensor-product structure. As usual, the entry-wise activation commutes with vectorization, $\sigma(\mathrm{vec}(X)) = \mathrm{vec}(\sigma(X))$, so the nonlinearity carries over to the vectorized representation unchanged.
Mixer is equivalent to an extremely wide MLP
Moreover, the ratio of non-zero entries in the weight matrix $W^\top \otimes I_n$ is $1/n$, and that of $I_d \otimes S$ is $1/d$.
e.g. Block-matrix rep:
$$W^\top \otimes I_n = \begin{pmatrix} W_{11} I_n & W_{21} I_n & \cdots & W_{d1} I_n \\ W_{12} I_n & W_{22} I_n & \cdots & W_{d2} I_n \\ \vdots & \vdots & \ddots & \vdots \\ W_{1d} I_n & W_{2d} I_n & \cdots & W_{dd} I_n \end{pmatrix}, \qquad I_d \otimes S = \begin{pmatrix} S & & \\ & \ddots & \\ & & S \end{pmatrix}.$$
Therefore, the weight of the effective MLP is highly sparse.
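Continuing the same toy example, the non-zero ratios of the effective weights can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 4, 6
W = rng.standard_normal((d, d))
S = rng.standard_normal((n, n))

W_eff = np.kron(W.T, np.eye(n))    # channel mixing
S_eff = np.kron(np.eye(d), S)      # token mixing

print(np.count_nonzero(W_eff) / W_eff.size, 1 / n)   # 0.25   0.25
print(np.count_nonzero(S_eff) / S_eff.size, 1 / d)   # ~0.167 ~0.167
```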
Commutation Matrix
Furthermore, to consider only the left multiplication of weights, we introduce commutation matrices:
A commutation matrix $K_{n,d}$ is defined by
$$K_{n,d}\,\mathrm{vec}(X) = \mathrm{vec}(X^\top),$$
where $X$ is an $n \times d$ matrix; $K_{n,d}$ is an $nd \times nd$ permutation matrix. Note that for any entry-wise function $\phi$,
$$\phi(K_{n,d}\,v) = K_{n,d}\,\phi(v),$$
since $K_{n,d}$ only permutes the entries of $v$.
Note that
$$K_{d,n}\,K_{n,d} = I_{nd}, \qquad I_d \otimes S = K_{d,n}\,(S \otimes I_d)\,K_{n,d}.$$
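A commutation matrix can be written down as an explicit permutation matrix; a minimal sketch checking the defining property, the inverse relation, and that it commutes with entry-wise maps (the helper name `commutation_matrix` is my own):

```python
import numpy as np

rng = np.random.default_rng(5)
vec = lambda X: X.flatten(order="F")

def commutation_matrix(n, d):
    # Permutation matrix K with K @ vec(X) = vec(X.T) for X of shape (n, d)
    K = np.zeros((n * d, n * d))
    for i in range(n):
        for j in range(d):
            K[i * d + j, j * n + i] = 1.0
    return K

n, d = 4, 6
X = rng.standard_normal((n, d))
K_nd = commutation_matrix(n, d)
K_dn = commutation_matrix(d, n)

assert np.allclose(K_nd @ vec(X), vec(X.T))                         # defining property
assert np.allclose(K_dn @ K_nd, np.eye(n * d))                      # K_{d,n} inverts K_{n,d}
assert np.allclose(np.tanh(K_nd @ vec(X)), K_nd @ np.tanh(vec(X)))  # commutes with entry-wise maps
```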
Effective Expression of MLP-Mixer:
Channel-MLP Block:
$$\mathrm{vec}(X) \mapsto \sigma\big((W^\top \otimes I_n)\,\mathrm{vec}(X)\big)$$
Token-MLP Block:
$$\mathrm{vec}(X) \mapsto \sigma\big(K_{d,n}\,(S \otimes I_d)\,K_{n,d}\,\mathrm{vec}(X)\big) = K_{d,n}\,\sigma\big((S \otimes I_d)\,K_{n,d}\,\mathrm{vec}(X)\big),$$
where the last equality uses that the permutation $K_{d,n}$ commutes with the entry-wise activation.
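A sketch that checks these effective expressions against the matrix-form blocks (same toy sizes, `vec`, and `commutation_matrix` helper as in the sketches above, restated here to keep it self-contained):

```python
import numpy as np

rng = np.random.default_rng(6)
vec = lambda X: X.flatten(order="F")
relu = lambda x: np.maximum(x, 0.0)

def commutation_matrix(n, d):
    # Permutation matrix K with K @ vec(X) = vec(X.T) for X of shape (n, d)
    K = np.zeros((n * d, n * d))
    for i in range(n):
        for j in range(d):
            K[i * d + j, j * n + i] = 1.0
    return K

n, d = 4, 6
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, d))
S = rng.standard_normal((n, n))
K_nd, K_dn = commutation_matrix(n, d), commutation_matrix(d, n)

# Channel-MLP block: sigma(X W)  <->  sigma((W^T kron I_n) vec(X))
assert np.allclose(vec(relu(X @ W)), relu(np.kron(W.T, np.eye(n)) @ vec(X)))

# Token-MLP block: sigma(S X)  <->  K_dn sigma((S kron I_d) K_nd vec(X))
eff = K_dn @ relu(np.kron(S, np.eye(d)) @ K_nd @ vec(X))
assert np.allclose(vec(relu(S @ X)), eff)
```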
MLP with static-mask
Static Mask: Consider a mask matrix $M$ whose 0/1 entries are randomly distributed, and replace $W$ in each layer of the MLP by $M \odot W$:
$$x \mapsto \sigma((M \odot W)\,x).$$
- The mask matrix $M$ is fixed during training.
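A minimal sketch of one gradient-descent step under a static mask, with a squared loss and arbitrary sizes (everything here is an illustrative assumption); the point is only that $M$ is sampled once and never updated:

```python
import numpy as np

rng = np.random.default_rng(7)
d_in, d_out, lr = 8, 16, 0.1

W = rng.standard_normal((d_out, d_in))
M = rng.integers(0, 2, size=W.shape).astype(W.dtype)   # static 0/1 mask, sampled once
x = rng.standard_normal(d_in)
y = rng.standard_normal(d_out)

for step in range(3):
    y_hat = (M * W) @ x               # forward pass with the masked (sparse) weight
    grad = np.outer(y_hat - y, x)     # gradient of 0.5 * ||(M*W) x - y||^2 w.r.t. (M*W)
    W -= lr * M * grad                # masked entries never change; M itself is never updated
```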
Hidden features and test accuracy
To validate the similarity of the networks in a robust and scalable way, we look at the similarity of hidden features of MLPs with sparse weights and MLP-Mixers, based on the centered kernel alignment (CKA) [Nguyen, Raghu, and Kornblith, 2021].
In practice, we computed the mini-batch CKA (Nguyen et al. 2021, Section 3.1) among the features of trained networks.
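The experiments use the mini-batch CKA of Nguyen et al. (2021); as a reference point, here is a sketch of plain full-batch linear CKA on made-up feature matrices, just to fix the definition of the similarity measure (feature sizes and names are arbitrary):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X: (m, p1) and Y: (m, p2); rows are examples."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(8)
F1 = rng.standard_normal((256, 64))                      # hidden features of one network
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))
print(linear_cka(F1, F1 @ Q))                            # ~1.0: CKA ignores rotations of the features
print(linear_cka(F1, rng.standard_normal((256, 64))))    # small: unrelated features
```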