ElasticTok: Adaptive Tokenization for Image and Video

Wilson Yan* † Matei Zaharia Volodymyr Mnih Pieter Abbeel Aleksandra Faust Hao Liu* ◊ 
    UC Berkeley, Google DeepMind

Abstract

Efficient video tokenization remains a key bottleneck in learning general-purpose vision models that are capable of processing long video sequences. Prevailing approaches are restricted to encoding videos to a fixed number of tokens, where too few tokens result in overly lossy encodings and too many tokens result in prohibitively long sequence lengths. In this work, we introduce ElasticTok, a method that conditions on prior frames to adaptively encode a frame into a variable number of tokens. To enable this in a computationally scalable way, we propose a masking technique that drops a random number of tokens at the end of each frame's token encoding. During inference, ElasticTok can dynamically allocate tokens when needed: more complex data can leverage more tokens, while simpler data needs only a few. Our empirical evaluations on images and video demonstrate the effectiveness of our approach in efficient token usage, paving the way for future development of more powerful multimodal models, world models, and agents.
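
As a rough illustration of this tail-dropping mask (a sketch under assumed tensor shapes, not the authors' released implementation; sample_tail_mask and the toy arrays are hypothetical), the snippet below keeps a random-length prefix of each frame's latent tokens during training and zeroes out the rest before decoding:

import numpy as np

def sample_tail_mask(num_frames, tokens_per_frame, min_keep=1, rng=None):
    """Keep a random-length prefix of each frame's tokens; mask out the tail."""
    rng = rng or np.random.default_rng()
    mask = np.zeros((num_frames, tokens_per_frame), dtype=np.float32)
    for f in range(num_frames):
        keep = rng.integers(min_keep, tokens_per_frame + 1)  # tokens kept for this frame
        mask[f, :keep] = 1.0
    return mask

# Toy latents of shape (frames, tokens_per_frame, latent_dim). Multiplying by the
# mask hides the dropped tail from the decoder, so the reconstruction loss pushes
# the encoder to pack the most important information into the earliest tokens.
latents = np.random.randn(4, 256, 32)
mask = sample_tail_mask(num_frames=4, tokens_per_frame=256)
masked_latents = latents * mask[..., None]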



Model

Figure 2. Adaptive tokenization in a block causal structure. ElasticTok adaptively encodes images and video into variable-length outputs based on the complexity of the input data. The single-block setup uses an encoder-decoder pipeline with a sampled latent mask; the multi-block setup extends this with a block causal mask to handle longer video sequences.
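
As a minimal sketch of the block causal mask in the multi-block setting (an illustration under the assumption that each frame contributes a fixed-size block of tokens and may attend to its own and all earlier frames; block_causal_mask is a hypothetical helper, not code from the released model):

import numpy as np

def block_causal_mask(num_frames, tokens_per_frame):
    """Boolean mask of shape (T, T) with T = num_frames * tokens_per_frame.
    Entry (i, j) is True iff token i may attend to token j, i.e. token j lies
    in the same frame as token i or in an earlier frame."""
    frame_id = np.repeat(np.arange(num_frames), tokens_per_frame)
    return frame_id[:, None] >= frame_id[None, :]

mask = block_causal_mask(num_frames=3, tokens_per_frame=2)
# Tokens of frame 0 attend only within frame 0; tokens of frame 2 attend to all
# six positions.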



Adaptive Tokenization with ElasticTok


Figure 4. Performance comparison between the baseline and ElasticTok on ImageNet and video. The y-axis shows the percentage of samples that satisfy the reconstruction threshold, while the x-axis shows the percentage of tokens used. (Left) On images, ElasticTok achieves a 3.5x and 1.3x efficiency boost at different target reconstruction thresholds. (Right) On video, ElasticTok shows a 5x and 2.4x improvement over the baseline, maintaining superior performance while using fewer tokens.
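
One way to read these curves: for each token budget, measure the fraction of evaluation samples whose reconstruction error falls at or below the target threshold. Below is a minimal sketch of that computation, assuming per-sample MSE values have already been collected at each budget (fraction_satisfied and the array layout are illustrative assumptions, not part of the released evaluation code):

import numpy as np

def fraction_satisfied(mse_per_budget, threshold):
    """mse_per_budget: (num_samples, num_budgets) array, where column j holds each
    sample's reconstruction MSE when only the j-th token budget is used. Returns
    the fraction of samples meeting the threshold at each budget (the y-axis)."""
    return (mse_per_budget <= threshold).mean(axis=0)

# Toy example: 1000 samples evaluated at token budgets of 25%, 50%, 75%, 100%.
mse = np.abs(np.random.randn(1000, 4)) * np.array([0.010, 0.006, 0.003, 0.001])
curve = fraction_satisfied(mse, threshold=0.004)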



Adaptive Encoding Examples


Below, we show examples of the adaptive tokenization capabilities of ElasticTok. For each video, the ground truth is shown on the left and the reconstruction on the right. The bottom plot shows the percentage of tokens used for each frame as the video plays. Typically, simpler scenes or scenes with less motion use fewer tokens, while larger transitions such as fast motion or scene cuts cause brief spikes in token usage as our model adapts its encoding.
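
A simple way such adaptive allocation can be realized at inference time is to search over token budgets until the reconstruction meets a target error. The sketch below illustrates this under assumed interfaces: encode, decode, and encode_adaptive are hypothetical stand-ins rather than the released API, and a binary search over budgets could replace the linear scan.

import numpy as np

def encode_adaptive(frame, context, encode, decode, mse_threshold, candidate_budgets):
    """Return the shortest token prefix whose reconstruction meets the threshold.
    candidate_budgets is an increasing list of token counts to try."""
    tokens = encode(frame, context)                   # full-length token encoding
    for budget in candidate_budgets:
        recon = decode(tokens[:budget], context)      # decode from a prefix only
        mse = float(np.mean((recon - frame) ** 2))
        if mse <= mse_threshold:
            return tokens[:budget], budget
    return tokens, len(tokens)                        # fall back to all tokens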

Short Video



Long Video



Varying Reconstruction Threshold

The following videos show different MSE reconstruction thresholds for the same video. The left video uses a stricter threshold of 0.001, which always requires all tokens due to the complexity of the video. The right video uses a looser threshold of 0.007, which benefits more from the adaptivity of ElasticTok.
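
Continuing the inference sketch above, the two settings roughly correspond to running the same adaptive encoder with different thresholds. The toy below uses placeholder encode/decode functions on a synthetic signal with decaying energy, purely to show that a stricter threshold forces a larger token budget; none of these values come from the actual model.

import numpy as np

# Placeholder tokenizer: the "tokens" are the entries of a signal whose energy
# decays along the token axis, so a longer prefix always reconstructs better.
rng = np.random.default_rng(0)
frame = rng.standard_normal(1000) * np.exp(-np.arange(1000) / 100)
encode = lambda x, ctx: x
decode = lambda toks, ctx: np.concatenate([toks, np.zeros(1000 - len(toks))])

budgets = [125, 250, 500, 1000]
_, strict_n = encode_adaptive(frame, None, encode, decode, 0.001, budgets)
_, loose_n = encode_adaptive(frame, None, encode, decode, 0.007, budgets)
# With this toy signal, the stricter threshold selects a larger budget
# (strict_n = 250 tokens) than the looser one (loose_n = 125 tokens).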