Efficient video tokenization remains a key bottleneck in learning general purpose
vision models that are capable of processing long video sequences. Prevailing
approaches are restricted to encoding videos to a fixed number of tokens, where too
few tokens will result in overly lossy encodings, and too many tokens will result
in prohibitively long sequence lengths. In this work, we introduce ElasticTok, a
method that conditions on prior frames to adaptively encode a frame into a variable
number of tokens. To enable this in a computationally scalable way, we propose
a masking technique that drops a random number of tokens at the end of each
frame's token encoding. During inference, ElasticTok can dynamically allocate
tokens when needed – more complex data can leverage more tokens, while simpler
data only needs a few tokens. Our empirical evaluations on images and video
demonstrate the effectiveness of our approach in efficient token usage, paving the
way for future development of more powerful multimodal models, world models,
and agents.
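To make the elastic masking concrete, the sketch below shows one way such a training-time mask could be constructed: for each frame, a token budget is sampled uniformly at random and all tokens past that budget are dropped. This is a minimal illustration under assumed names (`sample_tail_mask`, `num_frames`, `tokens_per_frame`), not the released implementation.

```python
import numpy as np

def sample_tail_mask(num_frames: int, tokens_per_frame: int,
                     min_tokens: int = 1, rng=None) -> np.ndarray:
    """Sketch of tail-drop masking (names are illustrative assumptions).

    For each frame, sample a token budget k uniformly in
    [min_tokens, tokens_per_frame] and keep only the first k tokens of
    that frame's encoding; the remaining tail tokens are masked out.
    """
    rng = np.random.default_rng() if rng is None else rng
    # One random token budget per frame (upper bound is inclusive).
    budgets = rng.integers(min_tokens, tokens_per_frame + 1, size=num_frames)
    # mask[f, t] = 1 if token t of frame f is kept, else 0.
    token_index = np.arange(tokens_per_frame)[None, :]           # (1, T)
    mask = (token_index < budgets[:, None]).astype(np.float32)   # (F, T)
    return mask

# Example: 4 frames, 8 tokens per frame.
mask = sample_tail_mask(num_frames=4, tokens_per_frame=8,
                        rng=np.random.default_rng(0))
print(mask)
```

Under a scheme like this, only the unmasked prefix of each frame's tokens is available for reconstruction, which encourages the encoder to pack the most important information into the earliest tokens of each frame.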
Below, we show examples of the adaptive tokenization capabilities of ElasticTok. For each video, the ground truth is shown on the left and the reconstruction on the right. The bottom plot shows the percentage of tokens used for each frame as the video plays. Typically, simpler scenes or scenes with less motion use fewer tokens, while larger transitions such as fast motion or scene cuts cause brief spikes in token usage as the model adaptively encodes them.
The following videos show different MSE reconstruction thresholds for the same video. The left video uses a stricter threshold of 0.001, which always requires all tokens due to the complexity of the video. The right video uses a higher threshold of 0.007, which benefits more from the adaptivity of ElasticTok.
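This threshold-driven behavior can be viewed as a search over token budgets at inference time: use the smallest number of tokens whose reconstruction error meets the target MSE. The sketch below illustrates one simple way to do this with a binary search, assuming reconstruction error decreases as more tokens are used; `encode`, `decode`, and the exact search strategy are assumptions for illustration, not the paper's specified procedure.

```python
import numpy as np

def min_tokens_for_threshold(frame: np.ndarray, encode, decode,
                             max_tokens: int, mse_threshold: float) -> int:
    """Find the smallest token budget whose reconstruction MSE is at or
    below the threshold, via binary search (illustrative sketch; `encode`
    and `decode` are assumed callables, not a released API)."""
    def mse_at(k: int) -> float:
        tokens = encode(frame)[:k]          # keep only the first k tokens
        recon = decode(tokens)
        return float(np.mean((frame - recon) ** 2))

    lo, hi = 1, max_tokens
    while lo < hi:
        mid = (lo + hi) // 2
        if mse_at(mid) <= mse_threshold:
            hi = mid        # mid tokens suffice; try fewer
        else:
            lo = mid + 1    # need more tokens
    return lo               # may equal max_tokens if the threshold is never met
```

In this framing, a strict threshold like 0.001 can bottom out at the full token budget on every frame, as in the left video above, while a looser threshold like 0.007 leaves room for the budget to shrink on easier frames.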