Although the transformer architecture has become the de facto standard for natural language processing tasks, its application in computer vision remains limited. In vision, attention is either combined with convolutional networks or used to replace certain components of them while keeping the overall structure intact. This paper shows that the reliance on CNNs is unnecessary: a pure transformer applied directly to sequences of image patches can successfully classify images. When pre-trained on large amounts of data and transferred to multiple small and mid-sized image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), the Vision Transformer (ViT) achieves remarkable results while requiring fewer computational resources than existing convolutional networks. Unlike the standard weighted-attention computation, the model in this paper is optimized to reduce the time complexity from O(n²) to O(n log n), greatly improving the model's speed. The result is a new model: the Fast Vision Transformer (Fast ViT).
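The abstract's core claim is that a plain transformer over a sequence of image patches suffices for classification. The sketch below illustrates that patch-sequence pipeline only; it is not the paper's Fast ViT and does not implement its query-vector decoupling or O(n log n) attention, whose details the abstract does not give. All module names and hyperparameters here are illustrative assumptions.

```python
# Minimal ViT-style patch-sequence classifier (illustrative sketch, not the
# paper's Fast ViT). The encoder below still uses standard O(n^2) attention.
import torch
import torch.nn as nn

class PatchTransformerClassifier(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=256,
                 depth=4, heads=8, num_classes=100):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Non-overlapping patch embedding: one conv with stride = patch size.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size,
                                     stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                    # x: (B, 3, H, W)
        x = self.patch_embed(x)              # (B, dim, H/p, W/p)
        x = x.flatten(2).transpose(1, 2)     # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                  # standard quadratic self-attention
        return self.head(x[:, 0])            # classify from the class token

model = PatchTransformerClassifier()
logits = model(torch.randn(2, 3, 224, 224))  # -> shape (2, 100)
```

The paper's contribution would replace the quadratic self-attention inside the encoder with an O(n log n) mechanism; since the abstract does not specify that mechanism, the standard encoder is kept here.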
Fast Vision Transformer via Query Vector Decoupling
2021-10-20
1365557 bytes
Conference paper
Electronic Resource
English
Decoupling and Vector Control of Induction Motor
British Library Online Contents | 2000
Decoupling Recognition and Localization in CAD-Based Vision
British Library Conference Proceedings | 1994
Motion estimation by decoupling rotation and translation in catadioptric vision
British Library Online Contents | 2010
Omnidirectional decoupling annular vector tilt rotor aircraft and control method thereof
European Patent Office | 2024