Video is a ubiquitous source of media content that touches on many aspects of people's day-to-day lives. Increasingly, real-world video applications, such as video captioning, video content analysis, and video question-answering (VideoQA), rely on models that can connect video content with text or natural language. VideoQA is particularly challenging, however, as it requires grasping both semantic information, such as the objects in a scene, and temporal information, e.g., how things move and interact, both of which must be taken in the context of a natural-language question that carries a specific intent. In addition, because videos have many frames, processing all of them to learn spatio-temporal information can be computationally expensive. Nonetheless, understanding all this information enables models to answer complex questions. For example, in the video below, a question about the second ingredient poured into the bowl requires identifying objects (the ingredients), actions (pouring), and temporal ordering (second).
An example input question for the VideoQA task, "What is the second ingredient poured into the bowl?", which requires a deeper understanding of both the visual and text inputs. The video is an example from the 50 Salads dataset, used under the Creative Commons license.
To address this, in "Video Question Answering with Iterative Video-Text Co-Tokenization", we introduce a new approach to video-text learning called iterative co-tokenization, which efficiently fuses spatial, temporal, and language information for VideoQA. This approach is multi-stream, processing videos at different scales with an independent backbone model for each to produce video representations that capture different features, e.g., high spatial resolution or long temporal duration. The model then applies a co-tokenization module to learn efficient representations from fusing the video streams with the text. The model is highly efficient, using only 67 giga-FLOPs (GFLOPs), which is at least 50% fewer than previous approaches, while giving better performance than alternative state-of-the-art models.
Video-Text Iterative Co-tokenization
The main goal of the model is to produce features from both the video and the text (i.e., the user question), jointly allowing their corresponding inputs to interact. A second goal is to do so efficiently, which is especially important for videos since they contain tens to hundreds of frames as input.
The model learns to tokenize the joint video-language inputs into a smaller set of tokens that jointly and efficiently represent both modalities. When tokenizing, we use both modalities to produce a joint compact representation, which is fed to a transformer layer to produce the next-level representation. A challenge here, which is also typical in cross-modal learning, is that the video frames often do not correspond directly to the associated text. We address this by adding two learnable linear layers that unify the visual and text feature dimensions before tokenization. This way, both the video and the text condition how the video tokens are learned, as sketched below.
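To make the tokenization step concrete, here is a minimal PyTorch-style sketch. It assumes a TokenLearner-style soft-attention pooling; the class and parameter names (CoTokenizer, shared_dim, num_tokens) and the mean-pooled text conditioning are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CoTokenizer(nn.Module):
    """Pools a long video feature sequence into a few fused tokens.

    Minimal sketch: two learnable linear layers map the video and text
    features into a shared dimension, and the text summary conditions the
    soft-attention scores that decide which spatio-temporal locations each
    learned token aggregates (TokenLearner-style pooling).
    """

    def __init__(self, video_dim, text_dim, shared_dim=512, num_tokens=8):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, shared_dim)  # unify visual feature dim
        self.text_proj = nn.Linear(text_dim, shared_dim)    # unify text feature dim
        self.token_scorer = nn.Linear(shared_dim, num_tokens)

    def forward(self, video_feats, text_feats):
        # video_feats: (B, N_video, video_dim); text_feats: (B, N_text, text_dim)
        v = self.video_proj(video_feats)                          # (B, N_video, shared_dim)
        t = self.text_proj(text_feats).mean(dim=1, keepdim=True)  # (B, 1, shared_dim)
        scores = self.token_scorer(v + t)        # both modalities condition the scores
        attn = scores.softmax(dim=1)             # soft selection over video locations
        return torch.einsum("bnk,bnd->bkd", attn, v)  # (B, num_tokens, shared_dim)
```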
Moreover, a single tokenization step does not allow for further interaction between the two modalities. For that, we use this new feature representation to interact with the video input features and produce another set of tokenized features, which are then fed into the next transformer layer. This iterative process creates new features, or tokens, that represent a continual refinement of the joint representation from both modalities. At the last step, the features are input to a decoder that generates the text output.
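Continuing the sketch above, the iteration could look roughly like the following. The number of iterations, the use of standard transformer encoder layers, and the class name IterativeCoTokenization are assumptions for illustration, and the text decoder is omitted.

```python
import torch.nn as nn

class IterativeCoTokenization(nn.Module):
    """Iteratively refines the fused video-text tokens (illustrative sketch).

    Each iteration re-tokenizes the raw video features conditioned on the
    current fused representation (the question text at the first step), then
    refines the result with a transformer layer. The final tokens would be
    passed to a text decoder, which is omitted here.
    """

    def __init__(self, video_dim, text_dim, shared_dim=512,
                 num_tokens=8, num_iters=3):
        super().__init__()
        # Reuses the CoTokenizer sketch above; the first iteration is
        # conditioned on the text features, later ones on the fused tokens.
        self.tokenizers = nn.ModuleList([
            CoTokenizer(video_dim, text_dim if i == 0 else shared_dim,
                        shared_dim, num_tokens)
            for i in range(num_iters)])
        self.layers = nn.ModuleList([
            nn.TransformerEncoderLayer(d_model=shared_dim, nhead=8,
                                       batch_first=True)
            for _ in range(num_iters)])

    def forward(self, video_feats, text_feats):
        context = text_feats
        for tokenize, layer in zip(self.tokenizers, self.layers):
            tokens = tokenize(video_feats, context)  # fuse video with current context
            context = layer(tokens)                  # next-level joint representation
        return context                               # input to the text decoder
```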
As is typically done for VideoQA, we pre-train the model before fine-tuning it on the individual VideoQA datasets. In this work, instead of pre-training on a large VideoQA dataset, we use videos automatically annotated with text based on speech recognition from the HowTo100M dataset. This weaker pre-training data nonetheless allows our model to learn video-text features.
Efficient Video Question-Answering
We apply the video-language iterative co-tokenization algorithm to three main VideoQA benchmarks, MSRVTT-QA, MSVD-QA, and IVQA, and demonstrate that this approach achieves better results than other state-of-the-art models while having a modest size. Furthermore, iterative co-tokenization learning yields significant compute savings for video-text learning tasks. The method uses only 67 giga-FLOPs (GFLOPs), one sixth of the 360 GFLOPs needed when using the popular 3D-ResNet video model jointly with text, and is more than twice as efficient as the X3D model, all while producing highly accurate results that outperform state-of-the-art methods.
Comparison of our iterative co-tokenization approach to previous methods such as MERLOT and VQA-T, as well as baselines using a single ResNet-3D or X3D-XL.
Multi-stream Video Inputs
For VideoQA, or any of a number of other tasks that involve video inputs, we find that multi-stream input is important for more accurately answering questions about both spatial and temporal relationships. Our approach uses three video streams at different resolutions and frame rates: a low-resolution, high frame-rate input video stream (with 32 frames per second and spatial resolution 64x64, which we denote as 32x64x64); a high-resolution, low frame-rate video (8x224x224); and one in between (16x112x112). Despite the apparently more voluminous information to process with three streams, we obtain very efficient models thanks to the iterative co-tokenization approach. At the same time, these additional streams allow extraction of the most pertinent information. For example, questions related to a specific activity in time produce higher activations in the lower-resolution but high frame-rate video input, whereas questions related to the general activity can be answered from the high-resolution input with very few frames. Another benefit of this algorithm is that the tokenization changes depending on the question asked.
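As an illustration of the multi-stream setup, here is a minimal sketch. The stand-in 3D-convolutional backbones, the feature dimension, and the class name MultiStreamEncoder are assumptions and not the backbones used in the paper; only the three stream shapes come from the text above.

```python
import torch
import torch.nn as nn

# The three stream shapes described above: (frames, height, width).
STREAM_SHAPES = [(32, 64, 64), (8, 224, 224), (16, 112, 112)]

class MultiStreamEncoder(nn.Module):
    """Runs an independent backbone per video stream (illustrative sketch).

    Each stream trades spatial resolution against frame rate, so each backbone
    sees a different view of the same clip; the per-stream features are
    concatenated along the token axis before co-tokenization with the question.
    """

    def __init__(self, feature_dim=512):
        super().__init__()
        # Stand-ins for the per-stream video backbones (e.g., small 3D CNNs).
        self.backbones = nn.ModuleList([
            nn.Sequential(
                nn.Conv3d(3, feature_dim, kernel_size=2, stride=2),
                nn.AdaptiveAvgPool3d((4, 4, 4)))
            for _ in STREAM_SHAPES])

    def forward(self, streams):
        # streams[i]: (B, 3, T, H, W) matching STREAM_SHAPES[i].
        feats = []
        for clip, backbone in zip(streams, self.backbones):
            f = backbone(clip)                          # (B, feature_dim, 4, 4, 4)
            feats.append(f.flatten(2).transpose(1, 2))  # (B, 64, feature_dim)
        return torch.cat(feats, dim=1)                  # (B, 192, feature_dim) video tokens

# Example usage with a batch of two clips, one tensor per stream:
# streams = [torch.randn(2, 3, t, h, w) for (t, h, w) in STREAM_SHAPES]
# video_tokens = MultiStreamEncoder()(streams)
```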
Conclusion
We present a new approach to video-language learning that focuses on joint learning across the video and text modalities, and we apply it to the important and challenging task of video question-answering. Our approach is both highly efficient and accurate, outperforming current state-of-the-art models while using less compute. It results in modest model sizes and could gain further improvements with larger models and more data. We hope this work provokes more research in vision-language learning to enable more seamless interaction with vision-based media.
Acknowledgements
This work is conducted by AJ Piergiovanni, Kairo Morton, Weicheng Kuo, Michael Ryoo, and Anelia Angelova. We thank our collaborators in this research, Soravit Changpinyo for valuable comments and suggestions, and Claire Cui for suggestions and support. We also thank Tom Small for the visualizations.