
SimVLM: Simple Visual Language Model Pre-training with Weak Supervision

Posted by Zirui Wang, Student Researcher, and Yuan Cao, Research Scientist, Google Research, Brain Team

Visual language modeling grounds language understanding in corresponding visual inputs, and it can be useful for the development of important products and tools. For instance, an image captioning model generates natural language descriptions based on its understanding of a given image. While there are various challenges for such cross-modal work, significant progress has been made in the past few years on visual language modeling thanks to the adoption of effective vision-language pre-training (VLP). This approach aims to learn a single feature space from both visual and language inputs, rather than learning two separate feature spaces, one for visual inputs and another for language inputs. For this purpose, existing VLP often leverages an object detector, like Faster R-CNN, trained on labeled object detection datasets to isolate regions of interest (ROI), and relies on task-specific approaches (i.e., task-specific loss functions) to learn representations of images and texts jointly. Such approaches require annotated datasets or time to design task-specific approaches, and so, are less scalable.

To address this challenge, in "SimVLM: Simple Visual Language Model Pre-training with Weak Supervision", we propose a minimalist and effective VLP, named SimVLM, which stands for "Simple Visual Language Model". SimVLM is trained end-to-end with a unified objective, similar to language modeling, on a vast amount of weakly aligned image-text pairs (i.e., the text paired with an image is not necessarily a precise description of the image). The simplicity of SimVLM enables efficient training on such a scaled dataset, which helps the model to achieve state-of-the-art performance across six vision-language benchmarks. Moreover, SimVLM learns a unified multimodal representation that enables strong zero-shot cross-modality transfer, including tasks such as open-ended visual question answering, image captioning and multimodal translation, with no fine-tuning or with fine-tuning on text data only.

Model and Pre-training Procedure

Unlike existing VLP methods that adopt pre-training procedures similar to masked language modeling (like BERT), SimVLM adopts a sequence-to-sequence framework and is trained with a single prefix language model (PrefixLM) objective, which receives the leading part of a sequence (the prefix) as inputs, then predicts its continuation. For example, given the sequence "A dog is chasing after a yellow ball", the sequence is randomly truncated to "A dog is chasing" as the prefix, and the model will predict its continuation. The concept of a prefix similarly applies to images, where an image is divided into a number of "patches", then a subset of those patches are sequentially fed to the model as inputs; this is called an "image patch sequence". In SimVLM, for multimodal inputs (e.g., images and their captions), the prefix is a concatenation of both the image patch sequence and the prefix text sequence, received by the encoder, while the decoder then predicts the continuation of the textual sequence. Compared to prior VLP models that combine several pre-training losses, the PrefixLM loss is the only training objective, which significantly simplifies the training process. This approach for SimVLM maximizes its flexibility and universality in accommodating different task setups.
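As a rough illustration, the PrefixLM input construction described above can be sketched in a few lines of Python. The string tokens and patch names here are hypothetical stand-ins for illustration, not the model's actual embeddings or tokenizer:

```python
import random

def make_prefixlm_example(image_patches, caption_tokens, rng):
    """Build one (prefix, target) pair for the PrefixLM objective:
    the caption is truncated at a random point, the image patch
    sequence plus the text prefix form the encoder input, and the
    decoder must predict the remaining caption tokens."""
    # Cut so that both the text prefix and the target are non-empty.
    cut = rng.randint(1, len(caption_tokens) - 1)
    text_prefix = caption_tokens[:cut]
    target = caption_tokens[cut:]
    encoder_input = list(image_patches) + text_prefix
    return encoder_input, target

patches = ["patch_0", "patch_1", "patch_2", "patch_3"]
caption = ["a", "dog", "is", "chasing", "after", "a", "yellow", "ball"]
prefix, target = make_prefixlm_example(patches, caption, random.Random(0))
print(prefix)
print(target)
```

Because the truncation point is random, different training steps see different prefix/target splits of the same caption, which is what makes the single objective cover both caption-completion and full-caption generation.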

Finally, due to its success on both language and vision tasks, like BERT and ViT, we adopt the Transformer architecture as the backbone of our model, which, unlike prior ROI-based VLP approaches, allows the model to directly take in raw images as inputs. Moreover, inspired by CoAtNet, we adopt a convolution stage consisting of the first three blocks of ResNet to extract contextualized patches, which we find advantageous over the naive linear projection in the original ViT model. The overall model architecture is illustrated below.

SimVLM model architecture overview.
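To make the "image patch sequence" idea concrete, the toy snippet below splits an image into non-overlapping blocks that can be concatenated with text tokens into one encoder prefix. This is a minimal sketch with assumed toy sizes; the actual model uses a convolution stage (the first three ResNet blocks) rather than this naive blocking, and operates on embeddings, not raw pixels:

```python
def extract_patches(image, patch):
    """Split a square H x H image (nested lists of pixel values) into
    non-overlapping patch x patch blocks, flattening each block into
    one sequence element ("patch token")."""
    n = len(image) // patch                    # patches per side
    seq = []
    for bi in range(n):
        for bj in range(n):
            block = [image[bi * patch + i][bj * patch + j]
                     for i in range(patch) for j in range(patch)]
            seq.append(block)
    return seq

image = [[0] * 32 for _ in range(32)]          # toy 32x32 single-channel image
patch_seq = extract_patches(image, 16)         # 4 patches of 256 pixels each
text_prefix = ["a", "dog", "is"]               # toy text prefix tokens
encoder_input = patch_seq + text_prefix        # multimodal prefix for the encoder
print(len(patch_seq), len(patch_seq[0]), len(encoder_input))
```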

The model is pre-trained on large-scale web datasets for both image-text and text-only inputs. For joint vision and language data, we use the training set of ALIGN, which contains about 1.8 billion noisy image-text pairs. For text-only data, we use the Colossal Clean Crawled Corpus (C4) dataset introduced by T5, totaling 800G of web-crawled documents.

Benchmark Results

After pre-training, we fine-tune our model on multimodal tasks, including VQA, NLVR2, SNLI-VE, COCO Caption, NoCaps and Multi30K En-De. For example, for VQA the model takes an image and a corresponding question about that image as input, and generates the answer as output. We evaluate SimVLM models of three different sizes (base: 86M parameters, large: 307M, huge: 632M) following the same setup as in ViT. We compare our results with strong existing baselines, including LXMERT, VL-T5, UNITER, OSCAR, Villa, SOHO, UNIMO and VinVL, and find that SimVLM achieves state-of-the-art performance across all of these tasks despite being much simpler.
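Under this generative formulation, a VQA instance can be cast as prefix-to-continuation prediction. The sketch below is a hypothetical illustration; the "question:" marker and the whitespace tokenization are assumptions for clarity, not the paper's exact input format:

```python
def format_vqa_example(image_patches, question, answer):
    """Cast a VQA instance in the PrefixLM style: the image patch
    sequence plus the question form the prefix, and the answer text
    is the continuation the decoder learns to generate."""
    prefix = list(image_patches) + ["question:"] + question.split()
    target = answer.split()
    return prefix, target

prefix, target = format_vqa_example(
    ["patch_0", "patch_1"], "what color is the ball?", "yellow")
print(prefix)
print(target)
```

Because the answer is generated rather than selected, the same model interface covers captioning, VQA and translation, differing only in what the text prefix contains.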

Model         | VQA test-dev | VQA test-std | NLVR2 dev | NLVR2 test-P | SNLI-VE dev | SNLI-VE test | B@4  | M    | C     | S
LXMERT        | 72.4         | 72.5         | 74.9      | 74.5         | —           | —            | —    | —    | —     | —
VL-T5         | —            | 70.3         | 74.6      | 73.6         | —           | —            | —    | —    | 116.5 | —
UNITER        | 73.8         | 74.0         | 79.1      | 80.0         | 79.4        | 79.4         | —    | —    | —     | —
OSCAR         | 73.6         | 73.8         | 79.1      | 80.4         | —           | —            | 41.7 | 30.6 | 140.0 | 24.5
Villa         | 74.7         | 74.9         | 79.8      | 81.5         | 80.2        | 80.0         | —    | —    | —     | —
SOHO          | 73.3         | 73.5         | 76.4      | 77.3         | 85.0        | 85.0         | —    | —    | —     | —
UNIMO         | 75.1         | 75.3         | —         | —            | 81.1        | 80.6         | 39.6 | —    | 127.7 | —
VinVL         | 76.6         | 76.6         | 82.7      | 84.0         | —           | —            | 41.0 | 31.1 | 140.9 | 25.2
SimVLM base   | 77.9         | 78.1         | 81.7      | 81.8         | 84.2        | 84.2         | 39.0 | 32.9 | 134.8 | 24.0
SimVLM large  | 79.3         | 79.6         | 84.1      | 84.8         | 85.7        | 85.6         | 40.3 | 33.4 | 142.6 | 24.7
SimVLM huge   | 80.0         | 80.3         | 84.5      | 85.2         | 86.2        | 86.3         | 40.6 | 33.7 | 143.3 | 25.4

Evaluation results on a subset of 6 vision-language benchmarks in comparison with existing baseline models. Metrics used above (higher is better): BLEU-4 (B@4), METEOR (M), CIDEr (C), SPICE (S). Similarly, evaluation on NoCaps and Multi30K En-De also shows state-of-the-art performance.

Zero-Shot Generalization

Since SimVLM has been trained on large amounts of data from both visual and textual modalities, it is natural to ask whether it is capable of performing zero-shot cross-modality transfer. To this end, we examine the model on multiple tasks, including image captioning, multilingual captioning, open-ended VQA and visual text completion. We take the pre-trained SimVLM and decode it directly on multimodal inputs, either with fine-tuning on text data only or with no fine-tuning at all. Some examples are given in the figure below. We find that the model is able to generate not only high-quality image captions, but also German descriptions, achieving cross-lingual and cross-modality transfer at the same time.

Examples of SimVLM zero-shot generalization. (A) Zero-shot image captioning: Given an image together with text prompts, the pre-trained model predicts the content of the image without fine-tuning. (B) Zero-shot cross-modality transfer on German image captioning: The model generates captions in German even though it has never been fine-tuned with German image captioning data. (C) Generative VQA: The model is capable of generating answers outside the candidates of the original VQA dataset. (D) Zero-shot visual text completion: The pre-trained model completes a textual description grounded in the image contents. (E) Zero-shot open-ended VQA: The model provides factual answers to questions about images, after continued pre-training on the WIT dataset. Images are from NoCaps, which come from the Open Images dataset under the CC BY 2.0 license.

To quantify SimVLM's zero-shot performance, we take the pre-trained frozen model and decode it on the COCO Caption and NoCaps benchmarks, then compare with supervised baselines. Even without supervised fine-tuning (middle rows), SimVLM can reach zero-shot captioning quality close to that of supervised methods.

Zero-shot image captioning results. Here, "Pre." indicates the model is pre-trained and "Sup." means the model is fine-tuned with task-specific supervision. For NoCaps, [In, Near, Out] refer to in-domain, near-domain and out-of-domain respectively. We compare results from BUTD, AoANet, M2 Transformer, OSCAR and VinVL. Metrics used above (higher is better): BLEU-4 (B@4), METEOR (M), CIDEr (C), SPICE (S). For NoCaps, CIDEr numbers are reported.
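The frozen-model decoding used for these zero-shot results can be sketched as a simple greedy loop. The `make_fake_model` stand-in below emits a canned caption purely for illustration; it is not the real pre-trained decoder:

```python
def greedy_decode(model, image_patches, prompt, max_len=20, eos="<eos>"):
    """Zero-shot captioning: feed image patches plus a text prompt as
    the prefix, then let the frozen model extend it token by token
    until it emits an end-of-sequence token."""
    tokens = list(prompt)
    for _ in range(max_len):
        next_token = model(list(image_patches) + tokens)
        if next_token == eos:
            break
        tokens.append(next_token)
    return tokens[len(prompt):]            # the generated continuation

def make_fake_model(canned_caption):
    """Toy stand-in for the frozen pre-trained decoder: ignores its
    input and emits a fixed caption one token at a time, then <eos>."""
    state = {"i": 0}
    def model(prefix):
        i, state["i"] = state["i"], state["i"] + 1
        return canned_caption[i] if i < len(canned_caption) else "<eos>"
    return model

model = make_fake_model(["a", "dog", "chasing", "a", "yellow", "ball"])
caption = greedy_decode(model, ["patch"] * 4, ["A", "picture", "of"])
print(caption)
```

Note that no weights are updated here; the prompt ("A picture of") simply steers the frozen model toward caption-style continuations.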

Conclusion

We propose a simple yet effective framework for VLP. Unlike prior work using object detection models and task-specific auxiliary losses, our model is trained end-to-end with a single prefix language model objective. On various vision-language benchmarks, this approach not only obtains state-of-the-art performance, but also exhibits intriguing zero-shot behaviors in multimodal understanding tasks.

Acknowledgments

We would like to thank Jiahui Yu, Adams Yu, Zihang Dai, Yulia Tsvetkov, Hieu Pham, Chao Jia, Andrew Dai, Bowen Zhang, Zhifeng Chen, Ruoming Pang, Douglas Eck, Claire Cui and Yonghui Wu for preparing the SimVLM paper and for informative discussions, Krishna Srinivasan, Samira Daruki, Nan Du and Aashi Jain for help with data preparation, Jonathan Shen, Colin Raffel and Sharan Narang for assistance with experimental setup, and other members of the Brain team for their support throughout this project.

Sources

1/ http://ai.googleblog.com/2021/10/simvlm-simple-visual-language-model-pre.html
