An open source platform for finding the best ML model

Posted by Hanna Mazzawi, Research Engineer, and Xavi Gonzalvo, Research Scientist, Google Research

The success of a neural network (NN) often depends on how well it generalizes to various tasks. However, designing NNs that generalize well is challenging, because the research community's understanding of how a neural network generalizes is currently somewhat limited: What does the appropriate neural network look like for a given problem? How deep should it be? Which types of layers should be used? Would LSTMs be enough, or would Transformer layers be better? Or a combination of the two? Would ensembling or distillation boost performance? These tricky questions become even more challenging when considering machine learning (ML) domains where there may exist better intuition and deeper understanding than in others.

In recent years, AutoML algorithms have emerged [e.g., 1, 2, 3] to help researchers find the right neural network automatically, without the need for manual experimentation. Techniques such as neural architecture search (NAS) use algorithms like reinforcement learning (RL), evolutionary algorithms, and combinatorial search to build a neural network out of a given search space. With the proper setup, these techniques have demonstrated they can deliver results that are better than manually designed ones. But these algorithms are often compute heavy and require thousands of models to train before converging. Moreover, they explore search spaces that are domain specific and incorporate substantial prior human knowledge that does not transfer well across domains. As an example, in image classification, the traditional NAS searches for two good building blocks (a convolutional block and a downsampling block) and assembles them into a full network following traditional conventions.

To overcome these shortcomings and to extend access to AutoML solutions to the broader research community, we are happy to announce the open source release of Model Search, a platform that helps researchers develop the best ML models, efficiently and automatically. Instead of focusing on a specific domain, Model Search is domain agnostic and flexible, and is capable of finding the architecture that best fits a given dataset and problem, while minimizing coding time, effort and compute resources. It is built on TensorFlow, and can run either on a single machine or in a distributed setting.

Overview

The Model Search system consists of multiple trainers, a search algorithm, a transfer learning algorithm, and a database to store the various evaluated models. The system runs training and evaluation experiments for various ML models (different architectures and training techniques) in an adaptive, yet asynchronous, fashion. While each trainer conducts its experiments independently, all trainers share the knowledge gained from their experiments. At the beginning of every cycle, the search algorithm looks up all the completed trials and uses beam search to decide what to try next. It then invokes a mutation over one of the best architectures found so far and assigns the resulting model back to a trainer.

Model Search schematic illustrating the distributed search and ensembling. Each trainer runs independently to train and evaluate a given model. The results are shared with the search algorithm, which stores them. The search algorithm then invokes a mutation over one of the best architectures and sends the new model back to a trainer for the next iteration. S is the set of training and validation examples, and A is the set of all candidates used during training and search.
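To make this cycle concrete, the following toy sketch emulates it in Python. Everything here is illustrative: the block names, the fake accuracy, and the in-memory trial store are stand-ins for the real trainers, database, and search algorithm in Model Search, not its actual API.

    import random

    BLOCKS = ["lstm", "resnet", "transformer", "dense"]
    trials = []  # shared store of (architecture, accuracy); stands in for the database

    def train_and_evaluate(arch):
        return random.random()  # placeholder for real training and evaluation

    def mutate(arch):
        return arch + [random.choice(BLOCKS)]  # one possible mutation: deepen the network

    def trainer_cycle():
        if trials:
            # Beam search over completed trials: keep the best few, pick a parent.
            beam = sorted(trials, key=lambda t: -t[1])[:3]
            parent = random.choice(beam)[0]
        else:
            parent = [random.choice(BLOCKS)]  # seed architecture
        candidate = mutate(parent)
        trials.append((candidate, train_and_evaluate(candidate)))

    # Each call stands in for one asynchronous step of one trainer.
    for _ in range(20):
        trainer_cycle()
    print(max(trials, key=lambda t: t[1]))

In the real system, many trainers execute this loop concurrently against a shared database, which is what makes the search adaptive yet asynchronous.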

The system builds a neural network model from a set of predefined blocks, each of which represents a known micro-architecture, such as LSTM, ResNet or Transformer layers. By using blocks of pre-existing architectural components, Model Search is able to leverage the best existing knowledge from NAS research across domains. This approach is also more efficient, because it explores structures rather than their more fundamental and detailed components, thereby reducing the scale of the search space.

Neural network micro-architecture blocks that are known to work well, e.g., a ResNet block.

Because the Model Search framework is built on TensorFlow, blocks can implement any function that takes a tensor as input. For example, imagine that one wants to introduce a new search space built with a selection of micro-architectures. The framework will take the newly defined blocks and incorporate them into the search process, so that the algorithm can build the best possible neural network from the components provided. The blocks provided can even be fully defined neural networks that are already known to work for the problem of interest. In that case, Model Search can be configured to simply act as a powerful ensembling machine.
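As an illustration of what "a block is any function that takes a tensor as input" can mean in practice, the sketch below defines two such blocks with Keras layers and stacks a chosen sequence of them into a model. This is a hypothetical rendering of the idea, not the actual Model Search block registry or API.

    import tensorflow as tf

    # Each block maps an input tensor to an output tensor.
    def conv_block(x):
        return tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)

    def resnet_block(x):
        # Two convolutions plus a skip connection.
        y = tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu")(x)
        y = tf.keras.layers.Conv2D(64, 3, padding="same")(y)
        skip = tf.keras.layers.Conv2D(64, 1, padding="same")(x)  # align channel counts
        y = tf.keras.layers.Add()([y, skip])
        return tf.keras.layers.ReLU()(y)

    BLOCKS = {"conv": conv_block, "resnet": resnet_block}

    def build_model(block_names, input_shape=(32, 32, 3), num_classes=10):
        inputs = tf.keras.Input(shape=input_shape)
        x = inputs
        for name in block_names:  # the searched "architecture" is a sequence of block names
            x = BLOCKS[name](x)
        x = tf.keras.layers.GlobalAveragePooling2D()(x)
        outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
        return tf.keras.Model(inputs, outputs)

    model = build_model(["conv", "resnet", "resnet"])

A user-supplied block could equally be an entire pretrained network, which is what turns the system into an ensembling machine.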

The search algorithms implemented in Model Search are adaptive, greedy and incremental, which makes them converge faster than RL algorithms. They do, however, imitate the "explore & exploit" nature of RL algorithms by separating the search for good candidates (the explore step) from boosting accuracy by ensembling the good candidates that were discovered (the exploit step). The main search algorithm adaptively modifies one of the top k performing experiments (where k can be specified by the user) after applying random changes to the architecture or the training technique (e.g., making the architecture deeper).

An example of the evolution of a network over many experiments. Each color represents a different type of architecture block. The final network is formed via mutations of high-performing candidate networks; in this case, the network grows deeper.
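The exploit step, ensembling good candidates, can be pictured as follows. This is a toy sketch that simply averages the predictions of the best models found so far; Model Search's actual ensembling logic is more involved.

    import tensorflow as tf

    def ensemble(candidates):
        # `candidates` are assumed to be trained Keras models with identical
        # input and output shapes (e.g., the top-k models from the search).
        inputs = tf.keras.Input(shape=candidates[0].input_shape[1:])
        predictions = [model(inputs) for model in candidates]
        outputs = tf.keras.layers.Average()(predictions)
        return tf.keras.Model(inputs, outputs)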

To further improve efficiency and accuracy, transfer learning is enabled between the various internal experiments. Model Search does this in two ways: via knowledge distillation or via weight sharing. Knowledge distillation improves candidates' accuracy by adding a loss term that matches the high-performing models' predictions in addition to the ground truth. Weight sharing, on the other hand, bootstraps some of the parameters of a network (after applying mutations) by copying suitable weights from previously trained candidates and randomly initializing the remaining ones. This enables faster training, which in turn creates opportunities to discover more (and better) architectures.
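Both mechanisms are easy to picture in code. The sketch below is illustrative only: the loss weighting `alpha` and the name-based weight matching are assumptions made for this example, not Model Search's actual implementation.

    import tensorflow as tf

    # Knowledge distillation: a loss term that matches a high-performing
    # teacher's predictions in addition to the ground-truth labels.
    def distillation_loss(labels, student_logits, teacher_logits, alpha=0.5):
        hard = tf.keras.losses.sparse_categorical_crossentropy(
            labels, student_logits, from_logits=True)
        soft = tf.keras.losses.categorical_crossentropy(
            tf.nn.softmax(teacher_logits), student_logits, from_logits=True)
        return (1.0 - alpha) * hard + alpha * soft

    # Weight sharing: bootstrap a mutated candidate by copying suitable
    # weights from a trained parent; layers introduced by the mutation
    # keep their random initialization.
    def bootstrap(child, parent):
        parent_layers = {layer.name: layer for layer in parent.layers}
        for layer in child.layers:
            if layer.name in parent_layers:
                layer.set_weights(parent_layers[layer.name].get_weights())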

Experimental Results

Model Search improves upon production models with minimal iterations. In a recent paper, we demonstrated the capabilities of Model Search in the speech domain by discovering models for keyword spotting and language identification. Over fewer than 200 iterations, the resulting model slightly improved upon internal state-of-the-art production models designed by experts, while using roughly 130K fewer trainable parameters (184K compared to 315K).

Model accuracy by iteration in our system, compared to the previous production model for keyword spotting. A similar graph for language identification can be found in the linked paper.

We also applied Model Search to find an appropriate architecture for image classification on the heavily explored CIFAR-10 imaging dataset. Using a set of known convolution blocks, including convolutions, ResNet blocks (i.e., two convolutions and a skip connection), NAS-A cells, fully connected layers, and so on, we observed that it quickly reached the benchmark accuracy of 91.83% after only 209 trials (i.e., exploring only 209 models). By comparison, previous top performers reached the same threshold accuracy after 5807 trials with the NASNet algorithm (RL) and after 1160 trials with PNAS (RL + progressive).

Conclusion

We hope the Model Search code will provide researchers with a flexible, domain-agnostic framework for ML model discovery. By building upon previous knowledge for a given domain, we believe that this framework is powerful enough to build models with state-of-the-art performance on well-studied problems when provided with a search space composed of standard building blocks.

Acknowledgements

Special thanks to all code contributors to the open sourcing and the paper: Eugen Ehotaj, Scotty Yak, Malaika Handa, James Preiss, Pai Zhu, Aleks Kracun, Prashant Sridhar, Niranjan Subrahmanya, Ignacio Lopez Moreno, Hyun Jin Park, and Patrick Violette.
