Friday, June 22, 2018

Top 10 Publications for Foundations and Trends in Machine Learning

Machine Learning Trends 2018


We look at how machine learning technologies and approaches have changed over the last five years, using Andrej Karpathy's study of arXiv papers as an example.

Andrej Karpathy, who heads machine learning at Tesla, set out to find out how ML trends have developed in recent years. To do this, he took the database of machine learning papers submitted over the past five years (about 28 thousand) and analyzed them. Andrej shared his findings on Medium.

Features of the document archive


Let's first look at how the total number of papers across all categories (cs.AI, cs.LG, cs.CV, cs.CL, cs.NE, stat.ML) is distributed over time. We get the following:

[Figure: total number of papers across the ML categories per month]

You can see that in March 2017 almost 2,000 papers were submitted. The peaks in the graph are probably tied to the dates of machine learning conferences (NIPS/ICML, for example).



The total number of papers will serve as a denominator: we can then look at what fraction of the papers mentions keywords of interest.
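As a rough illustration of this first step, here is a minimal sketch of counting papers per month. It assumes the arXiv metadata has already been exported to a CSV file (papers.csv, a hypothetical name) with a "created" submission-date column and a space-separated "categories" column; this is not Karpathy's own code.

```python
# A minimal sketch, assuming papers.csv (hypothetical) holds the arXiv
# metadata with "created" and "categories" columns.
import pandas as pd

papers = pd.read_csv("papers.csv", parse_dates=["created"])

# Keep only the ML-related categories used in the study.
ml_categories = {"cs.AI", "cs.LG", "cs.CV", "cs.CL", "cs.NE", "stat.ML"}
is_ml = papers["categories"].apply(
    lambda cats: bool(ml_categories & set(cats.split())))
ml_papers = papers[is_ml]

# Total number of papers per month; this series is the denominator
# used for the keyword fractions discussed below.
papers_per_month = ml_papers.set_index("created").resample("MS").size()
print(papers_per_month.tail())
```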

Deep learning frameworks

First, let's identify the most commonly used deep learning frameworks. To do this, we find papers that mention a framework anywhere in the text (even if only in the list of references).

For March 2017 the following picture is obtained:

[Figure: share of papers mentioning each framework, March 2017]



Thus, 10% of all papers submitted during this period mention TensorFlow. Of course, not every paper names the framework it uses, but if we assume that such mentions occur in a paper with some fixed probability, it turns out that about 40% of the machine learning community uses TensorFlow.
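The keyword counting itself can be as simple as the sketch below. It assumes the full text of every paper for a given month is available as .txt files in a directory (papers_2017_03/ is a hypothetical layout); the framework list is illustrative.

```python
# A rough sketch of the keyword counting described above: a mention anywhere
# in the text counts, even in the references.
from pathlib import Path

frameworks = ["tensorflow", "theano", "caffe", "torch", "keras"]
paths = list(Path("papers_2017_03").glob("*.txt"))

counts = {name: 0 for name in frameworks}
for path in paths:
    text = path.read_text(errors="ignore").lower()
    for name in frameworks:
        if name in text:
            counts[name] += 1

total = len(paths)
for name, mentioned in counts.items():
    print(f"{name}: {mentioned / total:.1%} of {total} papers")
```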

And here's a picture of how some of the most popular frameworks evolved over time:

[Figure: framework mentions over time]


You can see that Theano's growth has stalled. Caffe took off quickly in 2014 but has lost ground to TensorFlow in recent years, while Torch and PyTorch are slowly but surely gaining popularity. For beginners, it is always good practice to work through the Journal of Machine Learning Research.
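To reproduce a trend plot like the one above, the per-paper keyword check can simply be grouped by month. A sketch under the same assumptions as before, with papers.csv additionally assumed to carry a "text" column holding each paper's full text:

```python
# A sketch of the trend over time; papers.csv is hypothetical, as above.
import matplotlib.pyplot as plt
import pandas as pd

papers = pd.read_csv("papers.csv", parse_dates=["created"])
papers["month"] = papers["created"].dt.to_period("M")
papers["text"] = papers["text"].str.lower()

frameworks = ["tensorflow", "theano", "caffe", "pytorch"]
monthly_total = papers.groupby("month").size()

# Fraction of each month's papers mentioning each framework.
fractions = pd.DataFrame({
    name: papers[papers["text"].str.contains(name)].groupby("month").size()
    / monthly_total
    for name in frameworks
}).fillna(0)

fractions.plot()  # one curve per framework
plt.ylabel("fraction of papers mentioning the framework")
plt.show()
```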

Top 10 Publications for Foundations and Trends in Machine Learning 






1. Machine learning rules: best practices from Google developers
Topping the list of the best machine learning publications is a guide from Google's developers, designed to help those who already have basic machine learning knowledge but not enough experience to judge the benefits of particular practices.



The idea of the guide is similar to Google's C++ style guide and other popular guidelines for practical programming. It consists of 43 clearly described rules and recommendations.

2. Lessons learned from reproducing a reinforcement learning paper

Guided by the advice that carefully reproducing the results of machine learning papers is one of the most effective ways to improve your skills, the author describes in detail the experience gained while working on a project devoted to reinforcement learning.


3. On the way to the virtual stuntman
Motion control problems have recently become standard tasks for reinforcement learning, and deep learning methods have shown high efficiency here across a wide range of problems.

However, characters whose movement patterns are learned through reinforcement learning exhibit undesirable artifacts: jitter, asymmetric gaits, excessive limb movement. The publication discusses ways of teaching models more natural behavior.

4. Annotated Transformer
The Transformer architecture from last year's popular paper "Attention Is All You Need" has attracted the attention of many researchers in computational linguistics.

Besides improving translation quality, the approach provides a new architecture for many other natural language processing tasks. Although the original paper is written in clear language, the idea itself is rather difficult to implement correctly.
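The core building block is compact, though: below is a minimal NumPy sketch of scaled dot-product attention, the operation at the heart of the Transformer. It is an illustration only, not code from the Annotated Transformer.

```python
# A minimal NumPy sketch of scaled dot-product attention.
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """q, k: (seq_len, d_k); v: (seq_len, d_v)."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                  # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ v                               # weighted sum of values

# Tiny usage example with random data.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
v = rng.normal(size=(4, 16))
print(scaled_dot_product_attention(q, k, v).shape)   # (4, 16)
```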

5. Differentiable plasticity: a new method of machine learning
In the middle of our selection of the best machine learning publications for April 2018 is a post from Uber's AI lab about its work on neural networks and an attempt to carry over the concept of plasticity from biological neural networks. The plasticity of real neurons lies in connections that keep changing as neurons interact throughout the life of the network, which allows animals to adapt to changing conditions over their lifetimes.

The article considers one possible approach to this kind of "learning" in artificial neural networks. The Uber lab's scientific paper on which the post is based can be read on arXiv.
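The idea can be sketched in a few lines of PyTorch: each connection combines a fixed weight, a learned plasticity coefficient, and a Hebbian trace that keeps changing while the network runs. This is a hedged illustration of the concept described in the post, with illustrative sizes and update rule, not Uber's code.

```python
# A hedged sketch of the differentiable-plasticity idea.
import torch
import torch.nn as nn

class PlasticLayer(nn.Module):
    def __init__(self, in_features, out_features, eta=0.1):
        super().__init__()
        self.w = nn.Parameter(0.01 * torch.randn(in_features, out_features))
        self.alpha = nn.Parameter(0.01 * torch.randn(in_features, out_features))
        self.eta = eta  # how quickly the Hebbian trace adapts

    def forward(self, x, hebb):
        # Effective weight = fixed part + plastic part modulated by the trace.
        y = torch.tanh(x @ (self.w + self.alpha * hebb))
        # Hebbian update: strengthen connections between co-active units.
        hebb = (1 - self.eta) * hebb + self.eta * (x.t() @ y)
        return y, hebb

layer = PlasticLayer(16, 8)
hebb = torch.zeros(16, 8)
x = torch.randn(1, 16)
y, hebb = layer(x, hebb)
print(y.shape, hebb.shape)  # torch.Size([1, 8]) torch.Size([16, 8])
```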


6. Deep learning to improve the quality of medical imaging
The difficulty of working with medical imaging archives is that, in practice, they are made up of messy clinical data. This means that when you want to extract an image (for example, a frontal chest X-ray), you often get a folder of many different images instead: with horizontal and vertical flips, inverted pixel values, and rotations at various angles.
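To make the kinds of variation concrete, here is a small NumPy sketch (not from the article) that produces such variants from a clean image; one practical approach is to synthesize labeled variants like these and train a classifier to predict, and then undo, the transformation.

```python
# A small sketch of the variations described above: flips, inverted pixel
# values, and rotation.
import numpy as np

def random_clinical_variant(image, rng):
    """Return a randomly flipped / inverted / rotated copy and its label."""
    label = int(rng.integers(0, 4))
    if label == 1:
        image = image[:, ::-1]        # horizontal flip
    elif label == 2:
        image = image.max() - image   # inverted pixel values
    elif label == 3:
        image = np.rot90(image)       # rotated by 90 degrees
    return image, label

rng = np.random.default_rng(0)
xray = rng.random((256, 256))         # stand-in for a chest X-ray
variant, label = random_clinical_variant(xray, rng)
print(label, variant.shape)
```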


7. Why are companies abandoning RNNs and LSTMs?
Interest in recurrent neural networks and long short-term memory (LSTM) networks rose dramatically in 2014. Over the next few years these methods became some of the best ways to solve sequence learning and sequence-to-sequence (seq2seq) translation problems, leading to striking improvements in speech recognition and the corresponding progress of Siri, Cortana, and Google's voice assistant, better machine translation of documents, image-to-text conversion, and so on.

But now, in 2018, sequence models are no longer the best solution, and more and more companies are switching to attention-based networks. The author explains the advantages of this approach and why many companies have moved away from recurrent neural networks.

8. Keras and convolutional neural networks
This article is the second publication in a three-part series on building a complete deep-learning image classification pipeline. Accompanying the story with code examples, the author shows how to implement, train, and evaluate a convolutional neural network on your own dataset. We recommend reading all three parts: the final one demonstrates how to deploy the trained Keras model in a mobile application. Just for fun, the author fulfills a childhood dream by building a Pokedex, a device for recognizing Pokémon.
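In the same spirit, a minimal Keras sketch of a small convolutional classifier is shown below. The input shape and the number of classes are placeholders, and the tutorial's own, larger architecture differs.

```python
# A minimal Keras sketch of a small image classifier.
from tensorflow.keras import layers, models

num_classes = 5  # e.g. five Pokémon species

model = models.Sequential([
    layers.Input(shape=(96, 96, 3)),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training is then a single call, given image arrays X and one-hot labels y:
# model.fit(X, y, validation_split=0.2, epochs=25, batch_size=32)
```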



9. How to implement the YOLO object detector (v3) from scratch in PyTorch
Object detection is an area that has greatly benefited from recent developments in deep learning. As mentioned above, the best way to become familiar with an algorithm, and an object detection algorithm in particular, is to implement it yourself.
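As a taste of what such an implementation involves, here is a hedged PyTorch sketch of a YOLO-style detection head: a convolution predicts, for every grid cell and anchor, box offsets, an objectness score, and class scores. It illustrates the output layout only and is not the tutorial's full YOLOv3.

```python
# A hedged sketch of a YOLO-style detection head and its output layout.
import torch
import torch.nn as nn

num_classes = 80          # e.g. the COCO classes used by YOLOv3
num_anchors = 3           # anchor boxes per grid cell
attrs = 5 + num_classes   # x, y, w, h, objectness + class scores

detection_head = nn.Conv2d(256, num_anchors * attrs, kernel_size=1)

features = torch.randn(1, 256, 13, 13)   # feature map from a backbone network
raw = detection_head(features)           # shape (1, anchors * attrs, 13, 13)

# Reshape into one prediction row per (grid cell, anchor) pair.
batch, _, h, w = raw.shape
pred = raw.view(batch, num_anchors, attrs, h, w)
pred = pred.permute(0, 1, 3, 4, 2).reshape(batch, -1, attrs)

# Sigmoid squashes centre offsets, objectness and class scores; width/height
# stay raw here (YOLOv3 later combines them with anchor sizes).
xy = torch.sigmoid(pred[..., 0:2])
wh = pred[..., 2:4]
scores = torch.sigmoid(pred[..., 4:])
decoded = torch.cat([xy, wh, scores], dim=-1)
print(decoded.shape)  # torch.Size([1, 507, 85]) -> 13 * 13 * 3 predictions
```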

10. From viewing to listening: audio-visual speech separation
Closing the top ten best machine learning publications for April 2018 is a post from Google's AI blog. It is well known that even in a noisy environment people can focus their attention on a particular speaker, mentally "muting" all other voices and sounds. For machine learning, however, the same task remains an open problem. The post describes an audio-visual model that lets you, in particular, pick the person in a video whose speech you want to focus on and separate their voice from the overall noise.








Tuesday, May 29, 2018

How Did Google Surpass Amazon? Google vs Amazon 2018


Amazon Echo vs Google Home 


New categories of devices periodically appear on the market. Some become incredibly popular; smartphones are one example. Others are of less interest to a wider audience. Sometimes it takes more than a year for a new kind of device to spread, and over time the leaders of the new market segment emerge. Google, which recently announced the next version of its mobile OS, Android P, has managed to outperform Amazon in an area where Amazon had led from the very beginning.

[Image: Google Home]



Shipments of Google Home and Home Mini have surpassed shipments of Amazon Echo



Amazon introduced the first Amazon Echo back in 2015. As the first product in the new category, the Echo held the largest share of devices in use and continued to lead quarterly sales. That was the case until the first quarter of this year. Smart speakers, devices that combine a speaker with virtual personal assistant software, are the most dynamically growing category of technology products. Over the three-month period from January to March 2018, according to Canalys, 9 million devices in this category were shipped worldwide, an impressive 210% growth over the same period last year. But Google managed to outperform Amazon in shipments in this segment, as reported in a note by Alan Friedman published on phonearena.com.


During the first quarter of this year, 3.2 million Google Home and Google Home Mini units were shipped worldwide, outpacing Amazon's 2.5 million Echo smart speakers. It is worth stressing that this is the first time Google has managed to outship Amazon in smart speakers over a full quarter.

[Image: Google Home Mini]


Shipments of Google Home and Home Mini grew by 483%


Even more impressive is the growth in Google's smart speaker shipments: 483%. For comparison, the corresponding figure for Amazon is 8%. Over the period under review, 4.1 million smart speakers were shipped in the United States and 1.8 million in China.

Why isn't the Apple HomePod among the most popular smart speakers?

[Image: Amazon Echo]



The third-largest smart speaker supplier was the Chinese vendor Alibaba, with 1.1 million devices. Apple also offers consumers a smart speaker, the HomePod, but its shipment figures do not place it among the leaders and fall into the "Others" category. Vendors outside the four leading suppliers shipped a combined 1.56 million devices. However, Apple only began selling its smart speaker on February 9, 2018, so its product was on the market for only two incomplete months of the first quarter.





