The Present and Future Trends of Machine Learning on Devices [Update for 2021]

As you have probably noticed, on-device machine learning is developing rapidly. Apple mentioned it about a hundred times during WWDC 2017, and it is no surprise that developers want to add machine learning to their applications. APIs, IoT, big data, and AI are the driving forces behind machine learning in 2021.

However, many of these models are used only for inference on a fixed set of knowledge. Despite the term "machine learning", no learning actually happens on the device: the knowledge is baked into the model and does not improve over time.

Machine Learning Future Trends

The reason for this is that training a model requires a lot of processing power, and mobile phones are not yet capable of it. It is much easier to train models offline on a server farm and ship all model improvements in an application update.

That said, training on the device does make sense for some applications, and I believe that in time such training will become as familiar as using models for inference. In this article I want to explore the possibilities of this technology.

Machine Learning Today: Deep Learning in Neural Networks, an Overview

The most common application of deep learning in apps today is computer vision: analyzing photos and videos. But machine learning is not limited to images; it is also used for audio, language, time series, and other types of data. A modern phone has many different sensors and a fast Internet connection, which gives models plenty of data to work with.

iOS uses several deep learning models on the device: face detection in photos, the "Hey Siri" trigger phrase, and handwritten Chinese character recognition. But none of these models learns anything from the user.

Almost all machine learning APIs (MPSCNN, TensorFlow Lite, Caffe2) can make predictions based on user data, but you cannot get these models to learn anything new from that data.

Today, training takes place on servers with large numbers of GPUs. It is a slow process that requires a lot of data. A convolutional neural network, for example, is trained on thousands or millions of images. Training such a network from scratch takes several days on a powerful server, a few weeks on a desktop computer, and would take ages on a mobile device.

Training on the server is a good strategy if the model is updated infrequently and every user uses the same model. The application receives a model update whenever you update it in the App Store, or it periodically loads new parameters from the cloud.

Training large models on the device is impossible today, but it will not always be so. And these models do not have to be large. Most importantly: one model for everyone may not be the best solution.

Why do we need training on the device?

There are several advantages to training on the device:

  • The application can learn from the data or behavior of the user.
  • The data will remain on the device.
  • Transferring any process to the device saves money.
  • The model will be trained and updated continuously.

This solution does not work for every situation, but there are applications for it. I think its main advantage is the ability to tailor the model to a specific user.

On iOS devices, this is already done by some applications:

  • The keyboard learns from the texts you type and suggests the next word in the sentence. This model is trained specifically for you, not for other users. Since training takes place on the device, your messages are never sent to a cloud server.
  • The Photos application automatically organizes images into the People album. I'm not entirely sure how this works, but it uses the face detection API and groups similar faces together. Perhaps this is simply unsupervised clustering, but some learning must still occur, since the application lets you correct its mistakes and improves based on your feedback. Whatever the algorithm, this application is a good example of customizing the user experience based on user data.
  • Touch ID and Face ID learn from your fingerprint or face. Face ID continues to learn over time, so if you grow a beard or start wearing glasses, it will still recognize you.
  • Motion detection. The Apple Watch learns your habits, for example how your heart rate changes during different activities. Again, I do not know exactly how this works, but training obviously has to occur.
  • The Clarifai Mobile SDK lets users build their own image classification models from photos of objects and their labels. Typically a classification model requires thousands of training images, but this SDK can learn from just a few examples. Being able to create image classifiers from your own photos without being a machine learning expert has many practical applications.
Some of these tasks are easier than others. Often "learning" is simply remembering the user's last action. For many applications that is enough, and it does not require fancy machine learning algorithms.

The keyboard model is simple enough that training can happen in real time. The Photos application learns more slowly and consumes a lot of energy, so its training occurs while the device is charging. Many practical applications of on-device training fall between these two extremes.
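
To make the keyboard case concrete, here is a minimal sketch (in Python for brevity; a real iOS keyboard would use Swift, and its actual model is more sophisticated): a bigram model whose counts are updated incrementally with each sentence the user types. The class and method names are invented for illustration.

```python
from collections import defaultdict

class NextWordModel:
    """Hypothetical on-device next-word predictor: a bigram model
    whose counts grow incrementally as the user types."""

    def __init__(self):
        # counts[previous_word][next_word] -> how often the pair was seen
        self.counts = defaultdict(lambda: defaultdict(int))

    def learn(self, sentence):
        """Update counts from one sentence the user just typed."""
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word):
        """Suggest the most frequent follower of `word`, or None."""
        followers = self.counts.get(word.lower())
        if not followers:
            return None
        return max(followers, key=followers.get)

model = NextWordModel()
model.learn("see you later")
model.learn("see you tomorrow")
model.learn("see you tomorrow")
print(model.predict("you"))  # "tomorrow" - the user's more frequent habit
```

Because each update is just a few counter increments, this kind of "training" is cheap enough to run after every sentence, which is why real-time learning is feasible here.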

Other existing examples include spam detection (your email client learns from the messages you mark as spam), text correction (it learns your most common typing mistakes and fixes them), and smart calendars such as Google Now, which learn to recognize your regular activities.
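
The spam detection example can be sketched in the same spirit: a naive Bayes classifier whose word counts are updated each time the user labels a message. This is an illustrative toy under made-up data, not any particular mail client's implementation.

```python
import math
from collections import Counter

class SpamFilter:
    """Hypothetical on-device spam filter: naive Bayes with add-one
    smoothing, updated from messages the user labels."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.msg_counts = {"spam": 0, "ham": 0}

    def learn(self, text, label):
        """Incorporate one user-labeled message ('spam' or 'ham')."""
        self.msg_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def score(self, text, label):
        # log P(label) + sum of log P(word | label), add-one smoothed
        total = sum(self.msg_counts.values())
        logp = math.log(self.msg_counts[label] / total)
        n = sum(self.word_counts[label].values())
        vocab = len(self.word_counts["spam"] + self.word_counts["ham"])
        for word in text.lower().split():
            logp += math.log((self.word_counts[label][word] + 1) / (n + vocab))
        return logp

    def classify(self, text):
        return max(("spam", "ham"), key=lambda lbl: self.score(text, lbl))

f = SpamFilter()
f.learn("win a free prize now", "spam")
f.learn("free money click now", "spam")
f.learn("lunch meeting tomorrow", "ham")
f.learn("see you at the meeting", "ham")
print(f.classify("free prize money"))  # "spam"
print(f.classify("meeting tomorrow"))  # "ham"
```

Like the keyboard sketch, each `learn` call is a handful of counter updates, so the model adapts continuously without ever sending message text off the device.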

How far can we go with machine learning?

If the goal of on-device training is to adapt the machine learning model to the needs or behavior of specific users, what can we do with it?

Here's a fun example: a neural network that turns drawings into emoji. It asks you to draw a few different shapes and learns a model to recognize them. The application is implemented in a Swift playground, hardly the fastest platform, but even under those conditions the neural network does not train for long: on the device it takes only a few seconds (that's how this model works).

If your model is not too complicated, like this two-layer neural network, you can already train it on the device.
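
To show why a few seconds is plausible, here is a self-contained sketch of a two-layer network trained from scratch with plain gradient descent, in pure Python. The task (XOR) and all hyperparameters are chosen only for illustration; the drawings-to-emoji model would use image features instead.

```python
import math, random

random.seed(42)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy task: XOR, which needs a hidden layer (a linear model cannot learn it)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
lr = 0.5
for _ in range(5000):  # thousands of passes still finish in under a second
    for x, t in data:
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)  # gradient at the output
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # gradient at hidden unit j
            w2[j] -= lr * dy * h[j]
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            b1[j] -= lr * dh
        b2 -= lr * dy
loss_after = total_loss()
print(loss_before, "->", loss_after)  # the loss drops as the network learns
```

A network this small has a few dozen parameters, so even an interpreted language gets through thousands of gradient steps almost instantly; a phone running native code has plenty of headroom.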

Note: on iPhone X, developers have access to a low-resolution 3D model of the user's face. You could use this data to train a model that selects an emoji or triggers another in-app action based on the user's facial expression.

Here are a few other future opportunities:

  • Smart Reply is a model from Google that analyzes an incoming message or email and suggests a suitable reply. It is not yet trained on the device and recommends the same replies to all users, but in theory it could be trained on the user's own texts, which would greatly improve the model.
  • Handwriting recognition that learns your particular handwriting. This is especially useful on the iPad Pro with the Pencil. It is not a new feature, but if your handwriting is as bad as mine, the standard model makes too many errors.
  • Speech recognition that becomes more accurate and tuned to your voice.
  • Sleep and fitness tracking applications. Before these applications can give you advice on improving your health, they need to get to know you. For privacy reasons, that data is best kept on the device.
  • Personalized dialogue models. The future of chatbots remains to be seen, but their advantage is that a bot can adapt to you. As you talk to a chatbot, your device could learn your speech patterns and preferences and adjust the bot's replies to your personality and manner of communication (for example, Siri could learn to give fewer comments).
  • Improved advertising. No one likes ads, but machine learning can make them less intrusive for users and more profitable for advertisers. For example, an ad SDK could learn how often you view and click ads and select more suitable ones for you. The application could train a local model that requests only the ads that work for that particular user.
  • Recommendations are a widespread application of machine learning. A podcast player can be trained on the shows you have listened to in order to make suggestions. Today applications perform this operation in the cloud, but it could be done on the device.
  • Applications could help people with disabilities navigate and better understand their surroundings. I am not an expert here, but I can imagine apps that help, for example, distinguish between different medications using the camera.

These are just a few ideas. Since all people are different, machine learning models could adapt to our specific needs and desires. Training on the device makes it possible to create a unique model for a unique user.

Different scenarios for training models

Before you can apply a model, you need to train it, and training should continue in order to keep improving the model.

There are several training options:

  • No training on user data. Collect your own data or use publicly available data to create a single model. When you improve the model, you release an application update or simply load new parameters into it. This is what most existing applications with machine learning do.
  • Centralized training. If your application or service already requires user data that is stored on your servers, and you have access to it, you can train on that data server-side. The data can be used to train a model for a particular user or for all users. This is what platforms like Facebook do. This option raises questions of privacy, security, scaling, and many others. The privacy question can be addressed with Apple's differential privacy approach, but that also has its trade-offs.
  • Collaborative training. This method shifts the cost of training onto the users themselves. Training takes place on the device, and each user trains a small part of the model. Model updates are shared with the other users, so that they can learn from your data and you from theirs. But it is still a single model, and everyone ends up with the same parameters. The main advantage of this kind of training is its decentralization. In theory it is better for privacy, though according to some studies it can actually be worse.
  • Each user trains their own model. This is the option I am personally most interested in. The model can be trained from scratch (as in the drawings-and-emoji example) or it can be a pre-trained model fine-tuned on your data. Either way, the model can keep improving over time. For example, the keyboard starts with a model trained on a specific language but gradually learns to predict which sentence you want to write. The downside of this approach is that other users cannot benefit from it, so it only works for applications that deal with unique data.
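
The collaborative scenario can be sketched as federated-style averaging: each simulated device runs a few gradient descent passes on a shared linear model using only its private samples, and the server averages the resulting parameters. Everything here (the data, the learning rate, the round count) is made up purely for illustration.

```python
def local_update(weights, local_data, lr=0.05, epochs=20):
    """One device: a few passes of gradient descent on y = w0 + w1 * x,
    using only the data that stays on that device."""
    w0, w1 = weights
    for _ in range(epochs):
        for x, y in local_data:
            err = (w0 + w1 * x) - y
            w0 -= lr * err
            w1 -= lr * err * x
    return (w0, w1)

def federated_round(global_weights, per_device_data):
    """Server: collect locally trained weights and average them.
    Raw data never leaves the devices; only parameters travel."""
    updates = [local_update(global_weights, d) for d in per_device_data]
    n = len(updates)
    return (sum(u[0] for u in updates) / n, sum(u[1] for u in updates) / n)

# Three devices, each holding private samples of the same trend y = 2x + 1
devices = [
    [(0, 1), (1, 3)],
    [(2, 5), (3, 7)],
    [(4, 9), (5, 11)],
]
weights = (0.0, 0.0)
for _ in range(50):  # 50 communication rounds
    weights = federated_round(weights, devices)
print(weights)  # converges close to (1.0, 2.0)
```

The point of the sketch is the data flow: the global model improves even though no device ever sees another device's samples, which is exactly the privacy argument made for collaborative training above.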

How would training on the device work?

It is worth remembering that training on user data is different from training on a large dataset. The initial keyboard model can be trained on a standard text corpus (for example, all of Wikipedia), but a text message or email is written in a different style from a typical Wikipedia article, and that style varies from user to user. The model has to account for these kinds of variation.
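
One common way to handle this is interpolation: blend a fixed base model (trained offline on the big corpus) with a distribution built from the user's own typing. The sketch below is hypothetical; the probabilities, the blend weight `lam`, and the function names are all invented for illustration.

```python
# Base model shipped with the app: P(next word | context), trained offline
# on a large corpus. The numbers here are made up.
BASE = {"soon": 0.5, "later": 0.3, "there": 0.2}

user_counts = {}  # updated on the device as the user types

def observe(next_word):
    """Record what the user actually typed after this context."""
    user_counts[next_word] = user_counts.get(next_word, 0) + 1

def predict(lam=0.7):
    """Blend: lam * user distribution + (1 - lam) * base distribution.
    With no user data yet, this falls back to the base model."""
    total = sum(user_counts.values())
    words = set(BASE) | set(user_counts)
    def score(w):
        user_p = user_counts.get(w, 0) / total if total else 0.0
        return lam * user_p + (1 - lam) * BASE.get(w, 0.0)
    return max(words, key=score)

print(predict())  # no user data yet: the base model's "soon" wins
observe("tmrw"); observe("tmrw"); observe("tmrw")
print(predict())  # now the user's own habit "tmrw" wins
```

The base model covers general language, the user counts capture personal style, and the blend weight controls how quickly personalization takes over, which is one way a model can "provide for" per-user variation.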

Another problem is that our best deep learning methods are rather inefficient and crude. As I said, training an image classifier can take days or weeks. The training process, stochastic gradient descent, proceeds in small steps. A dataset can contain a million images, each of which the neural network will see about a hundred times.

Obviously, this method is not suitable for mobile devices. But often you do not need to train the model from scratch. Many people take an already trained model and apply transfer learning using their own data. But even these small datasets still consist of thousands of images, and training remains too slow.

With our current training methods, fine-tuning deep models on the device is still far away. But not all is lost: simple models can already be trained on the device. Classical machine learning models such as logistic regression, decision trees, or naive Bayes classifiers can be trained quickly, especially with second-order optimization techniques such as L-BFGS or conjugate gradients. Even a basic recurrent neural network should be feasible.
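
As an example of the "simple model" end of the spectrum, here is a logistic regression classifier small enough to retrain on the device in milliseconds. The text above mentions second-order methods like L-BFGS; for brevity this sketch uses plain gradient descent, which is enough at this scale. The toy task and its data are invented for illustration.

```python
import math

def train(data, lr=0.5, epochs=200):
    """Logistic regression via gradient descent.
    data: list of (feature_vector, label) with label 0 or 1."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(model, x):
    w, b = model
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Made-up personalization task: classify short light taps (0) vs long
# firm presses (1) from (pressure, duration) features.
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.8, 0.9], 1), ([0.9, 0.7], 1)]
model = train(data)
print(predict(model, [0.15, 0.15]))  # 0 (near the light-tap cluster)
print(predict(model, [0.85, 0.8]))   # 1 (near the firm-press cluster)
```

With a handful of features and a few hundred samples, this entire training loop runs in well under a second, which is why classical models are the realistic starting point for on-device training.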

For the keyboard, online learning can work: you run a training session after the user has typed a certain number of words. The same applies to models that use the accelerometer or motion data, where the input arrives as a constant stream of numbers. Since these models are trained on a small slice of the data, each update must be fast. If your model is small and you do not have much data, training will take seconds. But if your model is larger or you have a lot of data, you need to get creative. If the model learns the people's faces in your photo library, it has too much data to process, and you must find a balance between the speed and accuracy of the algorithm.

Here are a few more problems you will encounter with training on the device:

  • Large models. For deep learning networks, current training methods are too slow and require too much data. Much research is now devoted to training models on small amounts of data (for example, a single photo) and in a small number of steps. Any progress there will accelerate the spread of on-device training.
  • Multiple devices. You probably use more than one device. The problem of transferring data and models between a user's devices remains to be solved. For example, the Photos application in iOS 10 does not share information about people's faces between devices, so it learns on each device separately.
  • Application updates. If your application includes a trained model that adapts to the user's behavior and data, what happens to those adaptations when an app update replaces the model?

Training on the device is still in its infancy, but it seems to me that this technology will inevitably become important in building applications.

For more details, check out AI for Good, a United Nations platform centered around annual Global Summits that fosters dialogue on the beneficial use of artificial intelligence by developing concrete projects.

The 2020 summit begins May 4 and ends May 8, 2020.
