Monday, October 15, 2018

5G EXPLAINED in Q&As | What is 5G Technology?

Networks are at the core of the ICT business, providing connectivity and services such as voice, messaging and broadband for customers. We are now embarking on the next generation mobile platform, 5G, which has been called a General Purpose Technology. New and innovative technologies such as the Internet of Things, virtualization and cloud, millimeter-wave radio networks and massive MIMO are all central to 5G. Check out the basic Q&A below to learn more about 5G technology, the business opportunity for ICT service providers, and the many technology aspects of IoT.


5G Services

5G EXPLAINED


What is 5G? What is 5G technology? 5G vs 4G


5G is the next generation mobile network, introducing new radio and core network solutions. It leverages much of the existing 4G network, but it will secure further capacity for mobile broadband, support IoT and address new areas for industry and public services. New spectrum will allow significantly increased capacity (and speed), and a new architecture will allow very low latency.

Who is 5G for? 


5G will, to a large extent, address today's mobile services, both data and voice, but it will also address new areas, serving society with services like e-health, public safety and various industry needs.

What will change with 5G? 


The change with 5G is much wider support for new services and areas of mobile connectivity, and smart adaptation to different needs and requirements.

What’s the plan for 5G launch?


Leading ICT service providers are going to run pilots around several use cases and locations to gain experience. As with 4G, ICT companies will start building 5G coverage in order to hold the best network position, and will start working to introduce 5G in new areas such as e-health, public safety and industry.


What are some key beliefs when it comes to 5G?


  1. 5G is the most investment-efficient way to meet capacity growth.
  2. 5G is a prerequisite for delivering new services and verticals in the future, e.g. emergency network services.
  3. 5G will complement fiber and will support copper (Cu) decommissioning.
  4. To fully utilize the commercial potential of new 5G capabilities, there is a need to be first.

Friday, June 22, 2018

Top 10 Publications for Foundations and Trends in Machine Learning

Machine Learning Trends 2018


We look at how machine learning technologies and approaches have changed over the last five years, using Andrej Karpathy's study as an example.

Andrej Karpathy, head of the machine learning department at Tesla, decided to find out how ML trends have developed in recent years. To do this, he took a database of machine learning papers from the past five years (about 28,000, from arXiv) and analyzed them. Karpathy shared his findings on Medium.

Features of the archive of documents


Let's first consider the distribution of the total number of submitted papers across all the categories (cs.AI, cs.LG, cs.CV, cs.CL, cs.NE, stat.ML) over time. We get the following:

ML trends

It can be seen that in March 2017 almost 2,000 papers were submitted. The peaks in the graph are probably tied to conference deadlines (NIPS/ICML, for example).



The total number of papers will serve as the denominator: we can then see what fraction of papers contain keywords of interest.

Fundamentals of Deep Learning

First, let's identify the most commonly used deep learning frameworks. To do this, we find papers that mention a framework anywhere in the text (even in the bibliography).

For March 2017 the following picture is obtained:

ml trends 2019



Thus, 10% of all papers submitted during this period mention TensorFlow. Of course, not every article names the framework it uses, but if we assume that such mentions occur with some fixed probability, it follows that about 40% of the machine learning community uses TensorFlow.
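
To make the arithmetic concrete, here is a minimal sketch of how such fractions can be computed from a dump of papers, and how a 10% mention rate scales to roughly 40% usage under an assumed 25% chance that a framework user mentions it. The paper list and keyword set are illustrative stand-ins, not Karpathy's actual code or data.

```python
import re

# Illustrative stand-in for the ~28,000 arXiv papers.
papers = [
    "We train a convnet in TensorFlow for image classification ...",
    "A theoretical analysis of SGD convergence rates ...",
    "Our PyTorch implementation of seq2seq translation ...",
    "We fine-tune a Caffe model for object detection ...",
]

frameworks = ["tensorflow", "pytorch", "caffe", "theano", "torch"]
p_mention = 0.25  # assumption: a framework user mentions it 25% of the time

for fw in frameworks:
    # \b keeps "torch" from matching inside "pytorch".
    mentions = sum(bool(re.search(rf"\b{fw}\b", p, re.I)) for p in papers)
    fraction = mentions / len(papers)
    est_usage = min(fraction / p_mention, 1.0)  # mention rate / mention prob.
    print(f"{fw}: mentioned in {fraction:.0%} of papers, "
          f"estimated usage about {est_usage:.0%}")
```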

And here's a picture of how some of the most popular frameworks evolved over time:

ml trends 2019


You can see that Theano's popularity has tailed off. Caffe took off quickly in 2014 but has lost ground to TensorFlow in recent years. Torch and PyTorch are slowly but surely gaining popularity. For beginners, it is always good practice to read through the Journal of Machine Learning Research.

Top 10 Publications for Foundations and Trends in Machine Learning 



machine learning trends 2019



1. Machine learning rules: best practices from Google developers
Topping the list of the best machine learning publications is one from Google's developers, designed to help those who already have basic knowledge of machine learning but lack the experience to judge the benefits of particular practices.



The idea of the manual is similar to Google's C++ style guide and other popular guides to practical programming. The manual consists of 43 clearly described rules and recommendations.

2. Lessons learned from reproducing a reinforcement learning paper

Guided by the advice that carefully reproducing the results of machine learning publications is one of the most effective ways to improve one's skills, the author describes in detail the experience gained while reproducing a reinforcement learning project.


3. On the way to the virtual stuntman
The problem of controlling motion dynamics has recently become a standard task for reinforcement learning, and deep learning methods have shown high efficiency here across a wide range of problems.

However, characters whose movement patterns are found through reinforcement learning exhibit undesirable artifacts: jitter, asymmetric gait, excessive limb mobility. The publication discusses ways of teaching models more natural behavior.

4. Annotated Transformer
The idea of the Transformer architecture from last year's popular article "Attention Is All You Need" has attracted the attention of many researchers in computational linguistics.

In addition to improving translation quality, this approach provides a new architecture for many other natural language processing tasks. Although the source article is written in clear language, the idea itself is rather difficult to implement correctly.

5. Differentiable plasticity: a new method of machine learning
5. Differentiable plasticity: a new method of machine learning
In the middle of this selection of the best machine learning publications for April 2018 is a post from Uber's artificial intelligence lab about its neural network research and an attempt to carry over the concept of plasticity from biological neural networks. The plasticity of real neurons lies in the ability of the connections between them to keep changing throughout the network's existence, which allows animals to adapt to changing conditions throughout their lives.

The article considers one possible approach to such "learning" in artificial neural networks. The Uber lab's scientific paper, which served as the source for the post, can be read on arXiv.


6. Deep learning to improve the quality of medical imaging
The difficulty of working with medical imaging archives is that, in bulk, they reflect messy clinical reality. This means that when you want to extract an image (for example, a frontal chest X-ray), you often get instead a folder of many different images: with horizontal and vertical flips, inverted pixel values, and rotations at various angles.


7. Why do companies stop using RNN and LSTM?
Interest in recurrent neural networks and networks based on long short-term memory (LSTM) rose dramatically in 2014, and over the next few years these methods became some of the best ways to solve sequence learning and sequence-to-sequence (seq2seq) translation problems. This led to striking gains in speech recognition quality and the corresponding development of Siri, Cortana and Google's voice assistant, better machine translation of documents, image-to-text conversion, and so on.

But now, in 2018, sequential models are no longer the best solution, and more and more companies are moving to attention-based networks. The author explains the advantages of this approach and why so many companies have moved away from recurrent neural networks.

8. Keras and convolutional neural networks
This is the second publication in a three-part series on building a complete image classification pipeline based on deep learning. The author, accompanying the story with code examples, shows how to implement, train and evaluate a convolutional neural network on your own dataset. We recommend reading all three parts: the final one demonstrates how to deploy the trained Keras model in a mobile application. Just for fun, the author fulfills a childhood dream by creating a Pokedex, a device for recognizing Pokémon.



9. How to implement the YOLO object detector (v3) from scratch in PyTorch
Object detection is an area that has benefited greatly from the latest developments in deep learning. As mentioned above, the best way to get acquainted with an algorithm, in particular an object detection algorithm, is to implement it yourself.

10. From viewing to listening: audio-visual separation of speech
Closing the top ten machine learning publications for April 2018 is a post from Google's AI blog. It is well known that even in a noisy environment people can focus their attention on a particular speaker, mentally "muting" all other voices and sounds. The same task, however, remains an open problem for machine learning. The post describes an audiovisual model that allows you, in particular, to pick the person in a video whose speech you want to focus on and separate their voice from the general noise.






The https://www.digitaltechnologyreview.com/ site is a participant in the Amazon Services LLC Associates Program, and we earn a commission on purchases made through our links.


Tuesday, May 29, 2018

How Did Google Surpass Amazon? Google vs Amazon 2018


Amazon Echo vs Google Home 


Periodically, new categories of devices appear on the market. Some of them become incredibly popular; smartphones are an example. Others are less interesting to a wider audience, and it sometimes takes more than a year before new devices spread widely. Over time, the leaders of a new market segment emerge. Google, which recently announced Android P, the next version of its mobile OS, has managed to outperform Amazon in an area Amazon had led from the very beginning.

 Google vs Amazon 2018
Google Home



Shipments of Google Home and Home Mini have surpassed shipments of Amazon Echo

 Google vs Amazon


Amazon introduced the first Amazon Echo back in 2015. As the first product in the new category, the Echo had the largest share of devices in use and continued to lead sales quarter after quarter, right up until the first quarter of this year. Smart speakers, which combine a speaker with virtual personal assistant software, are the most dynamically developing category of technology products. Over the three-month period from January to March 2018, according to Canalys, 9 million devices in this category were shipped worldwide, an impressive 210% growth over the same period last year. But Google managed to outship Amazon in this market segment, as reported in a note by Alan Friedman published on phonearena.com.


During the first quarter of this year, 3.2 million Google Home and Google Home Mini units were shipped worldwide, outpacing Amazon's 2.5 million Echo smart speakers. It is worth stressing that this is the first time Google has managed to outship Amazon in smart speakers over a full quarter.

 Google vs Amazon 2019
Google Home Mini


Growth in shipments of Google Home and Home Mini: 483%


Even more impressive is the growth in shipments of Google's smart speakers: 483%. For comparison, Amazon's figure is 8%. During the period under review, 4.1 million smart speakers were shipped in the United States and 1.8 million in China.

Why isn't the Apple HomePod among the most popular smart speakers?

Amazon Echo
Amazon Echo 2018



The third-largest supplier of smart speakers was the Chinese vendor Alibaba, with 1.1 million devices. Apple also offers consumers its own smart speaker, the HomePod, but its shipment figures did not place it among the leaders, falling instead into the "Others" category. Vendors outside the four leading suppliers shipped 1.56 million devices in aggregate. However, Apple began offering its smart speaker only on February 9, 2018, so its product was on the market for barely two months of the first quarter.






Gyroscope Robot is on Sale now in Summer 2018


Gyrobot


The miniature and very charming self-balancing robot scooter Loomo from Segway has gotten its own page on Indiegogo and promises to go on sale this spring; shipment of the first batches is planned for May 2018.

Loomo is not an ordinary self-balancing scooter; it is a full-fledged robot running on an AI platform. You can ride it, but that is not its only function. It can respond to commands, recognize faces and silhouettes, and knows a "follow me" command, so if you are tired of riding, you can walk and have Loomo keep pace and follow you on its own. The software developers collaborated with the creators of BMW's autopilot, so the robot scooter can also "park" itself.

Gyroscope Robot


Gyroscope Robot 2018


The little robot can travel at speeds of up to 10 kilometers per hour and can be put to various tasks; for example, you can have it carry advertising or play video.

Gyrobot reviews

Like most "smart" things, Loomo has its own application, which allows using a smartphone to move a robot, setting a route for it, watching the world with its eyes, voice phrases entered into the application, monitor people, and still shoot photos and videos using a robot. In addition, developers report that they will release a separate SDK for Android, so everyone can independently provide the robot with new features and tricks, and at the same time and tighten programming skills.

For now, Loomo can be bought for $1,299; that is the "early bird" price for those who manage to order a robot from the first batch. For everyone else, the price will start at $1,799.








Machine Learning Future Trends and AI Doomsday Take Over


Machine Learning Introduction

What is machine learning?


Machine learning is an extensive subfield of artificial intelligence that studies methods of constructing algorithms that can learn. There are two types of learning. Inductive learning, or learning from examples, is based on identifying patterns in empirical data; deductive learning involves formalizing the knowledge of experts and transferring it to the computer as a knowledge base. Deductive learning is usually assigned to the field of expert systems, so the terms machine learning and learning from precedents can be considered synonymous. Many methods of inductive learning were developed as an alternative to classical statistical approaches. Meanwhile, entire tech think tanks are predicting that robots will take over jobs.

Artificial intelligence creates levels for Doom no worse than humans do


Can you provide a modern three-dimensional shooter with an infinite number of different levels? You can, if you train artificial intelligence to create them. This is what the researchers from the Polytechnic University of Milan have been doing. Their algorithms are trained on the well-known game Doom. 

How Frightened Should We Be of AI?


machine learning introduction
Machine Learning 

The three-dimensional shooter Doom appeared 25 years ago thanks to the talented programmer John Carmack. It lingered on the drives of personal computers for a long time thanks to the efforts of John Romero and American McGee, who created the game's levels. In addition, id Software released a level editor that allowed players to extend the game for free.

The game's continued popularity and the huge number of levels created by real people made Doom ideal for training artificial intelligence. But we should pay tribute to the researchers from the Polytechnic University of Milan: they applied a very interesting approach to the task.

A generative adversarial network was created. Two algorithms studied thousands of Doom levels created over the game's entire existence. Then one of them began to compose its own levels, while the second compared levels created by people with levels created by the artificial intelligence. If the second algorithm could not distinguish a level generated by the first from levels created by people, that level was considered suitable for play.
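
The article does not include the Milan group's code, but the adversarial setup it describes can be sketched in a few lines of PyTorch. Below, "levels" are stood in for by random 64-dimensional feature vectors; every name, size and data choice is illustrative, not the researchers' actual model.

```python
import torch
import torch.nn as nn

# Toy GAN sketch of the Doom-level idea: a generator proposes fake "level"
# feature vectors, a discriminator tries to tell them from real ones.
NOISE, FEAT = 16, 64

G = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(), nn.Linear(128, FEAT))
D = nn.Sequential(nn.Linear(FEAT, 128), nn.ReLU(),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_levels = torch.randn(256, FEAT)  # placeholder for human-made levels

for step in range(200):
    real = real_levels[torch.randint(0, 256, (32,))]
    fake = G(torch.randn(32, NOISE))

    # Discriminator: label human-made levels 1, generated levels 0.
    d_loss = (bce(D(real), torch.ones(32, 1)) +
              bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator score fakes as real.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```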

AI Takeover Future of machine learning


Of course, very few people play Doom now, but this approach can be used for any modern game. One need only train the artificial intelligence well, and then people like Romero and McGee will be out of work.

Sunday, May 27, 2018

The Present and Future Trends of Machine Learning on Devices

As you have no doubt noticed, machine learning on devices is a growing trend; Apple mentioned it about a hundred times during WWDC 2017. It is no surprise that developers want to add machine learning to their applications.

However, many of these models are used only to draw conclusions from a fixed body of knowledge. Despite the term "machine learning", no learning occurs on the device: the knowledge is baked into the model and does not improve over time.

Machine Learning Future Trends 2019


The reason is that training a model requires a lot of processing power, and mobile phones are not yet up to it. It is much easier to train models offline on a server farm and ship all model improvements in an application update.




It is worth noting that on-device training makes sense for some applications, and I believe that in time such training will become as familiar as using models for prediction. In this text I want to explore the possibilities of this technology.

Machine Learning Future Trends
Machine Learning Future Trend

Machine Learning Today: Deep Learning in Neural Networks, an Overview

The most common application of deep learning and machine learning in apps today is computer vision for analyzing photos and videos. But machine learning is not limited to images; it is also used for audio, language, time series and other types of data. A modern phone has many different sensors and a fast Internet connection, which yields plenty of data for models.

iOS ships several on-device deep learning models: face recognition in photos, the "Hey Siri" trigger phrase, and handwritten Chinese character recognition. But none of these models learn anything from the user.

Almost all machine learning APIs (MPSCNN, TensorFlow Lite, Caffe2) can make predictions from user data, but you cannot get these models to learn anything new from that data.

Today, training happens on servers with a large number of GPUs. It is a slow process that requires a lot of data. A convolutional neural network, for example, is trained on thousands or millions of images. Training such a network from scratch takes several days on a powerful server, a few weeks on a desktop computer, and ages on a mobile device.

Training on the server is a good strategy if the model is updated irregularly and every user uses the same model. The application receives a model update whenever it is updated in the App Store, or by periodically loading new parameters from the cloud.

Training large models on the device is impossible today, but it will not always be so. Besides, the models do not have to be large. And most importantly, one model for everyone may not be the best solution.


Why do I need training on the device?

There are several advantages of learning on the device:


  • The application can learn from the data or behavior of the user.
  • The data will remain on the device.
  • Transferring any process to the device saves money.
  • The model will be trained and updated continuously.

This solution does not work for every situation, but there are applications for it. I think its main advantage is the ability to tailor the model to a specific user.




On iOS devices, this is already done by some applications:

The keyboard learns from the texts you type and predicts the next word in the sentence. This model is trained specifically for you, not for other users. Since training takes place on the device, your messages are not sent to a cloud server.
The "Photos" application automatically organizes images into the "People" album. I'm not entirely sure how this works, but the program uses the Face Recognition API on the photo and places similar faces together. Perhaps this is simply uncontrolled clustering, but the learning should still occur, since the application allows you to correct its errors and is improved based on your feedback. Regardless of the type of algorithm, this application is a good example of customization of user experience based on their data.
Touch ID and Face ID learn based on your fingerprint or face. Face ID continues to learn over time, so if you grow a beard or start wearing glasses, it will still recognize your face.
Motion detection. The Apple Watch learns your habits, for example how your heart rate changes during different activities. Again, I do not know exactly how this works, but training must obviously occur.
The Clarifai Mobile SDK allows users to create their own image classification models using photos of objects and their labels. Typically, a classification model requires thousands of images for training, but this SDK can learn from only a few examples. The ability to create image classifiers from your own photos without being a machine learning expert has many practical applications.
Some of these tasks are easier than others. Often "learning" is simply remembering the last action of the user. For many applications this is enough, and this does not require fancy machine learning algorithms.

The keyboard model is simple enough that training can occur in real time. The "Photos" application learns more slowly and consumes a lot of energy, so training happens while the device is charging. Many practical applications of on-device training fall between these two extremes.

Other existing examples include spam detection (your email client learns from the messages you mark as spam), text correction (it learns your most common typing mistakes and fixes them) and smart calendars like Google Now, which learn to recognize your regular activities.

How far can we go in machine learning?


If the goal of learning on the device is to adapt the machine learning model to the needs or behavior of specific users, then what can we do about it?

Here's a fun example: a neural network that turns drawings into emoji. It asks you to draw a few different shapes and learns a model to recognize them. The application is implemented in a Swift Playground, not the fastest of platforms, but even so the neural network does not take long to train: on the device it takes only a few seconds (that's how this model works).

If your model is not too complicated, like this two-layer neural network, you can already train it on the device.
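
The demo itself is in Swift, but the same kind of two-layer network is easy to sketch in NumPy to see why training takes only seconds. The sizes and data here are illustrative stand-ins for rasterized drawings, not the demo's actual code.

```python
import numpy as np

# Minimal two-layer network of the sort that trains in seconds on-device.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))           # 200 "drawings", 64 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy labels: two "emoji" classes

W1 = rng.normal(scale=0.1, size=(64, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 2));  b2 = np.zeros(2)

for epoch in range(100):
    h = np.maximum(0, X @ W1 + b1)                  # ReLU hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)                    # softmax probabilities
    grad = p.copy(); grad[np.arange(len(y)), y] -= 1
    grad /= len(y)                                  # dLoss/dlogits
    dW2 = h.T @ grad; db2 = grad.sum(0)
    dh = grad @ W2.T * (h > 0)                      # backprop through ReLU
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for P, dP in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.5 * dP                               # plain gradient step

print("train accuracy:", (p.argmax(1) == y).mean())
```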

Note: on iPhone X, developers have access to a 3D model of the user's face in low resolution. You can use this data to train a model that selects emoji or another action in the application based on the facial expressions of the users.

Here are a few other future opportunities:


  • Smart Reply is a model from Google that analyzes an incoming message or letter and offers a suitable answer. It is not yet trained on the device and recommends the same answers to all users, but (in theory) it can be trained on the user's texts, which will greatly improve the model.
  • Handwriting recognition that learns your handwriting specifically. This is especially useful on the iPad Pro with the Pencil. This is not a new feature, but if your handwriting is as bad as mine, the standard model will make too many errors.
  • Speech recognition that becomes more accurate as it adjusts to your voice.
  • Sleep tracking and fitness applications. Before these applications can give you tips on improving your health, they need to know you. For privacy reasons, that data is best kept on the device.
  • Personalized models for dialogue. We have yet to see the future of chat bots, but their advantage is that a bot can adapt to you. When you talk to a chat bot, your device will learn your speech and preferences and adapt the bot's answers to your personality and manner of communication (for example, Siri could learn to give fewer comments).
  • Improved advertising. No one likes advertising, but machine learning can make it less intrusive for users and more profitable for the advertiser. For example, an advertising SDK can learn how often you look and click on ads, and choose more suitable advertising for you. The application can train a local model that will only request advertisements that work for a particular user.
  • Recommendations are a widespread use of machine learning. A podcast player can be trained on the shows you have listened to in order to make suggestions. Applications now perform this operation in the cloud, but it could be done on the device.
  • For people with disabilities, applications could help navigate and better understand their surroundings. I am no expert here, but I can imagine apps that help, for example, distinguish between different medications using the camera.

These are just a few ideas. Since all people are different, machine learning models could adapt to our specific needs and desires. Training on the device allows you to create a unique model for a unique user.


Different scenarios for learning models


Before applying the model, you need to train it. Training should be continued to further improve the model.

There are several training options:

  • No training on user data. Collect your own data or use publicly available data to create a single model. When you improve the model, you release an application update or simply load new parameters into it. This is what most existing applications with machine learning do.
  • Centralized training. If your application or service already requires user data that is stored on your servers, and you have access to it, you can train on that data server-side. The data can be used to train a model for a particular user or for all users. This is what platforms like Facebook do. This option raises questions of privacy, security, scaling and many others. The privacy question can be addressed with a method like Apple's differential privacy, but that too has its trade-offs.
  • Collaborative training. This method shifts the training cost to the users themselves. Training takes place on the device, and each user trains a small part of the model. Model updates are sent to other users so that they can learn from your data and you from theirs. It is still a single model, though, and everyone ends up with the same parameters. The main advantage of such training is its decentralization. In theory this is better for privacy, although studies suggest it can be worse. A minimal sketch of this idea appears after this list.
  • Each user trains their own model. This is the option I am personally most interested in. The model can be trained from scratch (as in the drawings-to-emoji example) or it can be a pre-trained model that is fine-tuned on your data. Either way, the model can keep improving over time. For example, the keyboard starts with a model already trained for a specific language but gradually learns to predict what you want to write. The downside of this approach is that other users cannot benefit from it. So it works only for applications that use unique data.
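
A minimal sketch of the collaborative option above, in the spirit of federated averaging: simulated clients each train a copy of a shared linear model on private data, and a server averages the results. The data, model and dimensions are synthetic assumptions, not any vendor's actual protocol.

```python
import numpy as np

# Federated-averaging sketch: each simulated user improves the shared model
# on their own data; only parameters, never raw data, leave the "device".
rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])  # ground truth the clients' data follows

def local_step(w, n=50, lr=0.1, epochs=5):
    # One user's on-device training on private (here synthetic) data.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / n   # mean-squared-error gradient
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for rnd in range(10):
    # Five users train locally; the server averages their parameters.
    local_models = [local_step(w_global.copy()) for _ in range(5)]
    w_global = np.mean(local_models, axis=0)

print("learned weights:", w_global)  # should approach [2, -1]
```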



How to Train on the Device?


It is worth remembering that training on user data differs from training on a large dataset. The initial keyboard model can be trained on a standard text corpus (for example, all of Wikipedia), but a text message or email is written in a different style than a typical Wikipedia article, and that style will differ from user to user. The model must allow for these kinds of variation.

Another problem is that our best deep learning methods are rather inefficient and crude. As I said, training an image classifier can take days or weeks. The training process, stochastic gradient descent, proceeds in small steps; a dataset may contain a million images, each of which the neural network will see about a hundred times.

Obviously, this method is not suitable for mobile devices. But you often do not need to train a model from scratch: many people take an already trained model and apply transfer learning on their own data. Even these small datasets still consist of thousands of images, though, and training remains too slow.

With our current training methods, fine-tuning models on the device is still far off. But all is not lost: simple models can already be trained on the device. Classical machine learning models such as logistic regression, decision trees or naive Bayes classifiers can be trained quickly, especially with second-order optimization techniques such as L-BFGS or conjugate gradient. Even a basic recurrent neural network should be within reach.
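
For a sense of scale, here is a sketch using scikit-learn's logistic regression with its L-BFGS solver, one of the classical-model-plus-second-order-optimizer combinations the paragraph mentions. The dataset is synthetic; on a laptop this fit completes in a fraction of a second.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# A classical model trained with L-BFGS: fast enough for on-device scales.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

clf = LogisticRegression(solver="lbfgs", max_iter=200)
clf.fit(X, y)  # completes in well under a second on commodity hardware
print("accuracy on training data:", clf.score(X, y))
```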

For the keyboard, online learning can work: you run a training session after every so many words the user types. The same applies to models that use the accelerometer and motion information, where data arrives as a constant stream of numbers. Since these models are trained on a small slice of the data, each update must be quick. If your model is small and you do not have much data, training will take seconds. But if your model is larger or you have a lot of data, you need to get creative. If the model learns people's faces from your photo gallery, it has too much data to process, and you have to balance the speed and accuracy of the algorithm.
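
This kind of incremental updating is what scikit-learn's partial_fit API models: a small update after each fresh batch, never retraining from scratch. A toy, keyboard-flavored sketch follows; the texts, labels and hashed feature pipeline are illustrative assumptions, and loss="log_loss" assumes scikit-learn 1.1 or newer (older versions use "log").

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# Online-learning sketch: update the model after every small batch of
# user text instead of retraining from scratch. Data is made up.
vec = HashingVectorizer(n_features=2**12)
clf = SGDClassifier(loss="log_loss")

batches = [
    (["see you soon", "running late"], [1, 0]),       # 1 = casual style
    (["per our discussion", "kind regards"], [0, 0]),  # 0 = formal style
    (["lol ok", "on my way"], [1, 1]),
]

classes = [0, 1]  # all classes must be declared on the first call
for texts, labels in batches:
    clf.partial_fit(vec.transform(texts), labels, classes=classes)

print(clf.predict(vec.transform(["talk soon"])))
```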

Here are a few more problems that you will encounter when learning on the device:


  • Large models. For deep networks, current training methods are too slow and require too much data. Much research is now devoted to training models from small amounts of data (one-shot learning from a single photo, for example) and in a small number of steps. Any progress there should speed the spread of on-device learning.
  • Multiple devices. You probably use more than one device. The problem of transferring data and models between a user's devices remains to be solved. For example, the Photos application in iOS 10 does not share information about people's faces between devices, so it learns on each device separately.
  • Application updates. If your application includes a trained model that adapts to user behavior and data, what happens when you update the model with the application?

Training on the device is still at the beginning of its development, but it seems to me that this technology will inevitably become important in the creation of applications.

Saturday, May 19, 2018

Current Trend in Artificial Intelligence | Machine Learning Future Trends

Artificial Intelligence 2018. Are You Ready for AI? See What's Happening in 2018 and 2019



2045 is the predicted year for the invention of full-fledged artificial intelligence that imitates human thinking.

An inexperienced Internet user may be surprised to learn that artificial intelligence (AI) is still a technology of the future: that is how firmly the concept of AI has lodged itself in the news agenda. Almost every day the largest technology corporations report achievements of their own "intellects". An "intellect" can already process photos, play games against you (in Go, by the way, it has beaten all the champions), chat on Twitter, and turn off the lights or turn on music through a voice interface. But in fact, developers have so far made progress only in what the US National Science and Technology Council (NSTC) defines as "narrow" AI. "General" AI, which would be a full-fledged imitation of human thinking, has yet to be invented, and it is impossible to predict how long that will take.




Artificial Intelligence in Everyday Life | AI Revolution


The research process, and the emerging market behind it, is now being driven primarily by machine learning. This subfield of AI works through artificial neural network algorithms. Neural networks operate on a principle resembling the human brain: they draw conclusions from the analysis of large datasets. For example, in December 2016 a group of researchers from the Massachusetts Institute of Technology (MIT) taught an artificial intelligence to turn static images into dynamic ones. To do this, the scientists "fed" the AI 2 million videos with a total duration of about a year and programmed it to predict motion from a still image. When the MIT "intellect" received a photo of a beach, it "brought the photo to life" with moving sea waves; from an image of a railway station it "directed" a short film (so far only 32 seconds long) of a train departing. One practical application of the technology is driverless vehicles, which must be able to decide instantly between a sharp maneuver and staying on course when an obstacle appears on the road, in order to avoid tragic consequences (today up to 90% of accidents are caused by driver error).



Machine Learning Future Trends


Google trains its own developments in a similar way. The corporation, for example, has (with support from Oxford University) taught an AI to read lips more effectively than professionals. After studying 5 thousand hours of video, the "intellect" was able to recognize 47% of the words in a random sample of 200 fragments, while a human managed only 12%. The company has also invited users to help teach its AI to recognize images, in an entertaining form: in November 2016 it launched a game experiment, Quick, Draw!, in which the AI must guess within 20 seconds what a person is drawing on the screen.

Many companies already use image recognition for commercial purposes. The Russian startup Prisma, for example, trained its neural network to render users' photos in the styles of different artists. The result was a service that, according to TechCrunch, has been downloaded more than 70 million times worldwide and was recognized by Apple as the iPhone "App of the Year".

By 2020, the AI market will grow tens of times over and reach $153 billion, analysts at Bank of America Merrill Lynch predict. By their calculations, most of the market, more than $80 billion in monetary terms, will be captured by developers of robotics solutions. Beyond the aforementioned use of AI in planning drone routes, the technology will be needed, for example, to improve the "smart home" concept and to develop the logistics of commercial drones (in December, Amazon already made its first commercial drone delivery in Britain).

$153 billion

is the value the artificial intelligence market is expected to reach by 2020


The most ambitious players in the technology industry are eyeing the AI market and AI business ideas. In the spring of 2016, Tesla and SpaceX founder Elon Musk, together with partners, created the non-profit company OpenAI to develop "friendly" AI. The entrepreneurs set themselves the task not only of saving humanity from enslavement by machines, but also of making the technology accessible to all developers. OpenAI is to receive $1 billion in investment. The company has already introduced its debut product: software for reinforcement learning. This type of machine learning lets an agent develop by interacting with its environment on its own. The technology allows AI, for example, to control robots or play board games. Sometimes this leads to incidents: Microsoft's Twitter bot, which in May went to learn "with reinforcement" on the microblogging network, had to be shut down after users quickly taught it offensive language.

Other companies followed in OpenAI's footsteps. In December 2016, Apple announced plans to publish its AI research openly. Earlier, Microsoft, IBM, Amazon, Google and Facebook announced they were joining forces to pool research capacity. The world's largest social network is particularly interested in the rapid development of AI technologies for combating fake news in users' feeds. At the close of the US presidential campaign, false news stories became a serious problem for the service: from February to November, users liked or reposted the 20 most popular "fakes" 8.7 million times, while the top 20 real news items drew only 7.4 million reactions.

Machine learning algorithms are used in almost every fashionable direction of technology research today, from driverless vehicles to smart home systems. AI technologies could potentially change any branch of the economy and almost any business. In the near term (2017-2018), according to a forecast by the consultancy McKinsey, machine learning will transform the recruitment market (artificial intelligence will find optimal candidates for employers more accurately than professional headhunters) and some segments of the IT market (for example, the rapid development of chat bots lets businesses build new communication strategies, including in social media).

In the future, AI should help states and businesses deal with cyberthreats. AI-Squared, a joint project of the Massachusetts Institute of Technology (MIT) and the startup PatternEx, is a platform that processes large amounts of user data with a neural network algorithm to detect cyberattacks. Based on a year and a half of platform tests (during which it analyzed 3.6 billion files), its developers reported detecting and preventing 85% of attacks.

76 million

jobs could be destroyed by artificial intelligence in the US over the next 20 years


Neural networks have long been used in cybersecurity technologies.


For example, about a decade ago Kaspersky Lab created a special module capable of recognizing passwords rendered as pictures, which malicious programs used to distribute by e-mail. Another example is image-analysis technology on websites used for identification in parental controls, or the adjacent technology for recognizing spam embedded in images. Neural networks are also widely used to analyze network traffic: with their help, cybersecurity specialists look for anomalies that may indicate suspicious activity.

But alongside the positive consequences, the development of artificial intelligence also carries risks. In the coming years AI technologies will keep encroaching on users' jobs, says a report by the analytics company ARK Invest. According to its experts, over the next twenty years artificial intelligence could destroy 76 million jobs in the US, ten times more than were created during Barack Obama's presidency. And over a thirty-year horizon, AI could push the global unemployment rate (with the current structure of the labor market) to 50%, The Guardian wrote, citing Rice University professor Moshe Vardi.




Thursday, May 17, 2018

Business Survival with AI and BI in Digital Data Distribution

Distribution of information to users of BI 2018


Modern business dynamics and data trends, driven by emerging Internet technologies, have led to a significant increase in the amount of data stored in corporate systems.

Many enterprises face the problem of information overload, in line with the big data predictions of researchers and outlets such as Forbes. Transaction processing systems generate daily reports. Using data warehouses and analytical tools, information is sliced and diced in hundreds of ways. Enterprise Resource Planning (ERP) tools collect all of the company's data. Performance assessment tools and information arriving over the Internet add further congestion.

BI 2018



The situation is further complicated by the fact that modern enterprises operate in a global market, which means their data and applications are geographically distributed: each branch has its own software and its own repository. Resource planning tools are implemented at all levels: applications for human resource management, financial systems and production-oriented software. The information used by these applications is stored in disparate sources (different relational databases, special-format files and multidimensional datasets).


BIG Data BI and AI


Today many companies lose money and time distributing information resources manually. According to some studies, about 15% of revenue on average is spent on creating and distributing paper documents. According to the Gartner Group, corporate employees spend 10% of their time looking for the information needed for a task or decision, 20% to 40% of their time processing already prepared documents, and 30% on document-related tasks that add no value to the company's final product or service.

A new concept called "information broadcasting" is a technology that makes it possible to transmit the personalized messages needed to support decision making, via the Web, e-mail, fax, pager or mobile phone, to hundreds of thousands of recipients.

Automating the process of sending information can provide users with timely data, expand the company's capabilities and prevent the accumulation of unnecessary data.

Moreover, the enterprise's intellectual resources, created with BI tools and analytical packages, should also be transferred automatically to the applications used in operational departments to support decision making.

The most important factors for success


The task of delivering important business information to the right consumer using AI and business intelligence technologies is clearly not an easy one. A number of factors determine whether it is solved successfully:

  • Automatic saving of information in various working formats (HTML, PDF, Excel, etc.) to satisfy the needs of specific users and allow analysis.
  • Distribution of significant decision-making material, including third-party materials (drawings, Word documents, PowerPoint presentations, reports created both with software purchased from various BI suppliers and with legacy reporting applications).
  • Delivery of information via various channels, such as e-mail, Web, fax and network printer, and transfer of information to mobile devices.
  • Intelligent information dissemination functions, such as scheduled distribution or distribution triggered by certain business conditions.
  • Provision of only the necessary reports, freeing users from tedious reading of all materials in search of the relevant sections.
  • An intuitive system for cataloging and storing information, especially important given the many legislative measures concerning document storage and management.
  • Support for any number of users, so that the distribution software can scale as the organization grows.



Examples of the effective use of information distribution applications in this world of big data and AI trends.

Electronic reporting


According to the Gartner Group, each document is copied 9 to 11 times on average. Mailing and delivering hard copies is a laborious procedure involving faxes, e-mails or hand delivery. Electronic reports eliminate this problem almost completely: not only is paper consumption reduced, but so are other expenses, and, most importantly, the time the work takes is significantly shortened.

Software intended for disseminating information automatically sends data to those who are interested in it. This intelligent distribution method further reduces the labor costs associated with manual document processing. The user no longer needs to search for information; it is sent to their mailbox automatically.

Scheduled reports


Information delivery applications can fulfill requests via e-mail. On a set schedule (daily, weekly, etc.), a particular piece of information material is generated and sent to the subscriber, so no one has to spend hours producing the same reports. (For example, sales managers can designate a specific day on which a monthly report on their department's work arrives in their mailbox.) The distribution plan can be managed either by the administrator alone or by several authorized users. The content, time, frequency, format and even method of dissemination can vary according to individual requirements.

Event-driven notifications


The administrator can schedule special notifications to be sent to users when certain conditions are met. Such dispatches cut the time the administrator spends sending daily messages, as well as the time users spend running reports and searching for information.

The notifications work as follows (a minimal code sketch follows the list):

  • The administrator sets the notification condition under which certain information is sent.
  • Scheduled tasks stored in the repository are checked at a user-defined frequency.
  • When a scheduled notification check comes due, the application tests all of its conditions (business rules).
  • If the conditions are not met, the information is not sent, and the check is postponed until the next scheduled time.
  • If the test is positive, the corresponding information is generated and sent as a notification. The notification is activated according to its settings (automatically, permanently, once, with a delay).
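
Here is a minimal sketch of that check-and-send loop. The rules, the metrics source and the send channel are all hypothetical placeholders, not any particular BI product's API.

```python
import time

# Event-driven notification sketch: business rules are checked on a
# schedule; rules whose conditions hold trigger a message.
rules = [
    {"name": "budget exceeded", "test": lambda m: m["spend"] > m["budget"]},
    {"name": "stock below floor", "test": lambda m: m["price"] < 95.0},
]

def current_metrics():
    # Stand-in for a query against the repository.
    return {"spend": 120_000, "budget": 100_000, "price": 101.3}

def send_notification(recipient, text):
    print(f"notify {recipient}: {text}")  # stand-in for e-mail/fax/SMS

def run_checks():
    metrics = current_metrics()
    for rule in rules:
        if rule["test"](metrics):   # conditions met: send the notification
            send_notification("manager@example.com", rule["name"])
        # otherwise do nothing; the check repeats at the next scheduled time

for _ in range(3):                  # scheduler stand-in: fixed interval
    run_checks()
    time.sleep(1)
```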


Notifications serve to inform employees about important events and increase the efficiency of the company. For example, they can help:


  • a manager, if the budget is exceeded;
  • an investor, if a stock price falls below a certain level;
  • a sales representative, if a good deal is expected;
  • a corporate account manager, if a large customer files a complaint.


Splitting reports

Report splitting (bursting) automates the manual process of "paper pushing", significantly reducing administrative costs and processing time. Consider, for example, a report containing the days worked and days missed for each employee. The pages of this report can be split automatically and sent electronically to each employee, saving the HR department many hours of work.
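
A minimal sketch of that splitting step, with a made-up attendance table and a print stub standing in for a real mail gateway:

```python
# Report-bursting sketch: split one attendance report into per-employee
# sections and "send" each section to its employee. Data is illustrative.
report_rows = [
    {"employee": "alice@example.com", "days_worked": 21, "days_missed": 1},
    {"employee": "bob@example.com", "days_worked": 19, "days_missed": 3},
]

def send_email(to, body):
    print(f"to {to}:\n{body}\n")  # stand-in for a real mail gateway

for row in report_rows:
    body = (f"Monthly attendance report\n"
            f"Days worked: {row['days_worked']}\n"
            f"Days missed: {row['days_missed']}")
    send_email(row["employee"], body)
```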

New Technologies for Information or Data Distribution 


In recent years the main means of disseminating information has been the Web, because it:

  • improves access to business information and increases the number of users who can work with it;
  • provides a convenient interface for searching and navigation;
  • makes it possible to access different types of data located in different sources (both inside and outside the organization);
  • reduces hardware and software requirements and the cost of supporting each client seat, thanks to thin-client technology;
  • allows the use of platform-independent software (Java) and data representation (XML);
  • lets information be distributed to employees, partners and customers via intranet, extranet and the Internet, expanding access to information and opening new ways of doing business.


As a result, the information needed to make decisions can be transferred to users and applications using two main technologies:

  • Web-based Business Intelligence tools (for reporting, OLAP analysis and data mining) and packages of analytical Web applications;
  • corporate information portals and distribution servers.


It is important to note that these technologies are not interchangeable: for maximum effect they should be used together.

Web-oriented BI-tools and analytical software packages distribute information and results of analytical operations through standard graphical and Web-interfaces. Many of these products also support scheduled and event-driven delivery of information to Web servers and mail systems, thereby leveraging all the capabilities of the underlying network tools.

As the volume of business information processed in the decision-making system grows, it becomes more likely that data will be distributed within a series of information stores located on different servers. This is true, for example, in cases where organizations obtain business information from external sources, internal offices, shared software and Web servers.

To help users work with this wide range of business data (both for ad hoc access and for scheduled delivery), the second technology is used: corporate information portals.

When corporate information is published on the portal, metadata about where particular information resides is recorded, along with the methods for extracting it. When an employee submits a request, the portal reads the relevant metadata and, using its access and delivery facilities, retrieves the necessary information.

Effective work is achieved only if information is properly shared; otherwise employees duplicate each other's activities and fail to reach mutual understanding. The portal thus becomes the foundation that guarantees free data sharing and solves technology problems in business.

Conclusion

Rapid development and high competition in the market do not forgive mistakes. In this environment, it is necessary to make decisions correctly and in a timely manner. However, the task of finding the information necessary for this, seemingly simple at first glance, causes a number of difficulties, since the data are accumulated and stored throughout the organization. To solve it, companies need to develop infrastructures that ensure the timely dissemination and sharing of information.

Emerging trends, data analyst trends and future trends in business intelligence are topics we are frequently asked about. Software for the dissemination of information should ensure the transfer of data to a huge number of users inside and outside the enterprise and significantly improve work efficiency by:

  • rapid delivery of critical information;
  • minimizing work-cycle time by ending manual distribution of information, processing documents more rationally and improving employee productivity;
  • distribution of any content, including files of different formats (for example, reports from other suppliers' applications or legacy systems, drawings, diagrams, documents and presentations);
  • optimal storage of files and simple, quick retrieval of the necessary information.
