What Should You Know About Neural Networks?

AI Technology
November 14, 2018 by Leo Webb

Neural networks, more properly referred to as artificial neural networks, are computing systems that consist of many simple, highly interconnected processing elements. The structure of such systems resembles the way neurons connect to each other in the human brain. The artificial intelligence industry is growing fast these days, and neural networks make it possible to perform tasks that involve the process called deep learning. Just as the brain consists of billions of neurons, neural networks have their own basic units, often called perceptrons. Each perceptron performs a simple piece of signal processing and is connected to a large network of other units.

Neural networks can learn by analyzing numerous training examples. For example, a machine may have to analyze tens of thousands of handwritten digits before it can recognize them reliably. Although such a task looks simple for humans, it's important to understand that we recognize handwritten symbols thanks to some 140 million neurons in our visual cortex. Visual recognition becomes extremely difficult when it comes to programming machines directly: such a program would have to take into account millions of exceptions and special cases. By analyzing training examples, a neural network can automatically infer the rules for recognizing symbols. Moreover, the more examples a neural network sees, the more accurate its recognition becomes.
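
To make this concrete, here is a minimal sketch of learning digit recognition from labeled examples. It uses the Keras API and the public MNIST dataset of handwritten digits; the layer sizes and training settings are illustrative choices, not prescriptions.

```python
import tensorflow as tf

# Load 60,000 labeled training digits and 10,000 test digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small fully connected network. There are no hand-written rules for
# strokes or loops: the weights are adjusted automatically from examples.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# More training examples generally yield more accurate recognition.
model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))
```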

A Bit of History

Although neural networks are among the latest developments in the world of computer technology, the idea was born in 1943. The mathematician Walter Pitts and the neurophysiologist Warren McCulloch published a paper entitled A Logical Calculus of the Ideas Immanent in Nervous Activity in the Bulletin of Mathematical Biophysics. The researchers suggested that brain activity is based on the activation of neurons, its basic units. In 1952, they moved to MIT and created the first department of cognitive science.

During the 1950s, there were many discoveries in this field. For example, the Perceptron was created, modeled on the principle of the compound eye of insects. In 1959, researchers at Stanford University created MADALINE, short for Multiple ADAptive LINear Elements. Neural networks were no longer just a theoretical model but a real tool working on real problems: MADALINE was used in telephone systems to reduce echo and improve sound quality, and this technology is still in use.

However, this surge of enthusiasm soon faced serious challenges. In 1969, MIT's Marvin Minsky and Seymour Papert published the book Perceptrons: An Introduction to Computational Geometry, which laid out fundamental limitations of single-layer networks and questioned the future of artificial intelligence. The book had a negative impact on funding and interest in the area throughout the 1970s. Still, engineers who believed in neural networks kept working and developed a multi-layered network in 1975. Interest began to grow again in 1982, after Professor John Hopfield introduced the first associative neural network, in which data could move in both directions.

Today, neural networks are more advanced than ever, and they are used to solve a wide variety of tasks.

Main Trends in the Industry

  • Deep Reinforcement Learning
    DRL is a type of neural network that learns by interacting with its environment. These networks are driven by actions, observations, and rewards. The approach has been used with great success in gaming: for example, the AlphaGo program managed to defeat a Chinese Go master. DRL can also be used in various business applications. Its main advantage is that it requires less training data than other systems and can be trained in simulation (see the Q-learning sketch after this list).
  • Capsule Networks
    This is an emerging type of deep neural network. It processes information more like our brain does, because it can maintain hierarchical relationships. This is the key difference from convolutional networks, which do not model spatial hierarchies between simple and complex objects and are therefore more prone to errors and misclassification. Capsule networks promise better accuracy with fewer errors, and they need less data for training (a small piece of the capsule idea is sketched after this list).
  • Lean and Augmented Learning
    One of the main challenges of deep learning, and of machine learning in general, is the volume of data available for training. Developers tackle this issue in two ways: transferring trained models from one task to another and synthesizing new data. Transferring a trained model is called Transfer Learning, while training without relevant examples, or with only one example, is called One-Shot Learning; both are Lean Data Learning methods. Simulations and interpolations that help synthesize data are usually referred to as Augmented Learning. These techniques let developers solve more problems, including cases where there isn't enough historical data (a transfer learning sketch follows this list).
  • Convolutional Neural Networks
    This type of neural network isn't new: it was designed around the way our brain processes signals from the eyes. Many modern visual recognition systems use CNNs for object detection, localization, and classification. Facebook, Google, and Amazon use these networks for image filtering, and they are also used in robotics (see the convolutional sketch after this list).
  • Supervised Model
    This is a form of learning that derives a function from training data that has already been labeled. The system compares its predicted outputs with the labeled outputs and calculates an error; from that error, a supervised algorithm learns the mapping between input and output. The goal is to approximate the mapping function well enough that the system can accurately predict outputs for new, unseen inputs (the last sketch after this list illustrates the idea).
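
To illustrate the action-observation-reward loop behind reinforcement learning, here is a toy tabular Q-learning sketch on a five-cell corridor. Deep reinforcement learning replaces the Q-table below with a neural network, but the loop is the same; all names and constants are illustrative, not from any specific library.

```python
import random

N_STATES = 5          # corridor cells 0..4; reaching cell 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Value estimates for every (state, action) pair, learned from rewards.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):  # episodes
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0  # observed reward
        # Update the estimate toward the observed reward plus future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = next_state

# The learned policy should be "always step right" in cells 0..3.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```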
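
Capsule networks are too involved to sketch in full here, but one small concrete piece is the "squash" non-linearity from Sabour, Frosst, and Hinton's 2017 paper: a capsule outputs a vector, and squashing scales the vector's length into the (0, 1) range so it can act as a probability, while preserving its direction.

```python
import numpy as np

def squash(s, eps=1e-9):
    """Shrink vector s to a length in (0, 1) without changing its direction."""
    norm_sq = np.sum(s ** 2)
    return (norm_sq / (1.0 + norm_sq)) * s / np.sqrt(norm_sq + eps)

pose = np.array([2.0, 1.0, 0.0])  # a capsule's raw output vector
print(squash(pose), np.linalg.norm(squash(pose)))  # same direction, length < 1
```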
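
Here is a minimal transfer learning sketch in Keras: a network pre-trained on ImageNet is frozen and reused as a feature extractor, and only a small new classification head is trained. The choice of MobileNetV2, the input size, and the two-class head are assumptions made for the example; downloading the pre-trained weights requires an internet connection.

```python
import tensorflow as tf

# A pre-trained feature extractor, with its classification top removed.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained knowledge

# Only this small head is trained on the new task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. a two-class task
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_task_images, new_task_labels, ...)  # needs far less data
```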
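
A convolutional network itself takes only a few lines of Keras. The filter counts and sizes below are conventional starting points rather than recommendations.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),                       # grayscale images
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),   # local feature detectors
    tf.keras.layers.MaxPooling2D((2, 2)),                    # downsample feature maps
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),         # e.g. ten classes
])
model.summary()
```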
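
Finally, the core of supervised learning, comparing predicted outputs with labeled outputs and shrinking the error, fits in a few lines of NumPy. This sketch fits a simple linear mapping y = w*x + b by gradient descent; a neural network does the same thing with a far more flexible mapping.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=100)  # labeled training data

w, b, lr = 0.0, 0.0, 0.1
for _ in range(200):
    pred = w * x + b              # predicted outputs
    error = pred - y              # compare with the labeled outputs
    w -= lr * (error * x).mean()  # adjust the mapping to reduce the error
    b -= lr * error.mean()

print(w, b)  # should approach the true mapping (3.0, 0.5)
```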

The Use of Neural Networks

First, neural networks make computers much smarter. Computers have always been better than humans at solving equations or searching large databases. However, a computer can hardly tell the difference between Renaissance art and a pornographic picture, and it's hard for a machine to know whether you said "knight" or "night," because machines lack an understanding of context. Neural networks enable machines to learn the nuances of the real world by analyzing images and speech.

Neural networks can analyze multiple inputs and are capable of character and image recognition. Character recognition has many applications, for example in fraud detection. Image recognition is used in social media to filter inappropriate content and to recognize users' faces; it is also used in healthcare to detect cancer and in agriculture to monitor crops and livestock.

Neural networks also offer countless opportunities for forecasting, which makes them useful in industries that require quick decision-making: stock and currency markets, monetary and economic policy, and so on. Neural networks can predict stock prices by analyzing massive amounts of data. While traditional forecasting systems often fail to capture non-linear relationships between underlying factors, neural networks can model such relationships without strong assumptions about the input data.
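
As a rough illustration of the forecasting idea, the sketch below trains a small network to predict the next value of a noisy synthetic series from a sliding window of past values. This is a toy example: real market data is far noisier and less predictable, and the window size and layer widths are arbitrary choices.

```python
import numpy as np
import tensorflow as tf

# A noisy synthetic series standing in for real historical data.
t = np.arange(1000, dtype="float32")
series = np.sin(0.1 * t) + 0.1 * np.random.randn(1000).astype("float32")

# Sliding windows: predict the next value from the previous WINDOW values.
WINDOW = 20
X = np.stack([series[i:i + WINDOW] for i in range(len(series) - WINDOW)])
y = series[WINDOW:]

# A small non-linear model mapping each window to a one-step forecast.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, verbose=0)

print(model.predict(X[-1:]))  # forecast the point after the last window
```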

The Most Popular Neural Network Libraries

  • TensorFlow
    This is an open-source library that uses dataflow graphs for numerical computation. Its architecture makes it possible to run TensorFlow on a variety of CPUs and GPUs, including mobile devices. The library is used from the Python programming language, which makes it easy to learn and work with, and it supports graph abstraction (see the short example after this list). On the downside, it ships with relatively few pre-trained models, and the Python layer can make some workloads somewhat slow.
  • Theano
    This is the main competitor to TensorFlow. The library is also used from Python and performs numerical operations on multi-dimensional arrays. It's very efficient, as it can use a GPU for intensive computations. A disadvantage is that you may need to combine it with other libraries to get high-level abstractions.
  • Microsoft CNTK
    The Computational Network Toolkit (now the Microsoft Cognitive Toolkit) is another library created as a response to TensorFlow. It provides model descriptions and learning algorithms while improving the maintenance and modularization of computation networks. When operations call for many servers, CNTK can use them simultaneously: it enables distributed training and supports Python, Java, C++, and C#.
  • Caffe
    This is a powerful framework written in C++. It's a good choice for deep learning research because Caffe is efficient and very fast: it lets you build convolutional neural networks to classify images and is optimized for GPUs, which is one reason for its speed. Bindings for MATLAB and Python are available. In addition, you can train models without writing code by describing them in configuration files. However, Caffe doesn't adapt well to new architectures and is not the best choice for recurrent networks.
  • Keras
    This is a Python-based open-source library that works as a high-level interface on top of other frameworks and is easy to configure regardless of the backend used, as it provides a high-level abstraction. Unlike the previous libraries, Keras wasn't created as an end-to-end machine learning solution. It can run with TensorFlow as a backend, and the CNTK team has been working on Keras support as well.
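
As a small illustration of the graph abstraction mentioned in the TensorFlow entry above, the sketch below traces a Python function into a dataflow graph with tf.function. It follows the TensorFlow 2.x API; details vary between versions.

```python
import tensorflow as tf

@tf.function  # traces the function into a reusable dataflow graph
def affine(x, w, b):
    return tf.matmul(x, w) + b  # each op becomes a node in the graph

x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
b = tf.constant([0.5])
print(affine(x, w, b))  # tf.Tensor([[11.5]], shape=(1, 1), dtype=float32)
```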

Where to Learn About Neural Networks?

Here are some free online courses that can help you get a better understanding of neural networks, machine learning, and deep learning.

  • Neural Networks and Deep Learning (Coursera)
    This course will teach you the main principles of deep learning. You will also learn how to build and train deep neural networks and come to understand neural network architectures. The course covers the latest trends in the industry and is aimed at preparing you for a job in AI.
  • Machine Learning by Google (Udacity)
    This course is intended for people who already have some experience with machine learning and are familiar with supervised learning. It focuses on self-teaching systems that learn from massive datasets and will be especially useful for those who want to work as data scientists, data analysts, or machine learning engineers.
  • Learn with Google AI
    This resource was launched by Google to familiarize the general public with AI and machine learning. It contains a crash course on TensorFlow. Even without any prior knowledge of neural networks, you can learn the basics of the technology here, as well as how to build and train neural networks.
  • Machine Learning by Stanford (Coursera)
    The course is taught by Andrew Ng, co-founder of Google Brain and former chief scientist at Baidu. It covers a variety of topics, including speech recognition, learning by backpropagation, and linear regression. It also includes a tutorial on MATLAB/Octave, the environment the course uses for its programming exercises.
  • Machine Learning by Columbia University (edX)
    This course focuses on methods, models, and applications used to solve real-world problems. You will learn about supervised and unsupervised learning, as well as the difference between probabilistic and non-probabilistic methods. The course offers a lot of material, so get ready to devote at least 8-10 hours a week to studying and exercises.

Conclusions

The idea of neural networks was born in the 1940s and first put into practice in the 1950s; since then, they've become a powerful tool that allows machines to cope with various intellectual tasks. Not so long ago, computers were incapable of recognizing handwritten text. Today, neural networks can not only determine whether a signature is real or fake but also recognize objects in pictures and even paint pictures of their own. Machine learning, and deep learning in particular, are evolving faster than ever, allowing us to use "artificial brains" in various industries, from entertainment to security, healthcare, and finance.