Updated: Aug 23, 2021
Deep learning attempts to mimic the human brain through artificial neural networks built from data inputs, weights, and biases. These elements work together to accurately recognize, classify, and describe objects in the data.
Deep neural networks consist of multiple interconnected layers of nodes, each building on the previous layer to refine and optimize the prediction or classification. This progression of computations through the network is called forward propagation. The input and output layers of a deep neural network are called visible layers. The input layer is where the deep learning model receives data for processing, and the output layer is where the final prediction or classification is made.
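Forward propagation can be sketched in a few lines of NumPy: each layer multiplies the previous layer's activations by its weights, adds its bias, and applies a nonlinearity. The layer sizes and the ReLU activation below are illustrative assumptions, not taken from the text.

```python
import numpy as np

def relu(x):
    # Common nonlinearity; chosen here as an illustrative assumption
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input through each (weights, bias) layer in turn."""
    activation = x
    for weights, bias in layers:
        activation = relu(weights @ activation + bias)
    return activation

# Hypothetical 3-layer network: 4 inputs -> 5 hidden -> 3 hidden -> 2 outputs
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(5, 4)), np.zeros(5)),
    (rng.normal(size=(3, 5)), np.zeros(3)),
    (rng.normal(size=(2, 3)), np.zeros(2)),
]
output = forward(rng.normal(size=4), layers)
print(output.shape)  # (2,)
```

The loop makes the "each layer builds on the previous one" idea concrete: the output of one matrix multiply becomes the input to the next.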
Another process, called backpropagation, uses algorithms such as gradient descent to calculate the error in the model's predictions, and then adjusts the weights and biases of the function by moving backwards through the layers to train the model. Together, forward propagation and backpropagation allow a neural network to make predictions and correct for errors accordingly. Over time, the algorithm becomes increasingly accurate.
The above describes the simplest type of deep neural network in the simplest terms. In practice, deep learning algorithms are far more complex, and there are different types of neural networks designed for specific problems or datasets. For example:
Convolutional neural networks (CNNs), used primarily in computer vision and image classification applications, can detect features and patterns within an image, enabling tasks such as object detection and recognition. In 2015, a CNN beat a human in an object recognition challenge for the first time.
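The feature detection at the heart of a CNN is a small kernel slid across the image. A minimal sketch, using a hand-made vertical-edge kernel and a toy image rather than learned filters:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 image: dark left half, bright right half.
image = np.hstack([np.zeros((6, 3)), np.ones((6, 3))])
# Hand-made kernel that responds to vertical edges (a learned CNN filter
# would discover patterns like this from data).
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
response = conv2d(image, kernel)
```

The response is strongest along the dark-to-bright boundary and zero in the flat regions, which is how a convolutional layer localizes a visual feature.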
Recurrent neural networks (RNNs) are typically used in natural language and speech recognition applications, as they leverage sequential or time-series data.
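What makes an RNN suited to sequential data is its hidden state, which carries information from earlier time steps into later ones. A minimal sketch of a simple (Elman-style) recurrent step, with hypothetical layer sizes and random, untrained weights:

```python
import numpy as np

def rnn_forward(inputs, w_x, w_h, b):
    """Process a sequence step by step; each step mixes the new input
    with the hidden state carried over from previous steps."""
    h = np.zeros(w_h.shape[0])
    for x in inputs:
        h = np.tanh(w_x @ x + w_h @ h + b)
    return h  # final hidden state summarizes the whole sequence

rng = np.random.default_rng(2)
w_x = rng.normal(scale=0.5, size=(4, 3))  # input-to-hidden weights
w_h = rng.normal(scale=0.5, size=(4, 4))  # hidden-to-hidden (recurrent) weights
b = np.zeros(4)
sequence = rng.normal(size=(5, 3))        # 5 time steps, 3 features each
state = rnn_forward(sequence, w_x, w_h, b)
print(state.shape)  # (4,)
```

Because the same weights are reused at every step, the network can handle sequences of any length, with the final state acting as a summary that a downstream layer would use for classification or prediction.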