
PROMISING DEVELOPMENTS IN DEEP LEARNING

In their paper, Bengio, Hinton, and LeCun highlight recent developments in deep learning that have helped make progress in some of the areas where deep learning struggles. One example is Transformers, a neural network architecture that is at the heart of language models such as OpenAI's GPT-3 and Google's Meena. One of the benefits of Transformers is their ability to learn without the need for labeled data. Transformers can develop representations through unsupervised learning and then apply those representations to fill in the blanks in incomplete sentences or to generate coherent text after receiving a prompt.
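To make the gap-filling idea concrete, here is a minimal sketch (our own illustration, not code from the paper) that queries a pretrained masked language model through Hugging Face's transformers library:

```python
# A minimal sketch of Transformer-based gap filling, assuming the
# Hugging Face "transformers" library and a pretrained BERT model.
from transformers import pipeline

# BERT was pretrained without labeled data: it simply learned to
# predict masked-out words in large text corpora.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill the blank in a sentence.
for prediction in fill_mask("Deep learning is a branch of [MASK] intelligence."):
    print(prediction["token_str"], prediction["score"])
```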

More recently, researchers have shown that Transformers can also be applied to computer vision tasks. When combined with convolutional neural networks, Transformers can predict the content of masked regions of an image.
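As a rough sketch of how such a hybrid might be wired up (our own toy construction, not an architecture from the paper), a convolutional backbone turns an image into patch features, and a Transformer encoder then tries to predict the features of patches that were masked out:

```python
# Toy sketch: CNN backbone + Transformer encoder predicting masked patch features.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 64, kernel_size=16, stride=16)   # image -> grid of patch features
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(64, 64)  # predicts the feature vector of each masked patch

img = torch.randn(1, 3, 224, 224)
patches = conv(img).flatten(2).transpose(1, 2)        # (1, 196, 64)
target = patches.clone()

mask = torch.rand(1, patches.size(1)) < 0.5           # hide half the patches
patches[mask] = 0.0

pred = head(encoder(patches))
loss = ((pred[mask] - target[mask]) ** 2).mean()      # reconstruct masked features
```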


A more promising technique is contrastive learning, which attempts to find vector representations of missing regions instead of predicting their exact pixel values. This is an intriguing approach and seems much closer to what the human mind does. When we see an image with masked-out regions, we may not be able to visualize a photo-realistic depiction of the missing parts, but our mind can form a high-level representation of what might appear in those areas (e.g., doors, windows, etc.). The effort to make neural networks less dependent on human-labeled data fits into the broader discussion of self-supervised learning, a concept LeCun has been working on.
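A minimal version of the contrastive objective can be sketched as an InfoNCE-style loss; the specifics below (temperature value, using the rest of the batch as negatives) are our own illustrative assumptions, not details from the paper:

```python
# InfoNCE-style contrastive loss: pull each prediction toward its own
# target embedding, push it away from every other target in the batch.
import torch
import torch.nn.functional as F

def contrastive_loss(pred, target, temperature=0.1):
    """pred: predicted embeddings of masked regions, shape (N, D).
    target: true embeddings of the same regions, shape (N, D)."""
    pred = F.normalize(pred, dim=1)
    target = F.normalize(target, dim=1)
    logits = pred @ target.t() / temperature       # (N, N) similarity matrix
    labels = torch.arange(pred.size(0))            # diagonal = positive pairs
    return F.cross_entropy(logits, labels)

loss = contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
```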


The article also cites "System 2 deep learning," a term borrowed from Nobel Prize-winning psychologist Daniel Kahneman. System 2 describes the functions of the brain that require conscious thinking, including symbol manipulation, reasoning, multi-step planning, and solving complex mathematical problems. System 2 deep learning is still in its early stages, but if it becomes a reality, it could solve some of the key problems of neural networks, including out-of-distribution generalization, causal inference, robust transfer learning, and symbol manipulation.


The scientists also support the study of "neural networks that assign intrinsic frames of reference to objects and their parts and recognize objects using geometric relationships." This is a reference to "capsule networks," an area of research Hinton has focused on over the past few years. Capsule networks aim to upgrade neural networks from detecting features in images to detecting objects, their physical properties, and their hierarchical relationships with each other. Capsule networks could provide deep learning with "intuitive physics," an ability that enables humans and animals to understand three-dimensional environments.
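One concrete building block from Hinton's published capsule-network work is the "squash" nonlinearity, which scales a capsule's output vector so that its length encodes the probability that an entity is present while its orientation encodes the entity's pose. A sketch, under our own assumptions about tensor shapes:

```python
# The squash nonlinearity from "Dynamic Routing Between Capsules":
# v = (|s|^2 / (1 + |s|^2)) * (s / |s|), applied per capsule vector.
import torch

def squash(s, eps=1e-8):
    norm_sq = (s ** 2).sum(dim=-1, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / (norm_sq.sqrt() + eps)

capsules = squash(torch.randn(10, 16))   # 10 capsules, 16-D pose vectors
```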

"There is still a long way to go in terms of our understanding of how to make neural networks really effective. We expect there will be radically new ideas, ' Hinton told ACM.


