
[DL Classics Revisited] Deep Learning

Authors: Yann LeCun, Yoshua Bengio & Geoffrey Hinton

Highlights

  • Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
  • Representation learning is a set of methods that allows a machine to be fed with raw data and to automatically discover the representations needed for detection or classification.
  • Deep-learning methods are representation-learning methods with multiple levels of representation, obtained by composing simple but non-linear modules that each transform the representation at one level (starting with the raw input) into a representation at a higher, slightly more abstract level.
  • For classification tasks, higher layers of representation amplify aspects of the input that are important for discrimination and suppress irrelevant variations.
  • Deep learning has turned out to be very good at discovering intricate structures in high-dimensional data.
  • To properly adjust the weight vector, the learning algorithm computes a gradient vector that, for each weight, indicates by what amount the error would increase or decrease if the weight were increased by a tiny amount. The weight vector is then adjusted in the opposite direction to the gradient vector.
  • The backpropagation procedure to compute the gradient of an objective function with respect to the weights of a multilayer stack of modules is nothing more than a practical application of the chain rule for derivatives.
  • It was made possible by the advent of fast graphics processing units (GPUs) that were convenient to program and allowed researchers to train networks 10 or 20 times faster.
  • For smaller data sets, unsupervised pre-training helps to prevent overfitting, leading to significantly better generalization when the number of labelled examples is small, or in a transfer setting where there are many examples for some ‘source’ tasks but very few for some ‘target’ tasks. (Unsupervised pre-training: the objective in learning each layer of feature detectors is to be able to reconstruct or model the activities of the feature detectors, or raw inputs, in the layer below.)
  • There are four key ideas behind ConvNets that take advantage of the properties of natural signals: local connections, shared weights, pooling and the use of many layers.
  • The convolutional and pooling layers in ConvNets are directly inspired by the classic notions of simple cells and complex cells in visual neuroscience.
  • A primitive 1D ConvNet called a time-delay neural net was used for the recognition of phonemes and simple words.
  • A number of companies such as NVIDIA, Mobileye, Intel, Qualcomm and Samsung are developing ConvNet chips to enable real-time vision applications in smartphones, cameras, robots and self-driving cars.
  • RNNs process an input sequence one element at a time, maintaining in their hidden units a ‘state vector’ that implicitly contains information about the history of all the past elements of the sequence.
  • RNNs have been found to be very good at predicting the next character in a text or the next word in a sequence, but they can also be used for more complex tasks.
  • Long short-term memory (LSTM) networks use special hidden units, the natural behaviour of which is to remember inputs for a long time.
  • Beyond simple memorization, neural Turing machines and memory networks are being used for tasks that would normally require reasoning and symbol manipulation.
  • Memory networks can be trained to keep track of the state of the world in a setting similar to a text adventure game and after reading a story, they can answer questions that require complex inference.
  • Although we have not focused on it in this Review, we expect unsupervised learning to become far more important in the longer term. Human and animal learning is largely unsupervised: we discover the structure of the world by observing it, not by being told the name of every object.
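The gradient-vector update and the chain-rule view of backpropagation described in the highlights can be made concrete with a tiny two-layer network. This is a minimal sketch, not code from the review; the toy regression target, layer sizes, and learning rate are all illustrative choices, and `numpy` is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = x1 + x2 with a small network.
X = rng.standard_normal((64, 2))
y = X.sum(axis=1, keepdims=True)

# One tanh hidden layer followed by a linear output layer.
W1 = rng.standard_normal((2, 8)) * 0.5
W2 = rng.standard_normal((8, 1)) * 0.5
lr = 0.2

for step in range(500):
    # Forward pass through the stack of modules.
    h = np.tanh(X @ W1)              # hidden representation
    pred = h @ W2                    # network output
    err = pred - y
    loss = (err ** 2).mean()

    # Backward pass: the chain rule applied layer by layer.
    d_pred = 2 * err / len(X)        # dLoss/dpred
    dW2 = h.T @ d_pred               # dLoss/dW2
    d_h = d_pred @ W2.T              # propagate the gradient to the hidden layer
    d_pre = d_h * (1 - h ** 2)       # through the tanh non-linearity
    dW1 = X.T @ d_pre                # dLoss/dW1

    # Adjust each weight in the opposite direction to its gradient.
    W1 -= lr * dW1
    W2 -= lr * dW2
```

Each gradient tells us how much the error would change if that weight were nudged by a tiny amount, so stepping against the gradient steadily reduces the loss.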
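The first three ConvNet ideas listed above — local connections, shared weights, and pooling — can be sketched in one dimension. The helper names `conv1d` and `max_pool1d` and the edge-detector kernel are hypothetical illustrations (and, as in most deep-learning frameworks, "convolution" here is cross-correlation, without flipping the kernel):

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid 1D convolution: the same (shared) kernel weights are
    applied at every local window of the input."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

def max_pool1d(x, size):
    """Non-overlapping max pooling: keep only the strongest response
    in each local region, giving some invariance to small shifts."""
    trimmed = x[: len(x) // size * size]
    return trimmed.reshape(-1, size).max(axis=1)

signal = np.array([0., 0., 1., 2., 1., 0., 0., 1.])
edge_kernel = np.array([-1., 0., 1.])    # crude edge detector

feature_map = conv1d(signal, edge_kernel)   # [1, 2, 0, -2, -1, 1]
pooled = max_pool1d(feature_map, 2)         # [2, 0, 1]
```

Stacking many such convolution-and-pooling stages is the fourth idea: each stage detects patterns in the output of the one below, mirroring the simple-cell/complex-cell hierarchy mentioned above.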
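The RNN highlight — one element at a time, with a state vector summarizing the history — corresponds to a very small recurrence. This is a minimal sketch with arbitrary toy dimensions and randomly initialized weights, assuming `numpy`; biases and an output layer are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)

inp, hidden = 3, 4
Wxh = rng.standard_normal((inp, hidden)) * 0.1     # input -> hidden
Whh = rng.standard_normal((hidden, hidden)) * 0.1  # hidden -> hidden (recurrence)

def rnn_step(h, x):
    """One recurrent step: the new state mixes the current input with
    the previous state, so h implicitly encodes the whole history."""
    return np.tanh(x @ Wxh + h @ Whh)

h = np.zeros(hidden)                  # initial state vector
sequence = rng.standard_normal((5, inp))
for x in sequence:                    # process one element at a time
    h = rnn_step(h, x)
```

Because `h` is fed back into every step, gradients flowing through many such steps can shrink or blow up, which is exactly the difficulty the LSTM addresses.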
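The LSTM's "special hidden units" can likewise be sketched as one gated update step. This follows the standard formulation (forget, input, and output gates plus a candidate value); the packed weight layout, dimensions, and initialization are illustrative assumptions, not the review's notation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(c, h, x, W, b):
    """One LSTM step: gates decide what to forget, what to write, and
    what to expose, so the cell state c can carry information for a
    long time."""
    n = len(h)
    z = np.concatenate([x, h]) @ W + b    # all four gate pre-activations
    f = sigmoid(z[:n])                    # forget gate
    i = sigmoid(z[n:2 * n])               # input gate
    o = sigmoid(z[2 * n:3 * n])           # output gate
    g = np.tanh(z[3 * n:])                # candidate values
    c = f * c + i * g                     # update the cell state
    h = o * np.tanh(c)                    # new hidden state
    return c, h

rng = np.random.default_rng(2)
inp, hid = 3, 4
W = rng.standard_normal((inp + hid, 4 * hid)) * 0.1
b = np.zeros(4 * hid)
c, h = np.zeros(hid), np.zeros(hid)
for x in rng.standard_normal((6, inp)):
    c, h = lstm_step(c, h, x, W, b)
```

The additive update `c = f * c + i * g` is the key design choice: when the forget gate stays near one, information (and gradients) can pass through many steps largely unchanged.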
