Ridge Regression

Overview

Around the middle of the 20th century, the Russian theoretician Andrey Tikhonov was working on the solution of ill-posed problems. These are mathematical problems for which no unique solution exists because, in effect, there is not enough information specified in the problem. It is necessary to supply extra information (or assumptions), and the mathematical technique Tikhonov developed for this is known as regularisation.

Tikhonov's work only became widely known in the West after the publication in 1977 of his book [29]. Meanwhile, two American statisticians, Arthur Hoerl and Robert Kennard, published a paper in 1970 [11] on ridge regression, a method for solving badly conditioned linear regression problems. Bad conditioning means numerical difficulties in performing the matrix inverse necessary to obtain the variance matrix. It is also a symptom of an ill-posed regression problem in Tikhonov's sense and Hoerl & Kennard's method was in fact a crude form of regularisation, known now as zero-order regularisation [25].
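To see what zero-order regularisation buys in practice, here is a minimal sketch (the design matrix, targets and penalty value below are illustrative, not taken from Hoerl & Kennard's paper). Adding a small multiple of the identity to the matrix being inverted tames an otherwise badly conditioned least-squares solve:

```python
# Minimal sketch of ridge regression (zero-order regularisation) on an
# ill-conditioned problem. All data and the penalty lam are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Two nearly collinear columns make A'A badly conditioned.
x = rng.standard_normal(50)
A = np.column_stack([x, x + 1e-6 * rng.standard_normal(50)])
y = x + 0.01 * rng.standard_normal(50)

# Ordinary least squares: invert A'A directly (numerically fragile).
w_ols = np.linalg.solve(A.T @ A, A.T @ y)

# Ridge: add lam * I before inverting, stabilising the solution.
lam = 1e-3
w_ridge = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

print("condition number of A'A:", np.linalg.cond(A.T @ A))
print("OLS weights:  ", w_ols)    # typically huge and opposite-signed
print("ridge weights:", w_ridge)  # small and stable
```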

In the 1980s, when neural networks became popular, weight decay was one of a number of techniques 'invented' to help prune unimportant network connections. However, it was soon recognised [8] that weight decay involves adding the same penalty term to the sum-squared-error as in ridge regression: weight decay and ridge regression are equivalent.
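To make the equivalence concrete: both methods minimise the sum-squared-error augmented by the same quadratic penalty on the weights. Writing H for the design matrix and ŷ for the target vector (notation assumed here), the shared cost and its minimiser are:

```latex
C(\mathbf{w}) \;=\; \sum_{i=1}^{p} \bigl(\hat{y}_i - f(\mathbf{x}_i)\bigr)^2
\;+\; \lambda \sum_{j=1}^{m} w_j^2,
\qquad
\hat{\mathbf{w}} \;=\; \bigl(\mathbf{H}^\top \mathbf{H} + \lambda \mathbf{I}\bigr)^{-1} \mathbf{H}^\top \hat{\mathbf{y}}.
```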

While it is admittedly crude, I like ridge regression because it is mathematically and computationally convenient, and consequently other forms of regularisation are largely ignored here. If the reader is interested in higher-order regularisation, I suggest looking at [25] for a general overview and [16] for a specific example (second-order regularisation in RBF networks).

We next describe ridge regression from the perspective of bias and variance, and show how it affects the equations for the optimal weight vector, the variance matrix and the projection matrix. A method to select a good value for the regularisation parameter, based on a re-estimation formula, is then presented. Next comes a generalisation of ridge regression which, when radial basis functions are used, can justly be called local ridge regression. It involves multiple regularisation parameters, and we describe a method for their optimisation; a sketch of the idea follows below. Finally, we illustrate with a simple example.
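As a preview of that generalisation, a hedged sketch: local ridge regression replaces the single penalty λI with a diagonal matrix of per-basis-function parameters, Λ = diag(λ₁, …, λₘ). The function name and arguments below are illustrative, not from the original text:

```python
# Illustrative sketch of local ridge regression: one regularisation
# parameter per basis function instead of a single shared lambda.
import numpy as np

def local_ridge_weights(H, y, lams):
    """Solve (H'H + diag(lams)) w = H'y for the weight vector.

    H    : (p, m) design matrix of basis-function outputs
    y    : (p,)   target vector
    lams : (m,)   one non-negative regularisation parameter per basis function
    """
    return np.linalg.solve(H.T @ H + np.diag(lams), H.T @ y)

# Setting every entry of lams to the same value recovers ordinary ridge
# regression; lams of all zeros gives plain least squares.
```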

Reposted from: https://www.cnblogs.com/ysjxw/archive/2008/05/21/1204117.html
