How to minimize the cost function of linear regression by gradient descent

Overview

The description of the problem

(The description below refers to the original MATLAB starter files gradientDescent.m and computeCost; the implementation in the next section is in Python.)

Next, you will implement gradient descent in the file gradientDescent.m. The loop structure has been written for you, and you only need to supply the updates to θ within each iteration.

As you program, make sure you understand what you are trying to optimize and what is being updated. Keep in mind that the cost J(θ) is parameterized by the vector θ, not X and y. That is, we minimize the value of J(θ) by changing the values of the vector θ, not by changing X or y. Refer to the equations given earlier and to the video lectures if you are uncertain.
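
Those equations are not reproduced in this excerpt, so for reference, the standard cost function and batch gradient descent update for linear regression (with hypothesis h_θ(x) = θ^T x) are:

J(θ) = (1 / (2m)) · Σ_i (h_θ(x^(i)) − y^(i))^2

θ_j := θ_j − α · (1/m) · Σ_i (h_θ(x^(i)) − y^(i)) · x_j^(i)    (updating all j simultaneously)

In vectorized form the update is θ := θ − (α/m) · X^T (Xθ − y), which is exactly what the Python implementation below computes.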
A good way to verify that gradient descent is working correctly is to look at the value of J(θ) and check that it is decreasing with each step. The starter code for gradientDescent.m calls computeCost on every iteration and prints the cost. Assuming you have implemented gradient descent and computeCost correctly, your value of J(θ) should never increase, and should converge to a steady value by the end of the algorithm.
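
In the Python version below, the per-iteration costs are collected in the array J_history, so this check takes only a couple of lines. A minimal sketch, assuming J_history is the array returned by the gradient_descent function defined in the next section:

import numpy as np

# The cost should be non-increasing from one iteration to the next
# (a small tolerance absorbs floating-point rounding near convergence).
assert np.all(np.diff(J_history) <= 1e-12), 'cost increased during gradient descent'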

The code for the above problem

import numpy as np

def compute_cost(X, y, theta):
    """
    Computes the cost of using theta as the
    parameter for linear regression to fit the data points in X and y
    """
    m = len(y)
    J = np.sum((np.dot(X, theta) - y)**2) / (2 * m)
    return J

def gradient_descent(X, y, theta, alpha, num_iters):
    """
    Performs gradient descent to learn theta
    by taking num_iters gradient steps with learning
    rate alpha
    """
    m = len(y)
    J_history = np.zeros(num_iters)
    for i in range(num_iters):
        theta = theta - alpha * (1/m) * np.dot(X.T, np.dot(X, theta) - y)
        J_history[i] = compute_cost(X, y, theta)
    return theta, J_history

def normalize_features(X):
    """
    Normalizes the features in X so that each feature
    has mean 0 and standard deviation 1.
    """
    mu = np.mean(X, axis=0)
    sigma = np.std(X, axis=0)
    return (X - mu) / sigma

def main():
    """
    Main function
    """
    #  Reads the testex1-ex8-matlabex1ex1data1.txt file
    data = np.loadtxt('testex1-ex8-matlabex1ex1data1.txt', delimiter=',')
    #  Selects the first column as the feature
    X = data[:, 0]
    #  Selects the second column as the target values
    y = data[:, 1]
    #  Normalizes the feature vectors
    X = normalize_features(X)
    #  Adds a column of ones to X (interception data)
    X = np.c_[np.ones(len(X)), X]
    #  Initializes theta to zeros (J_history is returned by gradient_descent)
    theta = np.zeros(X.shape[1])
    #  Computes and prints initial cost
    print('Cost at initial theta (zeros): %f' % compute_cost(X, y, theta))
    #  Performs gradient descent to learn theta
    theta, J_history = gradient_descent(X, y, theta, alpha=0.01, num_iters=1500)
    #  Computes and prints final cost
    print('Cost at theta found by gradient descent: %f' % compute_cost(X, y, theta))
    #  Prints theta to screen
    print('theta: ', theta)

if __name__ == '__main__':
    main()
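
To verify convergence visually, you can also plot J_history after running the script above. A minimal sketch, assuming matplotlib is installed and that J_history returned by gradient_descent is in scope:

import matplotlib.pyplot as plt

# Plot the cost recorded at each iteration; for a working implementation
# the curve should decrease monotonically and flatten out.
plt.plot(J_history)
plt.xlabel('Iteration')
plt.ylabel('Cost J(theta)')
plt.show()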


The corresponding results

Cost at initial theta (zeros): 32.072734
Cost at theta found by gradient descent: 4.476971
theta:  [5.8391334  4.59303983]
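
One caveat when using the learned theta: the feature was normalized before training, so theta applies to the normalized feature, not to the raw values. To predict for a new raw input you must scale it with the same mean and standard deviation used during training (normalize_features above does not return them, so they have to be recomputed). A minimal sketch, assuming data and theta from main() are in scope; x_new is a hypothetical raw input value, not from the original post:

# Recompute the scaling statistics from the training feature column
mu = np.mean(data[:, 0])
sigma = np.std(data[:, 0])

x_new = 7.0  # hypothetical raw input
x_scaled = (x_new - mu) / sigma
prediction = theta[0] + theta[1] * x_scaled
print('prediction for x_new:', prediction)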
