Linear Regression (Multiple Features)

Overview

Table of Contents

  • 1 Multiple Features
    • 1.1 Vectorization
    • 1.2 Code and efficiency comparison
      • 1.2.1 Vector Creation
      • 1.2.2 Operations on Vectors
        • 1.2.2.1 Indexing
        • 1.2.2.2 Slicing
        • 1.2.2.3 Operations on a whole vector
        • 1.2.2.4 Vector-vector operations
        • 1.2.2.5 Scalar-vector operations
        • 1.2.2.6 Vector dot product ***
        • 1.2.2.7 Vector shape
      • 1.2.3 Matrices in code
        • 1.2.3.1 Matrix creation
        • 1.2.3.2 Matrix indexing
        • 1.2.3.3 Matrix slicing
    • 1.3 Code: Multiple Variable Linear Regression
      • 1.3.1 A housing-price model
      • 1.3.2 The linear regression model
      • 1.3.3 Computing the cost
      • 1.3.4 Compute gradient
      • 1.3.5 Gradient Descent
  • 2 Gradient descent in practice
    • 2.1 Feature scaling and Learning Rate
      • 2.1.1 Learning Rate
      • 2.1.2 Feature scaling
    • 2.2 Feature Engineering and Polynomial Regression
      • 2.2.1 Polynomial Regression
      • 2.2.2 Feature Engineering
      • 2.2.3 Scaling features
    • 2.3 Linear Regression using Scikit-Learn
    • 2.4 Online exercises (practice makes perfect)



1 Multiple Features

A house's price depends on more than just its size; other features might include the number of bedrooms, the number of floors, and the age of the home. For example,

x^(i) = [size, bedrooms, floors, age of home]

We can add a subscript j to pick out a single element: x_j^(i) denotes the value of the j-th feature of the i-th training example.
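
To make this notation concrete, here is a minimal sketch; the training matrix X_train and all its values are made up for illustration:

import numpy as np

# Hypothetical training set: each row is one example x^(i),
# columns are [size, bedrooms, floors, age of home]
X_train = np.array([[2104, 5, 1, 45],
                    [1416, 3, 2, 40],
                    [ 852, 2, 1, 35]])

print(X_train[1])     # the second training example: [1416    3    2   40]
print(X_train[1, 2])  # its floors feature (column index 2): 2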
The linear regression model with multiple features can now be written as

f_{w,b}(x) = w_1 x_1 + w_2 x_2 + ... + w_n x_n + b

Here w is also a vector, and b is a scalar bias.

1.1 Vectorization

We need a concise way to express the vector dot product in code, with the help of NumPy.
We can write the weight vector and the feature vector as

w = [w_1, w_2, ..., w_n],  x = [x_1, x_2, ..., x_n]

Now, how do we express the dot product of w and x? We could write every term out by hand (w[0]*x[0] + w[1]*x[1] + ...), or accumulate the sum in a for-loop. Neither is good, because the code runs more slowly.
Use np.dot(w, x) directly:

f = np.dot(w, x) + b
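
As a minimal, runnable sketch of the three options above (the values of w, b, and x are made up for illustration):

import numpy as np

w = np.array([1.0, 2.5, -3.3])
b = 4.0
x = np.array([10, 20, 30])

# written out term by term: does not generalize to large n
f = w[0]*x[0] + w[1]*x[1] + w[2]*x[2] + b

# with a loop: generalizes, but runs slowly
f = 0
for j in range(w.shape[0]):
    f = f + w[j] * x[j]
f = f + b

# vectorized: one call, same result, and fast
f = np.dot(w, x) + b
print(f)   # about -35.0 for these values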

1.2 Code and efficiency comparison

With vectorization, the computer can perform the multiplications of the dot product (and of the gradient-descent update) in parallel, so the running time behaves roughly like O(1), which is faster than the O(n) of unvectorized code.

  • Import numpy and time (used to measure how long things take)
import numpy as np    
import time

1.2.1 向量的创建Vector Creation

# NumPy routines which allocate memory and fill arrays with value
a = np.zeros(4);                print(f"np.zeros(4) :   a = {a}, a shape = {a.shape}, a data type = {a.dtype}")
a = np.zeros((4,));             print(f"np.zeros(4,) :  a = {a}, a shape = {a.shape}, a data type = {a.dtype}")
a = np.random.random_sample(4); print(f"np.random.random_sample(4): a = {a}, a shape = {a.shape}, a data type = {a.dtype}")


np.zeros(4) :   a = [0. 0. 0. 0.], a shape = (4,), a data type = float64
np.zeros(4,) :  a = [0. 0. 0. 0.], a shape = (4,), a data type = float64
np.random.random_sample(4): a = [0.43105022 0.79710395 0.16488279 0.73185609], a shape = (4,), a data type = float64

np.zeros fills the elements with 0; np.random.random_sample fills them with random values.

# NumPy routines which allocate memory and fill arrays with value but do not accept shape as input argument
a = np.arange(4.);              print(f"np.arange(4.):     a = {a}, a shape = {a.shape}, a data type = {a.dtype}")
a = np.random.rand(4);          print(f"np.random.rand(4): a = {a}, a shape = {a.shape}, a data type = {a.dtype}")

np.arange(4.):     a = [0. 1. 2. 3.], a shape = (4,), a data type = float64
np.random.rand(4): a = [0.95584264 0.41866986 0.19089539 0.32726125], a shape = (4,), a data type = float64

arange produces increasing values.

# NumPy routines which allocate memory and fill with user specified values
a = np.array([5,4,3,2]);  print(f"np.array([5,4,3,2]):  a = {a},     a shape = {a.shape}, a data type = {a.dtype}")
a = np.array([5.,4,3,2]); print(f"np.array([5.,4,3,2]): a = {a}, a shape = {a.shape}, a data type = {a.dtype}")

np.array([5,4,3,2]):  a = [5 4 3 2],     a shape = (4,), a data type = int64
np.array([5.,4,3,2]): a = [5. 4. 3. 2.], a shape = (4,), a data type = float64

Here the values are specified directly.

1.2.2 Operations on Vectors

1.2.2.1 Indexing

As in C, indexing starts from zero.

#vector indexing operations on 1-D vectors
a = np.arange(10)
print(a)

#access an element
print(f"a[2].shape: {
     a[2].shape} a[2]  = {
     a[2]}, Accessing an element returns a scalar")

# access the last element, negative indexes count from the end
print(f"a[-1] = {
     a[-1]}")

#indexes must be within the range of the vector or they will produce an error
try:
    c = a[10]
except Exception as e:
    print("The error message you'll see is:")
    print(e)

[0 1 2 3 4 5 6 7 8 9]
a[2].shape: () a[2]  = 2, Accessing an element returns a scalar
a[-1] = 9
The error message you'll see is:
index 10 is out of bounds for axis 0 with size 10
1.2.2.2 Slicing

Slicing creates an array of indexes using a set of three values (start:stop:step).
Various slices:

#vector slicing operations
a = np.arange(10)
print(f"a         = {
     a}")

#access 5 consecutive elements (start:stop:step)
c = a[2:7:1];     print("a[2:7:1] = ", c)

# access 3 elements separated by two 
c = a[2:7:2];     print("a[2:7:2] = ", c)

# access all elements index 3 and above
c = a[3:];        print("a[3:]    = ", c)

# access all elements below index 3
c = a[:3];        print("a[:3]    = ", c)

# access all elements
c = a[:];         print("a[:]     = ", c)

a         = [0 1 2 3 4 5 6 7 8 9]
a[2:7:1] =  [2 3 4 5 6]
a[2:7:2] =  [2 4 6]
a[3:]    =  [3 4 5 6 7 8 9]
a[:3]    =  [0 1 2]
a[:]     =  [0 1 2 3 4 5 6 7 8 9]
1.2.2.3 Operations on a whole vector
a = np.array([1,2,3,4])
print(f"a             : {a}")
# negate elements of a
b = -a 
print(f"b = -a        : {b}")

# sum all elements of a, returns a scalar
b = np.sum(a) 
print(f"b = np.sum(a) : {b}")

b = np.mean(a)
print(f"b = np.mean(a): {b}")

b = a**2
print(f"b = a**2      : {b}")

a             : [1 2 3 4]
b = -a        : [-1 -2 -3 -4]
b = np.sum(a) : 10
b = np.mean(a): 2.5
b = a**2      : [ 1  4  9 16]
1.2.2.4 Vector-vector operations

Vector addition:

a = np.array([ 1, 2, 3, 4])
b = np.array([-1,-2, 3, 4])
print(f"Binary operators work element wise: {
     a + b}")

Binary operators work element wise: [0 0 6 8]

Addition requires the vectors to be the same length:

#try a mismatched vector operation
c = np.array([1, 2])
try:
    d = a + c
except Exception as e:
    print("The error message you'll see is:")
    print(e)
    
The error message you'll see is:
operands could not be broadcast together with shapes (4,) (2,)
1.2.2.5 Scalar-vector operations

Multiplication by a scalar:

a = np.array([1, 2, 3, 4])

# multiply a by a scalar
b = 5 * a 
print(f"b = 5 * a : {
     b}")

b = 5 * a : [ 5 10 15 20]
1.2.2.6 Vector dot product ***

We could write our own function that implements the dot product with a loop.
Definition:

def my_dot(a, b): 
    """
    Compute the dot product of two vectors

    Args:
      a (ndarray (n,)):  input vector 
      b (ndarray (n,)):  input vector with same dimension as a

    Returns:
      x (scalar):  the dot product of a and b
    """
    x = 0
    for i in range(a.shape[0]):
        x = x + a[i] * b[i]
    return x

Using it:

# test 1-D
a = np.array([1, 2, 3, 4])
b = np.array([-1, 4, 3, 2])
print(f"my_dot(a, b) = {
     my_dot(a, b)}")

my_dot(a, b) = 24

But this is too slow; instead we use np.dot:

# test 1-D
a = np.array([1, 2, 3, 4])
b = np.array([-1, 4, 3, 2])
c = np.dot(a, b)
print(f"NumPy 1-D np.dot(a, b) = {
     c}, np.dot(a, b).shape = {
     c.shape} ") 
c = np.dot(b, a)
print(f"NumPy 1-D np.dot(b, a) = {
     c}, np.dot(a, b).shape = {
     c.shape} ")

NumPy 1-D np.dot(a, b) = 24, np.dot(a, b).shape = () 
NumPy 1-D np.dot(b, a) = 24, np.dot(a, b).shape = () 

The empty parentheses in np.dot(a, b).shape = () indicate that the result is a scalar.
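
A quick sketch of the difference between shape () and shape (1,):

import numpy as np

s = np.dot(np.array([1, 2]), np.array([3, 4]))
print(s, s.shape)   # 11 ()     -- shape () is a scalar
v = np.array([11])
print(v, v.shape)   # [11] (1,) -- shape (1,) is a length-1 vector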

To prove the loop is slower, time both on vectors of length ten million:

np.random.seed(1)
a = np.random.rand(10000000)  # very large arrays
b = np.random.rand(10000000)

tic = time.time()  # capture start time
c = np.dot(a, b)
toc = time.time()  # capture end time

print(f"np.dot(a, b) =  {
     c:.4f}")
print(f"Vectorized version duration: {
     1000*(toc-tic):.4f} ms ")

tic = time.time()  # capture start time
c = my_dot(a,b)
toc = time.time()  # capture end time

print(f"my_dot(a, b) =  {
     c:.4f}")
print(f"loop version duration: {
     1000*(toc-tic):.4f} ms ")

del(a);del(b)  #remove these big arrays from memory

np.dot(a, b) =  2501072.5817
Vectorized version duration: 195.9784 ms 
my_dot(a, b) =  2501072.5817
loop version duration: 9432.0059 ms 

np.dot took about 0.2 seconds while the loop took about 9 seconds! The exact times differ from machine to machine; they depend on the hardware.

np.dot behaves like O(1) in time (the hardware performs the multiplications in parallel), while the loop is O(n).
The same idea applies to the gradient-descent update, as sketched below.
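
A minimal sketch of that vectorized update; the weights, derivatives, and learning rate below are made-up values:

import numpy as np

w = np.array([0.5, 1.3, -3.4])   # current weights
d = np.array([0.3, 0.2,  2.0])   # partial derivatives of the cost for each weight
alpha = 0.1                      # learning rate

# loop version: updates one weight at a time, O(n)
# for j in range(w.shape[0]):
#     w[j] = w[j] - alpha * d[j]

# vectorized version: all weights updated in one parallel step
w = w - alpha * d
print(w)   # approximately [ 0.47  1.28 -3.6 ]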

  • NumPy makes better use of the data parallelism available in the underlying hardware. GPUs and modern CPUs implement Single Instruction, Multiple Data (SIMD) pipelines, which allow multiple operations to be issued in parallel. This is critical in machine learning, where the data sets are often very large.
1.2.2.7 Vector shape

The number of entries in shape is the number of dimensions of the tensor: 0 entries means a scalar, 1 a vector, 2 a matrix, and n an n-dimensional tensor. The value of each entry is the length along that axis.
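
For example, a minimal sketch:

import numpy as np

s = np.array(3.14)        # shape ()        : 0 entries, a scalar
v = np.zeros(4)           # shape (4,)      : 1 entry, a vector of length 4
M = np.zeros((2, 3))      # shape (2, 3)    : 2 entries, a matrix with 2 rows, 3 columns
T = np.zeros((2, 3, 4))   # shape (2, 3, 4) : 3 entries, a 3-D tensor

print(s.shape, v.shape, M.shape, T.shape)   # () (4,) (2, 3) (2, 3, 4)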

# show common Course 1 example
X = np.array([[1],[2],[3]])    # a 2-D array: 3 rows, 1 column
print(f"X.shape = {X.shape}")  # (3, 1)
