I'm 靠谱客 blogger 笨笨啤酒. This post, collected during recent development, covers an implementation of constrained weighted least squares; I thought it was worth sharing as a reference.

Overview

@[TOC] Implementing constrained weighted least squares

Weighted least squares: the idea

For details, see this page: weighted least squares
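In symbols (matching the code below, where `W` is `self.weight` and its pseudo-inverse stands in for the true inverse), the problem being solved is

```latex
\min_{\beta}\;(y - X\beta)^{\top} W^{-1} (y - X\beta)
\quad\text{subject to}\quad \sum_{i}\beta_i = 1,\qquad 0 \le \beta_i \le 1,
```

with the intercept term, if present, left unconstrained. When `W` is the identity matrix this reduces to ordinary least squares.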

Python implementation

import numpy as np
import pandas as pd
from scipy.optimize import minimize


class Constrained_regression:
    def __init__(self, weight=None, intercept=True):
        """
        Defaults to ordinary least squares (identity weight matrix).
        """
        self.weight = weight
        self.intercept = intercept

    def weight_error(self, B, X_train, y_train):
        # Weighted least squares loss, used as the objective function.
        # B is beta, the parameter vector being optimized.
        data1 = np.dot(X_train, B)
        data2 = y_train - data1
        if self.weight is None:  # ordinary least squares in this case
            self.weight = np.identity(self.sample_num)
        weight_1 = np.linalg.pinv(self.weight)
        temp3 = data2.transpose()
        error1 = np.dot(temp3, weight_1)
        error = np.dot(error1, data2)
        return error

    def cons(self):
        # Build the constraints: coefficients sum to 1 and lie in [0, 1];
        # the intercept (last element), if present, is unconstrained.
        if self.intercept:
            cons = ({'type': 'eq', 'fun': lambda x: np.sum(x[:-1]) - 1})
        else:
            cons = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1})
        bnds = []
        for i in range(self.feature_num):
            bnds.append((0, 1))
        if self.intercept:
            bnds.append((None, None))
        bnds = tuple(bnds)
        return cons, bnds

    def fit(self, X, y):
        # Train the model.
        X_train = X.copy()  # copy so we don't mutate the caller's DataFrame
        y_train = y
        sample_num = X_train.shape[0]
        feature_num = X_train.shape[1]
        self.feature_num = feature_num
        self.sample_num = sample_num
        # Uniform initial guess for the coefficients.
        B_ini = list(np.zeros(feature_num) + 1 / feature_num)
        # Add the intercept column (and its initial value) only when requested,
        # so the parameter vector matches the bounds in cons().
        if self.intercept:
            X_train['inter'] = 1
            B_ini.append(0.5)
        fun = lambda x: self.weight_error(B=x, X_train=X_train, y_train=y_train)
        cons, bnds = self.cons()
        res = minimize(fun, B_ini, method='SLSQP', constraints=cons,
                       tol=1e-5, bounds=bnds)
        return res.x


if __name__ == "__main__":
    data = pd.read_csv('YFDdata.csv')
    X_train = data[['bond', 'Conbond', 'Credebt']]
    y_train = data['YFD']
    model = Constrained_regression()
    conf = model.fit(X_train, y_train)
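The CSV used above isn't reproduced here, but the same constrained fit is easy to sanity-check on synthetic data. A minimal self-contained sketch, calling `scipy.optimize.minimize` directly with an identity weight matrix and no intercept (the data and the names `true_w`, `sse` are made up for illustration):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic data: y is an exact convex combination of three features,
# so the recovered weights should sum to 1 and lie in [0, 1].
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([0.5, 0.3, 0.2])
y = X @ true_w

def sse(b):
    # Ordinary least squares loss (identity weight matrix).
    r = y - X @ b
    return r @ r

cons = {'type': 'eq', 'fun': lambda b: np.sum(b) - 1}
bnds = [(0, 1)] * 3
res = minimize(sse, np.full(3, 1 / 3), method='SLSQP',
               constraints=cons, bounds=bnds, tol=1e-9)
print(res.x)  # ≈ [0.5, 0.3, 0.2], and the entries sum to 1
```

Because `true_w` itself satisfies the constraints and drives the loss to zero, SLSQP should recover it almost exactly, which makes this a convenient regression test for the class above.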

This is my first CSDN post, so it's fairly bare-bones; I'll revise it later.
