# [Machine Learning] Linear Regression: Implementing Single-Variable Gradient Descent (Python)

2018.10.08 18:51

## Python Implementation

``````
# Dataset: ex1data1.txt from Andrew Ng's first programming assignment
import numpy as np
import matplotlib.pyplot as plt

# Read the data (comma-separated columns)
def readData(filePath):
    data = np.loadtxt(filePath, delimiter=',')
    return data

# Cost function: returns the value of the squared-error cost
def costFunction(theta_0, theta_1, x, y, m):
    predictValue = theta_0 + theta_1 * x
    return sum((predictValue - y) ** 2) / (2 * m)

# Gradient descent
# data: the data set
# theta_0, theta_1: parameters θ_0, θ_1
# iterations: number of iterations
# alpha: step size (learning rate)
def gradientDescent(data, theta_0, theta_1, iterations, alpha):
    eachIterationValue = np.zeros((iterations, 1))
    x = data[:, 0]
    y = data[:, 1]
    m = data.shape[0]
    for i in range(0, iterations):
        hypothesis = theta_0 + theta_1 * x
        # Update both parameters simultaneously via temporaries
        temp_0 = theta_0 - alpha * (1 / m) * sum(hypothesis - y)
        temp_1 = theta_1 - alpha * (1 / m) * sum((hypothesis - y) * x)
        theta_0 = temp_0
        theta_1 = temp_1
        # Record the cost at each iteration to plot convergence later
        costFunction_temp = costFunction(theta_0, theta_1, x, y, m)
        eachIterationValue[i, 0] = costFunction_temp
    return theta_0, theta_1, eachIterationValue

if __name__ == "__main__":
    data = readData('ex1data1.txt')
    iterations = 1500
    alpha = 0.01
    theta_0, theta_1, eachIterationValue = gradientDescent(data, 0, 0, iterations, alpha)
    plt.scatter(data[:, 0], data[:, 1], color='g', s=20)
    hypothesis = theta_0 + theta_1 * data[:, 0]
    plt.plot(data[:, 0], hypothesis)
    plt.title('Fitting curve')
    plt.show()
    plt.plot(np.arange(iterations), eachIterationValue)
    plt.title('Cost function')
    plt.show()
``````
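As a quick sanity check (not part of the original post), the same update rule can be verified against NumPy's closed-form least-squares fit. The minimal sketch below uses synthetic, noise-free data generated from y = 2 + 3x; the function name, learning rate, and iteration count here are illustrative choices, not values from the dataset above:

```python
import numpy as np

def gradient_descent_1d(x, y, alpha, iterations):
    """Fit y ≈ theta_0 + theta_1 * x by batch gradient descent."""
    m = x.shape[0]
    theta_0, theta_1 = 0.0, 0.0
    for _ in range(iterations):
        hypothesis = theta_0 + theta_1 * x
        # Same simultaneous update as in the post, written with NumPy sums
        grad_0 = (hypothesis - y).sum() / m
        grad_1 = ((hypothesis - y) * x).sum() / m
        theta_0 -= alpha * grad_0
        theta_1 -= alpha * grad_1
    return theta_0, theta_1

# Synthetic data drawn exactly from y = 2 + 3x
x = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * x

t0, t1 = gradient_descent_1d(x, y, alpha=0.5, iterations=5000)

# Closed-form least-squares fit for comparison (degree-1 polynomial)
slope, intercept = np.polyfit(x, y, 1)
```

With enough iterations both approaches should agree to several decimal places; if gradient descent diverges instead, the learning rate is too large for the scale of `x`.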
