Machine Learning for Programmers (3) - Linear Models, Activation Functions, and Multi-layer Linear Models (3)

The output is as follows:

dataset_x: tensor([[ 0.8487, 0.6920, -0.3160], [-2.1152, -0.3561, 0.4372], [ 0.4913, -0.2041, 0.1198], [ 1.2377, 1.1168, -0.2473], [-1.0438, -1.3453, 0.7854], [ 0.9928, 0.5988, -1.5551], [-0.3414, 1.8530, 0.4681], [-0.1577, 1.4437, 0.2660], [ 1.3894, 1.5863, 0.9463], [-0.8437, 0.9318, 1.2590], [ 2.0050, 0.0537, 0.4397], [ 0.1124, 0.6408, 0.4412], [-0.2159, -0.7425, 0.5627], [ 0.2596, 0.5229, 2.3022], [-1.4689, -1.5867, -0.5692], [ 0.9200, 1.1108, 1.2899], [-1.4782, 2.5672, -0.4731], [ 0.3356, -1.6293, -0.5497], [-0.4798, -0.4997, -1.0670], [ 1.1149, -0.1407, 0.8058]])
dataset_y: tensor([[ 9.2847], [ 6.4842], [ 8.4426], [10.7294], [ 6.6217], [ 5.5252], [12.7689], [11.5278], [15.4009], [12.7970], [11.4315], [10.7175], [ 7.9872], [16.2120], [ 1.6500], [15.0112], [10.2369], [ 3.4277], [ 3.3199], [11.2509]])
epoch: 1
loss: 142.77590942382812
weight: Parameter containing: tensor([[-0.0043, 0.3097, -0.4752]], requires_grad=True)
bias: Parameter containing: tensor([-0.4249], requires_grad=True)
validating x: tensor([[-0.4798, -0.4997, -1.0670], [ 0.8487, 0.6920, -0.3160], [ 0.1124, 0.6408, 0.4412], [-1.0438, -1.3453, 0.7854]])
y: tensor([[ 3.3199], [ 9.2847], [10.7175], [ 6.6217]])
predicted: tensor([[-0.1385], [ 0.3020], [-0.0126], [-1.1801]], grad_fn=<AddmmBackward>)
validating accuracy: -0.04714548587799072
epoch: 2
loss: 131.40403747558594
weight: Parameter containing: tensor([[ 0.0675, 0.4937, -0.3163]], requires_grad=True)
bias: Parameter containing: tensor([-0.1970], requires_grad=True)
validating x: tensor([[-0.4798, -0.4997, -1.0670], [ 0.8487, 0.6920, -0.3160], [ 0.1124, 0.6408, 0.4412], [-1.0438, -1.3453, 0.7854]])
y: tensor([[ 3.3199], [ 9.2847], [10.7175], [ 6.6217]])
predicted: tensor([[-0.2023], [ 0.6518], [ 0.3935], [-1.1479]], grad_fn=<AddmmBackward>)
validating accuracy: -0.03184401988983154
epoch: 3
loss: 120.98343658447266
weight: Parameter containing: tensor([[ 0.1357, 0.6687, -0.1639]], requires_grad=True)
bias: Parameter containing: tensor([0.0221], requires_grad=True)
validating x: tensor([[-0.4798, -0.4997, -1.0670], [ 0.8487, 0.6920, -0.3160], [ 0.1124, 0.6408, 0.4412], [-1.0438, -1.3453, 0.7854]])
y: tensor([[ 3.3199], [ 9.2847], [10.7175], [ 6.6217]])
predicted: tensor([[-0.2622], [ 0.9860], [ 0.7824], [-1.1138]], grad_fn=<AddmmBackward>)
validating accuracy: -0.016991496086120605

(intermediate output omitted)

epoch: 637
loss: 0.001102567883208394
weight: Parameter containing: tensor([[1.0044, 2.0283, 3.0183]], requires_grad=True)
bias: Parameter containing: tensor([7.9550], requires_grad=True)
validating x: tensor([[-0.4798, -0.4997, -1.0670], [ 0.8487, 0.6920, -0.3160], [ 0.1124, 0.6408, 0.4412], [-1.0438, -1.3453, 0.7854]])
y: tensor([[ 3.3199], [ 9.2847], [10.7175], [ 6.6217]])
predicted: tensor([[ 3.2395], [ 9.2574], [10.6993], [ 6.5488]], grad_fn=<AddmmBackward>)
validating accuracy: 0.9900396466255188
testing x: tensor([[-0.3414, 1.8530, 0.4681], [-1.4689, -1.5867, -0.5692], [ 1.1149, -0.1407, 0.8058], [ 0.3356, -1.6293, -0.5497]])
y: tensor([[12.7689], [ 1.6500], [11.2509], [ 3.4277]])
predicted: tensor([[12.7834], [ 1.5438], [11.2217], [ 3.3285]], grad_fn=<AddmmBackward>)
testing accuracy: 0.9757462739944458
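Note that the "validating accuracy" printed in the log is not classification accuracy; it is a regression metric equal to 1 minus the mean relative error between predictions and targets. The exact formula isn't shown in this excerpt, but the following sketch reproduces the logged numbers (for epoch 1 it gives about -0.0471, matching -0.04714548587799072 above):

```python
import torch

# Validation targets and the epoch-1 predictions copied from the log above.
validating_y = torch.tensor([[3.3199], [9.2847], [10.7175], [6.6217]])
predicted = torch.tensor([[-0.1385], [0.3020], [-0.0126], [-1.1801]])

# "accuracy" = 1 - mean relative error; 1.0 means a perfect fit,
# and it can go negative when predictions are far off (as in early epochs).
accuracy = 1 - ((validating_y - predicted).abs() / validating_y).mean()
print(accuracy.item())  # ≈ -0.0471
```

This metric only behaves sensibly when the targets are positive, which holds for the dataset in this example.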

As you can see, the final weight is close to 1, 2, 3 and the bias is close to 8. Comparing this with the example at the end of the previous article, you will also notice that the code is almost identical apart from the part that defines the model (the code later in this series mostly follows the same structure; the idea is to learn the common pattern first, then the details).
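The full training code isn't included in this excerpt, so here is a minimal sketch of the kind of loop that produces a log like the one above, assuming a single `nn.Linear(3, 1)` model fitted with MSE loss and SGD; the variable names, learning rate, and epoch count are my own choices, not necessarily the article's:

```python
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic dataset following y = x1*1 + x2*2 + x3*3 + 8,
# the same underlying relation the log converges to.
dataset_x = torch.randn(20, 3)
dataset_y = dataset_x.mv(torch.tensor([1.0, 2.0, 3.0])).unsqueeze(1) + 8

model = nn.Linear(in_features=3, out_features=1)  # weight: (1, 3), bias: (1,)
loss_function = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)

for epoch in range(1000):
    predicted = model(dataset_x)
    loss = loss_function(predicted, dataset_y)
    optimizer.zero_grad()  # clear gradients left over from the previous step
    loss.backward()        # compute gradients of the loss w.r.t. weight and bias
    optimizer.step()       # update the parameters

print(model.weight)  # should approach [[1, 2, 3]]
print(model.bias)    # should approach [8]
```

With full-batch gradient descent on such a small, well-conditioned dataset, the parameters converge to the true coefficients within a few hundred epochs, just as the log shows.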
