Will Artificial Intelligence Harm Humans? How Do We Control Them? (6)

If you look closely, you will notice that the H_sigmoid matrix is exactly the matrix we need for approximating the sigmoid function. :) Finally, we train our neural network with the code below. If neural network concepts are still unfamiliar, you can search online or read the author's post A Neural Network in 11 Lines of Python; the XOR network here essentially follows that post, with the corresponding operations swapped out for the encryption-aware functions defined above.
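Before running the full encrypted network, it may help to see in plaintext what that sigmoid approximation does. The sketch below is my own illustration, not part of the article's encrypted pipeline; it assumes H_sigmoid encodes the standard low-degree Taylor coefficients of sigmoid around zero, and the helper names sigmoid_taylor and sigmoid_exact are hypothetical. It compares the polynomial against the true sigmoid, which is close near zero and degrades for large |x|:

import numpy as np

# Degree-5 Taylor expansion of sigmoid around 0 (standard coefficients):
# sigmoid(x) ~= 1/2 + x/4 - x^3/48 + x^5/480
def sigmoid_taylor(x):
    return 0.5 + x / 4.0 - x**3 / 48.0 + x**5 / 480.0

def sigmoid_exact(x):
    return 1.0 / (1.0 + np.exp(-x))

for x in np.linspace(-2, 2, 5):
    print("x=%5.2f  taylor=%.4f  exact=%.4f"
          % (x, sigmoid_taylor(x), sigmoid_exact(x)))

Because the homomorphic scheme can only add and multiply, replacing sigmoid with a polynomial like this is what makes the activation computable on ciphertexts at all.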

np.random.seed(1234)

input_dataset = [[],[0],[1],[0,1]]
output_dataset = [[0],[1],[1],[0]]

input_dim = 3
hidden_dim = 4
output_dim = 1
alpha = 0.015

# one-way encrypt the training labels using the public key
y = list()
for i in range(4):
    y.append(one_way_encrypt_vector(output_dataset[i],scaling_factor))

# generate the weight values
syn0_t = (np.random.randn(input_dim,hidden_dim) * 0.2) - 0.1
syn1_t = (np.random.randn(output_dim,hidden_dim) * 0.2) - 0.1

# one-way encrypt the weight values
syn1 = list()
for row in syn1_t:
    syn1.append(one_way_encrypt_vector(row,scaling_factor).astype('int64'))

syn0 = list()
for row in syn0_t:
    syn0.append(one_way_encrypt_vector(row,scaling_factor).astype('int64'))

# begin training
for iter in range(1000):

    decrypted_error = 0
    encrypted_error = 0
    for row_i in range(4):

        if(row_i == 0):
            layer_1 = sigmoid(syn0[0])
        elif(row_i == 1):
            layer_1 = sigmoid((syn0[0] + syn0[1])/2.0)
        elif(row_i == 2):
            layer_1 = sigmoid((syn0[0] + syn0[2])/2.0)
        else:
            layer_1 = sigmoid((syn0[0] + syn0[1] + syn0[2])/3.0)

        layer_2 = (innerProd(syn1[0],layer_1,M_onehot[len(layer_1) - 2][0],l) / float(scaling_factor))[0:2]

        layer_2_delta = add_vectors(layer_2,-y[row_i])

        syn1_trans = transpose(syn1)

        one_minus_layer_1 = [(scaling_factor * c_ones[len(layer_1) - 2]) - layer_1]
        sigmoid_delta = elementwise_vector_mult(layer_1,one_minus_layer_1[0],scaling_factor)
        layer_1_delta_nosig = mat_mul_forward(layer_2_delta,syn1_trans,1).astype('int64')
        layer_1_delta = elementwise_vector_mult(layer_1_delta_nosig,sigmoid_delta,scaling_factor) * alpha

        syn1_delta = np.array(outer_product(layer_2_delta,layer_1)).astype('int64')

        syn1[0] -= np.array(syn1_delta[0] * alpha).astype('int64')
        syn0[0] -= (layer_1_delta).astype('int64')

        if(row_i == 1):
            syn0[1] -= (layer_1_delta).astype('int64')
        elif(row_i == 2):
            syn0[2] -= (layer_1_delta).astype('int64')
        elif(row_i == 3):
            syn0[1] -= (layer_1_delta).astype('int64')
            syn0[2] -= (layer_1_delta).astype('int64')

        # To monitor training, the loss is decrypted below.
        # If the current environment were not secure, the encrypted loss
        # would instead be sent somewhere safe to be decrypted.
        encrypted_error += int(np.sum(np.abs(layer_2_delta)) / scaling_factor)
        decrypted_error += np.sum(np.abs(s_decrypt(layer_2_delta).astype('float')/scaling_factor))

    sys.stdout.write("\r Iter:" + str(iter) + " Encrypted Loss:" + str(encrypted_error) +
                     " Decrypted Loss:" + str(decrypted_error) + " Alpha:" + str(alpha))

    # just to make the output tidier
    if(iter % 10 == 0):
        print()

    # stop training once the encrypted error falls below a set threshold
    if(encrypted_error < 25000000):
        break

print("\nFinal Prediction:")

for row_i in range(4):

    if(row_i == 0):
        layer_1 = sigmoid(syn0[0])
    elif(row_i == 1):
        layer_1 = sigmoid((syn0[0] + syn0[1])/2.0)
    elif(row_i == 2):
        layer_1 = sigmoid((syn0[0] + syn0[2])/2.0)
    else:
        layer_1 = sigmoid((syn0[0] + syn0[1] + syn0[2])/3.0)

    layer_2 = (innerProd(syn1[0],layer_1,M_onehot[len(layer_1) - 2][0],l) / float(scaling_factor))[0:2]
    print("True Pred:" + str(output_dataset[row_i]) + " Encrypted Prediction:" + str(layer_2) +
          " Decrypted Prediction:" + str(s_decrypt(layer_2) / scaling_factor))

Iter:0 Encrypted Loss:84890656 Decrypted Loss:2.529 Alpha:0.015
Iter:10 Encrypted Loss:69494197 Decrypted Loss:2.071 Alpha:0.015
Iter:20 Encrypted Loss:64017850 Decrypted Loss:1.907 Alpha:0.015
Iter:30 Encrypted Loss:62367015 Decrypted Loss:1.858 Alpha:0.015
Iter:40 Encrypted Loss:61874493 Decrypted Loss:1.843 Alpha:0.015
Iter:50 Encrypted Loss:61399244 Decrypted Loss:1.829 Alpha:0.015
Iter:60 Encrypted Loss:60788581 Decrypted Loss:1.811 Alpha:0.015
Iter:70 Encrypted Loss:60327357 Decrypted Loss:1.797 Alpha:0.015
Iter:80 Encrypted Loss:59939426 Decrypted Loss:1.786 Alpha:0.015
Iter:90 Encrypted Loss:59628769 Decrypted Loss:1.778 Alpha:0.015
Iter:100 Encrypted Loss:59373621 Decrypted Loss:1.769 Alpha:0.015
Iter:110 Encrypted Loss:59148014 Decrypted Loss:1.763 Alpha:0.015
Iter:120 Encrypted Loss:58934571 Decrypted Loss:1.757 Alpha:0.015
Iter:130 Encrypted Loss:58724873 Decrypted Loss:1.75 Alpha:0.0155
Iter:140 Encrypted Loss:58516008 Decrypted Loss:1.744 Alpha:0.015
Iter:150 Encrypted Loss:58307663 Decrypted Loss:1.739 Alpha:0.015
Iter:160 Encrypted Loss:58102049 Decrypted Loss:1.732 Alpha:0.015
Iter:170 Encrypted Loss:57863091 Decrypted Loss:1.725 Alpha:0.015
Iter:180 Encrypted Loss:55470158 Decrypted Loss:1.653 Alpha:0.015
Iter:190 Encrypted Loss:54650383 Decrypted Loss:1.629 Alpha:0.015
Iter:200 Encrypted Loss:53838756 Decrypted Loss:1.605 Alpha:0.015
Iter:210 Encrypted Loss:51684722 Decrypted Loss:1.541 Alpha:0.015
Iter:220 Encrypted Loss:54408709 Decrypted Loss:1.621 Alpha:0.015
Iter:230 Encrypted Loss:54946198 Decrypted Loss:1.638 Alpha:0.015
Iter:240 Encrypted Loss:54668472 Decrypted Loss:1.63 Alpha:0.0155
Iter:250 Encrypted Loss:55444008 Decrypted Loss:1.653 Alpha:0.015
Iter:260 Encrypted Loss:54094286 Decrypted Loss:1.612 Alpha:0.015
Iter:270 Encrypted Loss:51251831 Decrypted Loss:1.528 Alpha:0.015
Iter:276 Encrypted Loss:24543890 Decrypted Loss:0.732 Alpha:0.015

Final Prediction:
True Pred:[0] Encrypted Prediction:[-3761423723.0718255 0.0] Decrypted Prediction:[-0.112]
True Pred:[1] Encrypted Prediction:[24204806753.166267 0.0] Decrypted Prediction:[ 0.721]
True Pred:[1] Encrypted Prediction:[23090462896.17028 0.0] Decrypted Prediction:[ 0.688]
True Pred:[0] Encrypted Prediction:[1748380342.4553354 0.0] Decrypted Prediction:[ 0.052]

(The occasional stray trailing digit in the log, e.g. Alpha:0.0155, is leftover screen text from the \r carriage-return logging overwriting a slightly shorter line; it is not a change in the learning rate.)
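Note that the decrypted predictions already round to the correct XOR outputs. As a final hypothetical step on the side of whoever holds the secret key (my sketch, not shown in the article), the decrypted, rescaled predictions could simply be thresholded at 0.5:

# Hypothetical post-processing by the secret-key holder: threshold the
# decrypted, rescaled predictions at 0.5 to recover the XOR labels.
decrypted_preds = [-0.112, 0.721, 0.688, 0.052]  # values from the run above
labels = [1 if p >= 0.5 else 0 for p in decrypted_preds]
print(labels)  # [0, 1, 1, 0] -- matches output_dataset

The key point is that everything up to this thresholding step ran over encrypted values: the network learned XOR without the training environment ever seeing the plaintext weights or labels.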

