I am trying to do logistic regression in TensorFlow with two cost functions:

dim = train_X.shape[1]
X = tf.placeholder(tf.float32, shape=(None, dim))
y = tf.placeholder(tf.float32, shape=(None, 1))
W = tf.Variable(tf.zeros(shape=(dim, 1)))
b = tf.Variable(tf.zeros([1]))
y_pred = tf.nn.sigmoid(tf.add(tf.matmul(X, W), b))  # using matmul for matrix multiplication; X.shape (768, 8), W.shape (8, 1)
cost = tf.reduce_mean(tf.add(-tf.multiply(y, tf.log(y_pred)), -tf.multiply(1 - y, tf.log(1 - y_pred))))
cost2 = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=y_pred, labels=y))
train = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
train2 = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost2)

These two cost functions give different results, although my understanding is that they should give the same result.

session = tf.Session()
session.run(init)
print(session.run(cost, feed_dict={X: train_X, y: train_y}))
print(session.run(cost2, feed_dict={X: train_X, y: train_y}))

Can someone explain why this happens, and what I should do to make them show the same result?
1 Answer

杨__羊羊
The prediction tensor fed to cost2 should be the raw logits:

y_pred2 = tf.add(tf.matmul(X, W2), b2)

because tf.nn.sigmoid_cross_entropy_with_logits already applies the sigmoid function internally. In your code, logits=y_pred passes values that have already gone through a sigmoid, so for cost2 the sigmoid is applied twice.
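A minimal sketch of the corrected graph, assuming TensorFlow 1.x APIs (tf.placeholder / tf.Session) and hypothetical random stand-in data for train_X / train_y with the (768, 8) shapes from the question:

import numpy as np
import tensorflow as tf  # assumes TensorFlow 1.x APIs

# Hypothetical stand-in data with the shapes mentioned in the question.
train_X = np.random.rand(768, 8).astype(np.float32)
train_y = (np.random.rand(768, 1) > 0.5).astype(np.float32)

dim = train_X.shape[1]
X = tf.placeholder(tf.float32, shape=(None, dim))
y = tf.placeholder(tf.float32, shape=(None, 1))
W = tf.Variable(tf.zeros((dim, 1)))
b = tf.Variable(tf.zeros([1]))

logits = tf.add(tf.matmul(X, W), b)  # raw scores, no sigmoid applied
y_pred = tf.nn.sigmoid(logits)       # probabilities, used only by the manual cost

# Manual binary cross-entropy over probabilities.
cost = tf.reduce_mean(-y * tf.log(y_pred) - (1 - y) * tf.log(1 - y_pred))
# Built-in version: takes the LOGITS and applies the sigmoid internally.
cost2 = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=y))

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    c1, c2 = session.run([cost, cost2], feed_dict={X: train_X, y: train_y})
    print(c1, c2)  # both now print the same value

With zero-initialized weights, both costs evaluate to log(2) ≈ 0.6931. Note that for very large or very small logits the manual formula can still drift from the built-in one, because tf.log(y_pred) underflows while sigmoid_cross_entropy_with_logits uses a numerically stable formulation.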