1 Answer

Sorry for misunderstanding the question at first; this one is quite tricky. The mismatch is most likely caused by the BatchNorm layers, as @Natthaphon mentioned in the comments, because I tried the same thing on VGG16 and there the losses do match.
Then I tested on ResNet50: even with every layer "frozen", the evaluate loss and the fit loss still disagree. In fact, I checked the BN weights manually and they really do not change.
from keras.applications import ResNet50, VGG16
from keras.applications.resnet50 import preprocess_input
from keras_preprocessing import image
import keras
from keras import backend as K
import numpy as np
img_path = '/home/zhihao/Downloads/elephant.jpeg'
img = image.load_img(img_path, target_size=(224, 224))
model = ResNet50(weights='imagenet')
for layer in model.layers:
    layer.trainable = False  # freeze every layer, including the BatchNorm layers
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
y = np.zeros((1, 1000))  # one-hot target over the 1000 ImageNet classes
y[0, 386] = 1
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['categorical_accuracy'])
model.evaluate(x, y)
# 1/1 [==============================] - 2s 2s/step
# [0.2981376349925995, 1.0]
model.fit(x, y, validation_data=(x, y))
# Train on 1 samples, validate on 1 samples
# Epoch 1/1
# 1/1 [==============================] - 1s 549ms/step - loss: 5.3056 - categorical_accuracy: 0.0000e+00 - val_loss: 0.2981 - val_categorical_accuracy: 1.0000
Notice that the evaluate loss is 0.2981 while the fit loss is 5.3056. My guess is that the Batch Norm layers behave differently in evaluation mode than in training mode. Correct me if I'm wrong.
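Not from the original answer, but here is a minimal sketch of how one could check this (it reuses the model, x and y built above; the bn_layers/before/after names are just for illustration):
# Sketch (my own, not part of the original answer): snapshot the BN weights,
# run one fit step, then compare them.
bn_layers = [l for l in model.layers if type(l).__name__ == 'BatchNormalization']
before = [np.copy(w) for l in bn_layers for w in l.get_weights()]
model.fit(x, y, verbose=0)
after = [w for l in bn_layers for w in l.get_weights()]
print(all(np.allclose(b, a) for b, a in zip(before, after)))
# Expected: True -- gamma/beta and the moving mean/variance stay fixed, yet the
# training loss printed by fit differs from evaluate(), because in training mode
# BN normalizes with the current batch statistics instead of the stored moving averages.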
One way I found to truly freeze the model is to use K.set_learning_phase(0), as follows:
model = ResNet50(weights='imagenet')
K.set_learning_phase(0) # all new operations will be in test mode from now on
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['categorical_accuracy'])  # re-compile the fresh model so fit can run
model.fit(x, y, validation_data=(x, y))
# Train on 1 samples, validate on 1 samples
# Epoch 1/1
# 1/1 [==============================] - 4s 4s/step - loss: 0.2981 - categorical_accuracy: 1.0000 - val_loss: 16.1181 - val_categorical_accuracy: 0.0000e+00
Now the two losses match.
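As a side note (my own sketch, not part of the original answer): since the comment above says that only operations created after the call run in test mode, a safer ordering is to call K.set_learning_phase(0) before building and compiling the model, so the ops fit creates are also built in test mode:
K.set_learning_phase(0)               # switch to test mode before any ops exist
model = ResNet50(weights='imagenet')  # BN layers are now built with inference behavior
model.compile(loss='categorical_crossentropy', optimizer='sgd',
              metrics=['categorical_accuracy'])
model.fit(x, y, validation_data=(x, y))  # the fit loss should now agree with evaluate()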