#tensorflow #conv-neural-network #transfer-learning
Question:
I'm trying to add an attention mechanism on top of a pretrained VGG16 network. When I try to get the output shape of its last layer, I get an error. Here is the code:
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import (Input, BatchNormalization, Conv2D, multiply,
                                     GlobalAveragePooling2D, Lambda, Dropout, Dense)
from tensorflow.keras.models import Model

img_shape = (224,224,3)
in_lay = Input(img_shape)
base_pretrained_model = VGG16(input_shape = img_shape,
                              include_top = False, weights = 'imagenet')
base_pretrained_model.trainable = False
pt_depth = base_pretrained_model.get_output_shape_at(0)[-1]
pt_features = base_pretrained_model(in_lay)
bn_features = BatchNormalization()(pt_features)
attn_layer = Conv2D(64, kernel_size = (1,1), padding = 'same', activation = 'relu')(bn_features)
attn_layer = Conv2D(16, kernel_size = (1,1), padding = 'same', activation = 'relu')(attn_layer)
attn_layer = Conv2D(1,
                    kernel_size = (1,1),
                    padding = 'valid',
                    activation = 'sigmoid')(attn_layer)
up_c2_w = np.ones((1, 1, 1, pt_depth))
up_c2 = Conv2D(pt_depth, kernel_size = (1,1), padding = 'same',
               activation = 'linear', use_bias = False, weights = [up_c2_w])
up_c2.trainable = False
attn_layer = up_c2(attn_layer)
mask_features = multiply([attn_layer, bn_features])
gap_features = GlobalAveragePooling2D()(mask_features)
gap_mask = GlobalAveragePooling2D()(attn_layer)
gap = Lambda(lambda x: x[0]/x[1], name = 'RescaleGAP')([gap_features, gap_mask])
gap_dr = Dropout(0.5)(gap)
dr_steps = Dropout(0.25)(Dense(128, activation = 'elu')(gap_dr))
out_layer = Dense(1, activation = 'sigmoid')(dr_steps)
tb_model = Model(inputs = [in_lay], outputs = [out_layer])
tb_model.compile(optimizer = 'adam', loss = 'binary_crossentropy',
                 metrics = ['binary_accuracy'])
tb_model.summary()
I get an error from the `pt_depth = base_pretrained_model.get_output_shape_at(0)[-1]` line, which says:
RuntimeError: The layer has never been called and thus has no defined output shape.
Answer #1:
Instead of
pt_depth = base_pretrained_model.get_output_shape_at(0)[-1]
try this:
pt_depth = base_pretrained_model.layers[-1].output_shape[-1]
Since include_top=False, the last layer is block5_pool (MaxPooling2D), whose output shape is (None, 7, 7, 512); taking the last entry of that shape gives pt_depth = 512, the channel depth needed for the attention layers.
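A minimal sketch to sanity-check the shape logic, assuming TensorFlow 2.x with Keras bundled as tensorflow.keras. Here weights=None is used only to skip the ImageNet download; the output shapes are identical with weights='imagenet'. The `.output.shape` attribute is used because it works on a model that has been built but never called, which is exactly the situation that made get_output_shape_at fail:

```python
from tensorflow.keras.applications import VGG16

# Build the headless VGG16; weights=None avoids the ImageNet download,
# shapes are unaffected by the choice of weights.
base = VGG16(input_shape=(224, 224, 3), include_top=False, weights=None)

# With include_top=False the final layer is block5_pool (MaxPooling2D),
# whose output tensor has shape (None, 7, 7, 512).
print(tuple(base.output.shape))

# The channel depth is the last entry of that shape.
pt_depth = int(base.output.shape[-1])
print(pt_depth)  # 512
```

This pt_depth value is what sizes the 1x1 up_c2 convolution in the question's attention block.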