ValueError: Layer lstm_3 expects 35 inputs, but it received 3 input tensors

#tensorflow #keras #deep-learning #lstm #encoder-decoder

Question:

I am trying to build a sequence-to-sequence encoder-decoder network for language translation (English to French). I am using three BLSTM layers with dropout as the encoder and a single LSTM as the decoder.

Building and fitting the model works fine, but I keep getting an error in the inference model.

The error says:

 ValueError: Layer lstm_3 expects 35 inputs, but it received 3 input tensors. Inputs received: [<tf.Tensor 'embedding_1/embedding_lookup_25/Identity_1:0' shape=(None, None, 128) dtype=float32>, <tf.Tensor 'input_87:0' shape=(None, 128) dtype=float32>, <tf.Tensor 'input_88:0' shape=(None, 128) dtype=float32>]
  
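Since the inference code further below rebuilds the encoder and decoder by indexing into model_loaded.layers, a quick inspection loop helps confirm which index corresponds to which layer. This is only a minimal sketch, assuming model_loaded is the trained model reloaded from disk:

 # Print the index, name and class of every layer so that hard-coded indices
 # like layers[3], layers[5], layers[6] and layers[7] can be double-checked.
 for i, layer in enumerate(model_loaded.layers):
     print(i, layer.name, type(layer).__name__)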

This is my model:

 from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model

latent_dim = 128

# Encoder
encoder_inputs = Input(shape=(max_length_english,))
enc_emb = Embedding(vocab_size_source, latent_dim, trainable=True)(encoder_inputs)

# Encoder LSTM 1
encoder_lstm1 = LSTM(latent_dim, recurrent_dropout=0.6, return_sequences=True, return_state=True)
encoder_output1, state_h1, state_c1 = encoder_lstm1(enc_emb)

# Encoder LSTM 2
encoder_lstm2 = LSTM(latent_dim, recurrent_dropout=0.6, return_sequences=True, return_state=True)
encoder_output2, state_h2, state_c2 = encoder_lstm2(encoder_output1)

# Encoder LSTM 3
encoder_lstm3 = LSTM(latent_dim, recurrent_dropout=0.6, return_sequences=True, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm3(encoder_output2)

# Set up the decoder
decoder_inputs = Input(shape=(None,))
dec_emb_layer = Embedding(vocab_size_target, latent_dim, trainable=True)
dec_emb = dec_emb_layer(decoder_inputs)

# Decoder LSTM, using the final encoder states as its initial state
decoder_lstm = LSTM(latent_dim, recurrent_dropout=0.6, return_sequences=True, return_state=True)
decoder_outputs, decoder_fwd_state, decoder_back_state = decoder_lstm(dec_emb, initial_state=[state_h, state_c])

# Dense softmax layer over the target vocabulary
decoder_dense = Dense(vocab_size_target, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Define the model
model1 = Model([encoder_inputs, decoder_inputs], decoder_outputs)
  
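The model is then fitted with teacher forcing: the decoder receives the target sequence with a start token prepended, and the unshifted integer token ids serve as sparse labels. Below is a minimal sketch of such a fit call, where encoder_input_data, decoder_input_data and decoder_target_data are placeholder arrays rather than names from my code (the compile call is quoted further down):

 # encoder_input_data:  (num_samples, max_length_english)  source token ids
 # decoder_input_data:  (num_samples, max_length_target)   target ids with a start token prepended
 # decoder_target_data: (num_samples, max_length_target)   target ids shifted one step to the left
 model1.fit([encoder_input_data, decoder_input_data],
            decoder_target_data,
            batch_size=64,
            epochs=100,
            validation_split=0.1)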

And this is my inference model:

 latent_dim = 128

# Encoder inference model
encoder_inputs = model_loaded.input[0]                             # encoder input layer
encoder_outputs, state_h, state_c = model_loaded.layers[6].output  # outputs of the third encoder LSTM

print(encoder_outputs.shape)

encoder_model = Model(inputs=encoder_inputs, outputs=[encoder_outputs, state_h, state_c])

# Decoder inference model
# The tensors below hold the states of the previous time step
decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_hidden_state_input = Input(shape=(32, latent_dim))

# Get the embeddings of the decoder sequence
decoder_inputs = model_loaded.layers[3].output

print(decoder_inputs.shape)
dec_emb_layer = model_loaded.layers[5]

dec_emb2 = dec_emb_layer(decoder_inputs)

# To predict the next word in the sequence, set the initial states to the states from the previous time step
decoder_lstm = model_loaded.layers[7]
decoder_outputs2, state_h2, state_c2 = decoder_lstm(dec_emb2, initial_state=[decoder_state_input_h, decoder_state_input_c])

# A dense softmax layer to generate the probability distribution over the target vocabulary
decoder_dense = model_loaded.layers[8]
decoder_outputs = decoder_dense(decoder_outputs2)

# Final decoder model
decoder_model = Model(
    [decoder_inputs] + [decoder_hidden_state_input, decoder_state_input_h, decoder_state_input_c],
    [decoder_outputs2] + [state_h2, state_c2])
  
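Once both sub-models build correctly, they are normally chained in a greedy decoding loop at translation time. The sketch below assumes the decoder model's first output is the Dense softmax distribution over the target vocabulary (i.e. decoder_outputs rather than decoder_outputs2), and the lookups target_word_index / reverse_target_word_index as well as the 'start' and 'end' markers are placeholders, not names from my code:

 import numpy as np

 def decode_sequence(input_seq, max_decoded_length=50):
     # Encode the source sentence once and keep the final encoder states.
     enc_out, h, c = encoder_model.predict(input_seq)

     # Start decoding from the start-of-sequence token.
     target_seq = np.array([[target_word_index['start']]])
     decoded_words = []

     for _ in range(max_decoded_length):
         output_tokens, h, c = decoder_model.predict([target_seq, enc_out, h, c])

         # Greedy choice: take the most probable token at the last time step.
         sampled_index = int(np.argmax(output_tokens[0, -1, :]))
         sampled_word = reverse_target_word_index[sampled_index]
         if sampled_word == 'end':
             break
         decoded_words.append(sampled_word)

         # Feed the sampled token back in and carry the updated states forward.
         target_seq = np.array([[sampled_index]])

     return ' '.join(decoded_words)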

For the optimizer I use rmsprop, and for the loss sparse_categorical_crossentropy:

 model1.compile(optimizer='rmsprop',
               loss='sparse_categorical_crossentropy',
               metrics=['accuracy'])
  
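The "early stopping" line in the log below comes from an EarlyStopping callback passed to fit. A minimal sketch, assuming the callback monitors val_loss (the patience value is a guess):

 from tensorflow.keras.callbacks import EarlyStopping

 # verbose=1 makes Keras print the "Epoch 000xx: early stopping" message
 # seen in the training log.
 es = EarlyStopping(monitor='val_loss', mode='min', patience=5, verbose=1)

 # Passed to the fit call sketched above as callbacks=[es].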

And finally, after 57 epochs I get this val_loss and val_accuracy:

 Epoch 57/100
55/55 [==============================] - 197s 4s/step - loss: 0.7188 - accuracy: 0.8474 - val_loss: 0.9559 - val_accuracy: 0.8271
Epoch 00057: early stopping