Removed padding at the end of decoder input, as this caused undesired output
The decoder inputs were padded with an extra output, weighted zero, to match the length of the targets. Because the test code generates a longer output sequence than the training code, any padding is undesirable, so it was removed. This required several refactors:

- The model's inputs were renamed from encoder/decoder to observations/future. The targets contain the last timestep while the decoder inputs do not: decoder input = [GO] + future[0:-2] and targets = future[1:-1], so neither contains the whole future sequence, since the GO token is prepended at data generation, not inside the model. With decoder_inputs renamed, encoder_inputs became observations for consistency.
- The GO datapoint has also been removed from the graph plot.

I speculate that the padding caused the longer test sequences to collapse. Testing required.
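For illustration, a minimal sketch of building decoder inputs and targets with GO prepended at data-generation time rather than inside the model. The names (`GO`, `make_decoder_pair`) and the exact slice choices are assumptions following the common one-step teacher-forcing shift, not the repository's actual code:

```python
import numpy as np

# Assumed start-of-sequence sentinel; a real setup might use a vector or token id.
GO = 0.0

def make_decoder_pair(future):
    """Prepend GO to form decoder inputs; targets are the shifted future.

    decoder_inputs[t] is fed to the decoder to predict targets[t], so the
    two sequences already have equal length and no zero-weighted padding
    step is needed to make them match.
    """
    future = np.asarray(future, dtype=float)
    decoder_inputs = np.concatenate([[GO], future[:-1]])  # GO + all but last step
    targets = future                                       # every step, one ahead
    return decoder_inputs, targets

dec_in, tgt = make_decoder_pair([1.0, 2.0, 3.0, 4.0, 5.0])
print(dec_in)  # [0. 1. 2. 3. 4.]
print(tgt)     # [1. 2. 3. 4. 5.]
```

Because the lengths line up by construction, variable-length test sequences need no trailing pad step, which is the situation the removal above restores.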