This repository has been archived by the owner on Jul 7, 2023. It is now read-only.
I was training a transformer model with a real-valued output. I had my problem file set up properly, but when using `.infer` to load a trained model checkpoint and run inference on some input, I was getting very obscure errors about tensor sizes. After some debugging it became apparent that the tensor2tensor framework was looking for a class to output and did not accept that I wanted a real-valued output. It turns out this is due to a small typo in t2t_model: when checking whether the output modality is real, it decides it is not, which causes the decoder part of the library to use fast_decode rather than the slow version that seems suitable for real-valued predictions. It makes this determination by checking whether the output modality name begins with "Real_".
Unfortunately, the names of these output modalities are things like `real_l2_loss_modality`, with a lowercase r...

Hope this helps other people with the same issue!
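The mismatch can be reproduced with a few lines of plain Python. This is only a sketch of the logic described above; the function names are illustrative, not the actual tensor2tensor internals:

```python
# Illustrative sketch of the case-sensitivity bug described above.
# These helper names are hypothetical, not the real t2t_model code.

def is_real_modality_buggy(modality_name: str) -> bool:
    # A check like this never matches, because the actual modality
    # names use a lowercase "real_" prefix.
    return modality_name.startswith("Real_")

def is_real_modality_fixed(modality_name: str) -> bool:
    # Comparing case-insensitively (or just using the lowercase
    # prefix) matches the real modality names.
    return modality_name.lower().startswith("real_")

name = "real_l2_loss_modality"
print(is_real_modality_buggy(name))  # False -> decoder falls through to fast_decode
print(is_real_modality_fixed(name))  # True  -> real-valued path is taken
```

Because the buggy check silently returns False, the model never errors at the check itself; the failure only surfaces later as confusing tensor-size errors inside fast_decode.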