Seq2Seq
1. Tokenization & Build Dictionary
Eng_Tokenizer: "I_am_okay." => ['i', '_', 'a', 'm', ..., 'a', 'y']
Deu_Tokenizer: "Es geht mir gut" => ['e', 's', '_', ..., 'u', 't']
Question: Why 2 different tokenizers and dictionaries?
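A minimal sketch of the two steps above in pure Python (the names `char_tokenize` and `build_dict` are illustrative, not from the notes): each language gets its own tokenizer and dictionary because the source and target alphabets differ, e.g. German has umlauts and ß that English lacks, so a shared character dictionary would waste ids and mix unrelated symbols.

```python
def char_tokenize(text):
    """Lowercase the text and split it into characters, writing spaces as '_'."""
    return list(text.lower().replace(" ", "_"))

def build_dict(token_lists):
    """Map each distinct character to an integer id, in first-seen order."""
    vocab = {}
    for tokens in token_lists:
        for tok in tokens:
            if tok not in vocab:
                vocab[tok] = len(vocab)
    return vocab

eng_tokens = char_tokenize("I am okay.")      # ['i', '_', 'a', 'm', ...]
deu_tokens = char_tokenize("Es geht mir gut") # ['e', 's', '_', ...]

# Two separate dictionaries, one per language:
eng_dict = build_dict([eng_tokens])
deu_dict = build_dict([deu_tokens])
```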
2. One-Hot Encoding

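One-hot encoding turns each character id into a vector that is all zeros except for a 1 at that id's position, so a sentence becomes a (sequence length, vocabulary size) matrix. A short sketch with NumPy; the toy `vocab` below stands in for the dictionaries built in step 1:

```python
import numpy as np

vocab = {'e': 0, 's': 1, '_': 2, 'g': 3}  # toy stand-in dictionary

def one_hot(tokens, vocab):
    """Return a (seq_len, vocab_size) matrix with one 1 per character."""
    mat = np.zeros((len(tokens), len(vocab)), dtype=np.float32)
    for t, tok in enumerate(tokens):
        mat[t, vocab[tok]] = 1.0
    return mat

x = one_hot(['e', 's', '_', 'g'], vocab)  # shape (4, 4)
```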
3. Training Seq2Seq Model


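During training the decoder is fed the target sequence itself (teacher forcing): its input is the target prefixed with a start token, and its training label is the same target shifted one step, ending in a stop token. A sketch of that data preparation, where '\t' and '\n' as start/stop markers are an assumption of this example, not taken from the notes:

```python
START, STOP = '\t', '\n'  # assumed start-of-sequence / end-of-sequence tokens

def make_training_pair(source, target):
    """Teacher forcing: the decoder sees the target shifted right by one
    start token, and must predict the target shifted left by one step."""
    encoder_input = list(source)
    decoder_input = [START] + list(target)
    decoder_output = list(target) + [STOP]
    return encoder_input, decoder_input, decoder_output

enc_in, dec_in, dec_out = make_training_pair("i_am_okay.", "es_geht_mir_gut")
```

At every position t the decoder input is the character the model should already have produced, and the label at t is the next character, so the two sequences are offset by exactly one step.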
4. Inference




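At inference time there is no target to feed the decoder, so generation is a loop: start from the start token, predict one character, feed it back in, and stop at the stop token (or a length limit). A greedy-decoding sketch; `next_token` is a hypothetical stand-in for one decoder step (a real model would run the RNN on the encoder state plus the characters generated so far):

```python
# Hypothetical toy "model": maps the last generated char to the next one,
# hard-coded here to spell out "gut". Not part of the original notes.
TOY_MODEL = {'\t': 'g', 'g': 'u', 'u': 't', 't': '\n'}

def next_token(prefix):
    """Stand-in for a decoder step: look up the next char from the last one."""
    return TOY_MODEL[prefix[-1]]

def greedy_decode(start='\t', stop='\n', max_len=20):
    """Feed the decoder its own previous output until the stop token."""
    out = [start]
    for _ in range(max_len):
        tok = next_token(out)
        if tok == stop:
            break
        out.append(tok)
    return ''.join(out[1:])  # drop the start token
```

The `max_len` cap guards against a model that never emits the stop token.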
Summary
