This week: seq2seq.
I - Various sequence-to-sequence architectures
e.g. machine translation: map an input sentence x to an output sentence y
encoder network: a many-to-one RNN that reads the whole input sentence and compresses it into a fixed-size vector
decoder network: a one-to-many RNN that starts from that vector and generates the output sentence one token at a time
This architecture also works for image captioning: use a ConvNet as the encoder (its feature vector plays the role of the encoder RNN's final state), and keep an RNN decoder to generate the caption.
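The encoder/decoder split above can be sketched with two tiny NumPy RNNs. This is a minimal illustration, not a trained model: all weights are random, the sizes are toy numbers, and a real system would learn the parameters and stop decoding at an end-of-sentence token.

```python
import numpy as np

rng = np.random.default_rng(0)
H, E, V = 8, 4, 10  # hidden size, embedding size, vocab size (toy numbers)

# Randomly initialized parameters; a real model would train these.
Wxh_enc, Whh_enc = rng.normal(size=(H, E)), rng.normal(size=(H, H))
Wxh_dec, Whh_dec = rng.normal(size=(H, E)), rng.normal(size=(H, H))
Why = rng.normal(size=(V, H))
embed = rng.normal(size=(V, E))

def encode(src_tokens):
    """Many-to-one RNN: read the whole source, keep only the final state."""
    h = np.zeros(H)
    for tok in src_tokens:
        h = np.tanh(Wxh_enc @ embed[tok] + Whh_enc @ h)
    return h

def decode(h, max_len=5, bos=0):
    """One-to-many RNN: start from the encoder state, emit tokens greedily."""
    out, tok = [], bos
    for _ in range(max_len):
        h = np.tanh(Wxh_dec @ embed[tok] + Whh_dec @ h)
        tok = int(np.argmax(Why @ h))  # pick the highest-scoring next token
        out.append(tok)
    return out

translation = decode(encode([1, 2, 3]))
```

For image captioning, `encode` would be replaced by a ConvNet feature extractor producing the same kind of fixed-size vector `h`.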
Difference between seq2seq and generating new text with a language model: seq2seq doesn't randomly choose a translation, but …
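The trailing thought above presumably contrasts sampling with picking the most likely output; that completion is my assumption. A hedged sketch of the two decoding behaviors over a toy next-word distribution (per-word greedy argmax stands in for a full search over whole sentences):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # toy scores for three candidate next words
p = softmax(logits)

# Language-model-style generation: sample from p, so output varies run to run.
rng = np.random.default_rng(0)
sampled = int(rng.choice(len(p), p=p))

# Translation-style decoding: always take the most likely word.
greedy = int(np.argmax(p))  # index 0 here, since logits[0] is largest
```

Greedy word-by-word choice is only an approximation; production systems search over whole output sentences (e.g. beam search) to find a high-probability translation.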