We have introduced the Input-Output Temporal Restricted Boltzmann Machine, a probabilistic model for learning mappings between sequences, and presented two variants of the model: one with pairwise interactions and one with third-order multiplicative interactions. Our experiments so far are limited to dynamic facial expression transfer, but nothing restricts the model to this domain.
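To make the distinction between the two variants concrete, the following is a minimal sketch of how the hidden-unit pre-activations differ; this is an illustrative simplification, not the paper's exact parameterization (the temporal history terms, biases on the visible units, and all dimension names here are assumptions for the sake of the example):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: output frame, input frame, hidden units, factors.
ny, nx, nh, nf = 5, 4, 6, 3
y = rng.normal(size=ny)  # output (target) frame
x = rng.normal(size=nx)  # input (source) frame

# Pairwise variant: input and output each contribute additively
# to the hidden units through separate weight matrices.
W = rng.normal(size=(nh, ny))
A = rng.normal(size=(nh, nx))
b = np.zeros(nh)
h_pairwise = sigmoid(W @ y + A @ x + b)

# Third-order variant (factored): the input multiplicatively gates the
# output-to-hidden interaction through a set of shared factors, giving a
# three-way (output, input, hidden) interaction at pairwise cost.
Wy = rng.normal(size=(nf, ny))
Wx = rng.normal(size=(nf, nx))
Wh = rng.normal(size=(nf, nh))
h_third_order = sigmoid(Wh.T @ ((Wy @ y) * (Wx @ x)) + b)
```

The factored form replaces a full three-way weight tensor with three factor matrices, which is the standard trick for keeping multiplicative interactions tractable.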
Current methods for facial expression transfer are unable to factor out style in the retargeted motion, making it difficult to adjust the emotional content of the resulting facial animation. We are therefore interested in exploring extensions of our model that include style-based contextual variables.
Zeiler, Matthew D., Graham W. Taylor, Leonid Sigal, Iain Matthews, and Rob Fergus. “Facial Expression Transfer with Input-Output Temporal Restricted Boltzmann Machines.” In Advances in Neural Information Processing Systems, pp. 1629–1637. 2011.