Speech-driven 3D facial animation has been widely studied, yet a gap to realism and vividness remains due to the highly ill-posed nature of the problem and the scarcity of audio-visual data. Existing works typically formulate the cross-modal mapping as a regression task, which suffers from the regression-to-mean problem and thus produces over-smoothed facial motions. In this paper, we propose to cast speech-driven facial animation as a code query task in the finite proxy space of a learned codebook, which effectively promotes the vividness of the generated motions by reducing the cross-modal mapping uncertainty. The codebook is learned by self-reconstruction over real facial motions and is thus embedded with realistic facial motion priors. Over this discrete motion space, a temporal autoregressive model sequentially synthesizes facial motions from the input speech signal, which guarantees lip-sync as well as plausible facial expressions. We demonstrate that our approach outperforms current state-of-the-art methods both qualitatively and quantitatively. A user study further confirms its superiority in perceptual quality.
CodeTalker first learns a discrete, context-rich facial motion codebook by self-reconstruction over real facial motions (a minimal sketch of this stage is given below).
It then autoregressively synthesizes facial motions via code query, conditioned on both the speech signal and past motion codes.
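The first stage is essentially a VQ-style self-reconstruction. The snippet below is a minimal PyTorch sketch of that idea, not the authors' implementation: the module names, per-frame MLP encoder/decoder, layer sizes, commitment weight, and the motion dimensionality (here 5023 vertices × 3) are illustrative assumptions.

```python
# Minimal sketch of stage 1: learning a discrete facial-motion codebook by
# VQ-style self-reconstruction. All names, sizes, and the 0.25 commitment
# weight are assumptions for illustration, not the authors' exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionCodebook(nn.Module):
    def __init__(self, num_codes=256, code_dim=64, motion_dim=15069):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(motion_dim, 512), nn.ReLU(),
                                     nn.Linear(512, code_dim))
        self.decoder = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                                     nn.Linear(512, motion_dim))
        self.codebook = nn.Embedding(num_codes, code_dim)

    def forward(self, motion):                        # motion: (B, T, motion_dim)
        z = self.encoder(motion)                      # continuous features (B, T, code_dim)
        flat = z.reshape(-1, z.size(-1))
        dist = torch.cdist(flat, self.codebook.weight)
        idx = dist.argmin(dim=-1).view(z.shape[:-1])  # nearest codebook entry per frame
        z_q = self.codebook(idx)                      # quantized features
        codebook_loss = F.mse_loss(z_q, z.detach())   # pull codes toward encoder outputs
        commit_loss = F.mse_loss(z, z_q.detach())     # pull encoder toward chosen codes
        z_q = z + (z_q - z).detach()                  # straight-through estimator
        recon = self.decoder(z_q)                     # self-reconstructed motion
        loss = F.mse_loss(recon, motion) + codebook_loss + 0.25 * commit_loss
        return recon, idx, loss
```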
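For the second stage, the sketch below illustrates one way to realize autoregressive code query with a transformer decoder: per-frame speech features act as cross-attention memory, while previously generated code indices are fed back as the target sequence. The speech-feature dimensionality (e.g. from a wav2vec-style encoder), layer sizes, and greedy decoding are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of stage 2: autoregressive code query conditioned on speech
# features and past motion codes. Architecture details are assumptions.
import torch
import torch.nn as nn

class SpeechToCode(nn.Module):
    def __init__(self, num_codes=256, d_model=128, audio_dim=768):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, d_model)          # per-frame speech features
        self.code_embed = nn.Embedding(num_codes + 1, d_model)   # +1 for a start token
        self.start_token = num_codes
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, num_codes)                # logits over codebook entries

    @torch.no_grad()
    def generate(self, audio_feats):                  # audio_feats: (1, T, audio_dim)
        memory = self.audio_proj(audio_feats)         # speech conditioning (cross-attention)
        codes = torch.full((1, 1), self.start_token, dtype=torch.long,
                           device=audio_feats.device)
        for _ in range(audio_feats.size(1)):          # one code per motion frame
            tgt = self.code_embed(codes)
            L = tgt.size(1)                           # causal mask over past codes
            mask = torch.triu(torch.full((L, L), float('-inf'), device=tgt.device),
                              diagonal=1)
            out = self.decoder(tgt, memory, tgt_mask=mask)
            next_code = self.head(out[:, -1]).argmax(dim=-1, keepdim=True)
            codes = torch.cat([codes, next_code], dim=1)
        return codes[:, 1:]                           # (1, T) predicted code indices
```

The predicted indices would then be looked up in the learned codebook and passed through the stage-1 decoder to obtain the final facial motions.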
Visual comparisons of sampled facial motions animated by different methods on the VOCA (left) and BIWI (right) datasets. The upper part shows facial animations conditioned on different parts of the speech, while the lower part plots the temporal statistics (mean and standard deviation) of adjacent-frame motion variations within a sequence.
@article{xing2023codetalker,
author = {Xing, Jinbo and Xia, Menghan and Zhang, Yuechen and Cun, Xiaodong and Wang, Jue and Wong, Tien-Tsin},
title = {CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior},
journal = {arXiv preprint arXiv:2301.02379},
year = {2023},
}