TY - GEN
T1 - Conditional expression synthesis with face parsing transformation
AU - Lu, Zhihe
AU - Hu, Tanhao
AU - Song, Lingxiao
AU - Zhang, Zhaoxiang
AU - He, Ran
N1 - Publisher Copyright:
© 2018 Association for Computing Machinery.
PY - 2018/10/15
Y1 - 2018/10/15
N2 - Facial expression synthesis with various intensities is a challenging task due to large variations in identity appearance and a paucity of efficient means for intensity measurement. This paper advances the expression synthesis domain by introducing a Couple-Agent Face Parsing based Generative Adversarial Network (CAFP-GAN) that unites knowledge of facial semantic regions with controllable expression signals. Specifically, we employ a face parsing map as a controllable condition to guide facial texture generation for a specific expression, providing a semantic representation of every pixel in the facial regions. Our method consists of two sub-networks: a face parsing prediction network (FPPN), which uses controllable labels (expression and intensity) to transform the face parsing map of the input neutral face to match those labels, and a facial expression synthesis network (FESN), which incorporates the pretrained FPPN to provide the face parsing map as guidance for expression synthesis. To enhance the realism of the results, couple-agent discriminators are employed to distinguish fake-real pairs in both sub-networks. Moreover, only the neutral face and the labels are needed to synthesize an unseen expression at different intensities. Experimental results on three popular facial expression databases show that our method is highly capable of continuous expression synthesis.
AB - Facial expression synthesis with various intensities is a challenging task due to large variations in identity appearance and a paucity of efficient means for intensity measurement. This paper advances the expression synthesis domain by introducing a Couple-Agent Face Parsing based Generative Adversarial Network (CAFP-GAN) that unites knowledge of facial semantic regions with controllable expression signals. Specifically, we employ a face parsing map as a controllable condition to guide facial texture generation for a specific expression, providing a semantic representation of every pixel in the facial regions. Our method consists of two sub-networks: a face parsing prediction network (FPPN), which uses controllable labels (expression and intensity) to transform the face parsing map of the input neutral face to match those labels, and a facial expression synthesis network (FESN), which incorporates the pretrained FPPN to provide the face parsing map as guidance for expression synthesis. To enhance the realism of the results, couple-agent discriminators are employed to distinguish fake-real pairs in both sub-networks. Moreover, only the neutral face and the labels are needed to synthesize an unseen expression at different intensities. Experimental results on three popular facial expression databases show that our method is highly capable of continuous expression synthesis.
KW - Expression synthesis
KW - Face parsing
KW - Generative adversarial network
UR - https://www.scopus.com/pages/publications/85058237728
U2 - 10.1145/3240508.3240647
DO - 10.1145/3240508.3240647
M3 - Conference contribution
AN - SCOPUS:85058237728
T3 - MM 2018 - Proceedings of the 2018 ACM Multimedia Conference
SP - 1083
EP - 1091
BT - MM 2018 - Proceedings of the 2018 ACM Multimedia Conference
PB - Association for Computing Machinery, Inc
T2 - 26th ACM Multimedia Conference, MM 2018
Y2 - 22 October 2018 through 26 October 2018
ER -