GANDiffFace: Controllable Generation of Synthetic Datasets for Face Recognition with Realistic Variations
Entity: UAM. Departamento de Tecnología Electrónica y de las Comunicaciones
Publisher: IEEE
Date: 2023-12-25
Citation: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2023. 3078-3087
ISBN: 979-8-3503-0744-3
DOI: 10.1109/ICCVW60793.2023.00333
Funded by: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860813 - TReSPAsS-ETN. This study is supported by the project INTER-ACTION (PID2021-126521OB-I00 MICINN/FEDER). This research is based upon work supported by the Hessian Ministry of the Interior and Sport in the course of the Bio4ensics project, and by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
Project: info:eu-repo/grantAgreement/EC/H2020/860813/EU/TReSPAsS-ETN; Gobierno de España. PID2021-126521OB-I00
Editor's Version: https://doi.org/10.1109/ICCVW60793.2023.00333
Subjects: diffusion; face recognition; generative ai; stylegan; synthetic; Telecomunicaciones
Rights: © 2023 IEEE
Abstract:
Face recognition systems have advanced significantly in recent years, driven by the availability of large-scale datasets. However, several issues have recently arisen, including privacy concerns that have led to the discontinuation of well-established public datasets. Synthetic datasets have emerged as a solution, even though current synthesis methods present other drawbacks such as limited intra-class variations, lack of realism, and unfair representation of demographic groups. This study introduces GANDiffFace, a novel framework for the generation of synthetic datasets for face recognition that combines the power of Generative Adversarial Networks (GANs) and Diffusion models to overcome the limitations of existing synthetic datasets. In GANDiffFace, we first propose the use of GANs to synthesize highly realistic identities and meet target demographic distributions. Subsequently, we fine-tune Diffusion models with the images generated with GANs, synthesizing multiple images of the same identity with a variety of accessories, poses, expressions, and contexts. We generate multiple synthetic datasets by changing GANDiffFace settings, and compare their mated and non-mated score distributions with the distributions provided by popular real-world datasets for face recognition, i.e., VGG2 and IJB-C. Our results show the feasibility of the proposed GANDiffFace, in particular the use of Diffusion models to enhance the (limited) intra-class variations provided by GANs towards the level of real-world datasets.
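The evaluation described in the abstract rests on mated (same-identity) versus non-mated (different-identity) comparison scores. The following is a minimal illustrative sketch, not the paper's code: it assumes face embeddings compared with cosine similarity, and uses toy random vectors as stand-ins for real embeddings.

```python
# Illustrative sketch only: mated vs. non-mated score distributions
# under a cosine-similarity comparator. Embeddings here are toy
# random vectors, not outputs of an actual face recognition model.
import numpy as np

rng = np.random.default_rng(0)

def cosine_scores(a, b):
    """Row-wise cosine similarity between two embedding matrices."""
    return np.sum(a * b, axis=1) / (
        np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    )

# Toy stand-ins: mated pairs are noisy copies of the same identity
# vector; non-mated pairs are independent identity vectors.
identities = rng.normal(size=(100, 512))
mated = cosine_scores(identities,
                      identities + 0.3 * rng.normal(size=identities.shape))
non_mated = cosine_scores(identities, rng.normal(size=(100, 512)))

# A useful synthetic dataset should reproduce the separation seen in
# real datasets: mated scores concentrated well above non-mated ones.
print(mated.mean() > non_mated.mean())  # True
```

Comparing these two empirical distributions (e.g., their means, spreads, and overlap) against those measured on VGG2 and IJB-C is the kind of check the abstract refers to when assessing whether Diffusion-enhanced intra-class variation approaches real-world levels.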
Authors: Melzi, Pietro; Rathgeb, Christian; Tolosana Moranchel, Rubén; Vera Rodríguez, Rubén; Lawatsch, Dominik; Domin, Florian; Schaubert, Maxim