Authors
Vikram V Ramaswamy, Sunnie SY Kim, Olga Russakovsky
Publication date
2021
Conference
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Pages
9301-9310
Description
Fairness in visual recognition is becoming a prominent and critical topic of discussion as recognition systems are deployed at scale in the real world. Models trained from data in which target labels are correlated with protected attributes (e.g., gender, race) are known to learn and exploit those correlations. In this work, we introduce a method for training accurate target classifiers while mitigating biases that stem from these correlations. We use GANs to generate realistic-looking images, and perturb these images in the underlying latent space to generate training data that is balanced for each protected attribute. We augment the original dataset with this generated data, and empirically demonstrate that target classifiers trained on the augmented dataset exhibit a number of both quantitative and qualitative benefits. We conduct a thorough evaluation across multiple target labels and protected attributes in the CelebA dataset, and provide an in-depth analysis and comparison to existing literature in the space. Code can be found at https://github.com/princetonvisualai/gan-debiasing.
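The description sketches the core mechanism: perturb GAN latents so that each generated image gets a paired image with the protected attribute flipped but the target attribute preserved, and add these pairs to the training set. The following is a minimal, hypothetical NumPy sketch of that idea, not the authors' exact closed-form perturbation. It assumes a pretrained GAN decoder (here called generator) and linear hyperplanes fit in latent space by auxiliary classifiers: w_t for the target attribute and (w_g, b_g) for the protected attribute. All names and values are placeholders.

```python
# Hedged sketch of latent-space de-biasing: for a GAN latent z, find z' whose
# protected-attribute score (w_g . z + b_g) is negated while the target-attribute
# score (w_t . z) is unchanged, by moving along the part of w_g orthogonal to w_t.
import numpy as np

def flip_protected_keep_target(z, w_g, b_g, w_t):
    """Return a perturbed latent with the protected score flipped, target score kept."""
    w_t_hat = w_t / np.linalg.norm(w_t)
    # Direction that changes the protected score but leaves the target score fixed.
    d = w_g - (w_g @ w_t_hat) * w_t_hat
    # Step size solving  w_g . (z + a*d) + b_g = -(w_g . z + b_g).
    a = -2.0 * (w_g @ z + b_g) / (w_g @ d)
    return z + a * d

# Usage (placeholder hyperplanes and latent dimensionality):
rng = np.random.default_rng(0)
dim = 512                                  # latent size of the hypothetical GAN
w_t = rng.normal(size=dim)                 # target-attribute hyperplane normal
w_g, b_g = rng.normal(size=dim), 0.0       # protected-attribute hyperplane
z = rng.normal(size=dim)
z_prime = flip_protected_keep_target(z, w_g, b_g, w_t)
# images = generator(z), generator(z_prime)   # decode both; label with the target
#                                             # score so the augmented data is
#                                             # balanced per protected attribute.
```

Training the downstream target classifier on the original dataset plus such balanced synthetic pairs is what the description refers to as augmenting the original dataset with generated data.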
Total citations
Per-year citation counts (2021–2024) were shown as a chart in the original listing; the individual counts are not recoverable from the extraction.