Quantifying the Dynamics of Data Augmentation Hyperparameters for Facial Expression Recognition
Keywords: facial recognition, data augmentation
Automated recognition of facial expressions is a central component of systems deployed in a growing range of domains. Training a model to recognize affect requires large amounts of data, and collecting and labeling that data is often labor-intensive. In recent years, researchers have applied numerous data augmentation strategies to increase the diversity of training datasets. Here, I examine the most common data augmentation strategies to determine which yield higher performance for a facial expression recognition model. I first test each augmentation technique in isolation and compare their performance, then run an ablation study over the augmentation strategies, and finally analyze the effect of dataset size on the marginal contribution of augmentation. I find that augmentation does not always improve performance: when the training set is small, augmentation degrades model performance, and models trained with augmentation begin to outperform models trained without it only once the training set exceeds a certain size. These results highlight the importance of considering dataset size when applying data augmentation in computer vision.
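The abstract does not enumerate the individual augmentation strategies it evaluates. As an illustration only, two widely used label-preserving augmentations for facial images, horizontal flipping and Random Erasing (Zhong et al., cited below), can be sketched in NumPy; the 48x48 grayscale input size and the erase fraction are assumptions chosen for this sketch, not parameters taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def horizontal_flip(img):
    """Mirror the image left-right; expressions remain valid under mirroring."""
    return img[:, ::-1]

def random_erase(img, frac=0.25):
    """Random Erasing (Zhong et al., 2020): zero out a random rectangle
    whose sides are `frac` of the image's height and width."""
    h, w = img.shape[:2]
    eh, ew = int(h * frac), int(w * frac)
    y = rng.integers(0, h - eh + 1)   # random top-left corner of the patch
    x = rng.integers(0, w - ew + 1)
    out = img.copy()
    out[y:y + eh, x:x + ew] = 0.0
    return out

# Hypothetical FER-style input: a single 48x48 grayscale face image.
face = rng.random((48, 48))
augmented = random_erase(horizontal_flip(face))
```

In practice each transform would be applied stochastically per training example (e.g. flipping with probability 0.5), so the model sees a different variant of each face on every epoch; frameworks such as torchvision provide equivalent built-in transforms.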
Tong, Xiaoyun, Songlin Sun, and Meixia Fu. "Data augmentation and second-order pooling for facial expression recognition." IEEE Access 7 (2019): 86821-86828.
Porcu, Simone, Alessandro Floris, and Luigi Atzori. "Evaluation of data augmentation techniques for facial expression recognition systems." Electronics 9.11 (2020): 1892.
Zhong, Zhun, et al. "Random erasing data augmentation." Proceedings of the AAAI conference on artificial intelligence. Vol. 34. No. 07. 2020.
Xu, Tian, et al. "Investigating bias and fairness in facial expression recognition." European Conference on Computer Vision. Springer, Cham, 2020.
Ahmed, Tawsin Uddin, et al. "Facial expression recognition using convolutional neural network with data augmentation." 2019 Joint 8th International Conference on Informatics, Electronics & Vision (ICIEV) and 2019 3rd International Conference on Imaging, Vision & Pattern Recognition (icIVPR). IEEE, 2019.
Kuo, Chieh-Ming, Shang-Hong Lai, and Michel Sarkis. "A compact deep learning model for robust facial expression recognition." Proceedings of the IEEE conference on computer vision and pattern recognition workshops. 2018.
Liu, Ping, et al. "Point adversarial self mining: A simple method for facial expression recognition in the wild." arXiv preprint arXiv:2008.11401 (2020).
Pitaloka, Diah Anggraeni, et al. "Enhancing CNN with preprocessing stage in automatic emotion recognition." Procedia computer science 116 (2017): 523-529.
Goodfellow, Ian, et al. "Generative adversarial networks." Communications of the ACM 63.11 (2020): 139-144.
Wei, Yunchao, et al. "Object region mining with adversarial erasing: A simple classification to semantic segmentation approach." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
Longpre, Shayne, and Ajay Sohmshetty. "Facial keypoint detection." Facial Detection Kaggle competition (2016).
Dileep, Prathima, Bharath Kumar Bolla, and Sabeesh Ethiraj. "Revisiting Facial Key Point Detection: An Efficient Approach Using Deep Neural Networks." arXiv preprint arXiv:2205.07121 (2022).
Agrawal, Abhinav, and Namita Mittal. "Using CNN for facial expression recognition: a study of the effects of kernel size and number of filters on accuracy." The Visual Computer 36.2 (2020): 405-412.
Yang, Huiyuan, Han Yu, and Akane Sano. "Empirical Evaluation of Data Augmentations for Biobehavioral Time Series Data with Deep Learning." arXiv preprint arXiv:2210.06701 (2022).
Bayer, Markus, et al. "Data augmentation in natural language processing: a novel text generation approach for long and short text classifiers." International journal of machine learning and cybernetics (2022): 1-16.
Liu, Zhentao, et al. "A facial expression emotion recognition based human-robot interaction system." IEEE/CAA Journal of Automatica Sinica 4.4 (2017): 668-676.
Spezialetti, Matteo, Giuseppe Placidi, and Silvia Rossi. "Emotion recognition for human-robot interaction: Recent advances and future perspectives." Frontiers in Robotics and AI (2020): 145.
Perez-Gaspar, Luis-Alberto, Santiago-Omar Caballero-Morales, and Felipe Trujillo-Romero. "Multimodal emotion recognition with evolutionary computation for human-robot interaction." Expert Systems with Applications 66 (2016): 42-61.
Deng, Jia, et al. "cGAN based facial expression recognition for human-robot interaction." IEEE Access 7 (2019): 9848-9859.
Chen, Luefeng, et al. "Two-layer fuzzy multiple random forest for speech emotion recognition in human-robot interaction." Information Sciences 509 (2020): 150-163.
Kline, Aaron, et al. "Superpower glass." GetMobile: Mobile Computing and Communications 23.2 (2019): 35-38.
Voss, Catalin, et al. "Superpower glass: delivering unobtrusive real-time social cues in wearable systems." Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct. 2016.
Voss, Catalin, et al. "Effect of wearable digital intervention for improving socialization in children with autism spectrum disorder: a randomized clinical trial." JAMA pediatrics 173.5 (2019): 446-454.
Daniels, Jena, et al. "Exploratory study examining the at-home feasibility of a wearable tool for social-affective learning in children with autism." NPJ digital medicine 1.1 (2018): 32.
Washington, Peter, et al. "A wearable social interaction aid for children with autism." Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems. 2016.
Washington, Peter, et al. "SuperpowerGlass: a wearable aid for the at-home therapy of children with autism." Proceedings of the ACM on interactive, mobile, wearable and ubiquitous technologies 1.3 (2017): 1-22.
Kalantarian, Haik, et al. "A gamified mobile system for crowdsourcing video for autism research." 2018 IEEE international conference on healthcare informatics (ICHI). IEEE, 2018.
Kalantarian, Haik, et al. "The performance of emotion classifiers for children with parent-reported autism: quantitative feasibility study." JMIR mental health 7.4 (2020): e13174.
Penev, Yordan, et al. "A mobile game platform for improving social communication in children with autism: a feasibility study." Applied clinical informatics 12.05 (2021): 1030-1040.
Deveau, Nicholas, et al. "Machine learning models using mobile game play accurately classify children with autism." Intelligence-Based Medicine 6 (2022): 100057.
Chi, Nathan A., et al. "Classifying Autism from Crowdsourced Semi-Structured Speech Recordings: A Machine Learning Approach." arXiv preprint arXiv:2201.00927 (2022).
Zepf, Sebastian, et al. "Driver emotion recognition for intelligent vehicles: A survey." ACM Computing Surveys (CSUR) 53.3 (2020): 1-30.
Lisetti, Christine L., and Fatma Nasoz. "Affective intelligent car interfaces with emotion recognition." Proceedings of 11th International Conference on Human Computer Interaction, Las Vegas, NV, USA. 2005.
Leng, H., Y. Lin, and L. A. Zanzi. "An experimental study on physiological parameters toward driver emotion recognition." Ergonomics and Health Aspects of Work with Computers: International Conference, EHAWC 2007, Held as Part of HCI International 2007, Beijing, China, July 22-27, 2007. Proceedings. Springer Berlin Heidelberg, 2007.
Li, Wenbo, et al. "Cogemonet: A cognitive-feature-augmented driver emotion recognition model for smart cockpit." IEEE Transactions on Computational Social Systems 9.3 (2021): 667-678.
Paikrao, Pavan D., et al. "Smart emotion recognition framework: A secured IOVT perspective." IEEE Consumer Electronics Magazine 12.1 (2021): 80-86.
Katsis, Christos D., et al. "Emotion recognition in car industry." Emotion Recognition: A Pattern Analysis Approach (2015): 515-544.
Xiao, Huafei, et al. "On-road driver emotion recognition using facial expression." Applied Sciences 12.2 (2022): 807.
Van Dyk, David A., and Xiao-Li Meng. "The art of data augmentation." Journal of Computational and Graphical Statistics 10.1 (2001): 1-50.
Shorten, Connor, and Taghi M. Khoshgoftaar. "A survey on image data augmentation for deep learning." Journal of big data 6.1 (2019): 1-48.
Lucey, Patrick, et al. "The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression." 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition-Workshops. IEEE, 2010.
Copyright (c) 2023 Ryan Lin
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.