Use of Error Level Analysis (ELA) and Convolutional Neural Networks (CNN) to identify altered images
By Danielle Gewurz, Michael Greene, Kelly Lewis, and Steven Hanes
A conference poster delivered at the American Statistical Association’s Conference on Statistical Practice on February 2, 2022.
Increasingly, agencies are receiving digitally altered documents making identity verification a common challenge for many government benefits programs. Manually reviewing each is infeasible, and automated identification of potential alterations such as cut-and-paste would save time.
In this use case, multiple images are combined to create a new ID featuring a new photo and personal information. In the example shown,1 both pictures on the ID were overlayed with a new headshot and the name was altered.
While the use case is detection of altered IDs, we trained on the CASIA IDTE V2 dataset2 depicting tampered images of everyday objects to avoid sensitive information. The dataset contains 5,123 realistically altered images that emulate common manipulation techniques as well as 7,491 unaltered images, divided into an 80/20 training and testing split. We also tested the models on seven authentic and eight tampered driver’s license templates.
After performing a discrete cosine transform (DCT), the leading digits of the quantized coefficients follow generalized Benford’s law,3 where the frequency distribution of leading digits is skewed toward smaller numbers. The Benford’s-law model extracts the image’s deviation from this expected distribution as a feature and uses it to predict the probability of alteration: the larger the divergence, the more likely the image has been altered.
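The Benford’s-law feature described above can be sketched as follows. This is a minimal illustration, not the poster’s implementation: the 8×8 block size, the quantization step `q`, the generalized-Benford parameters `beta` and `gamma`, and the squared-error divergence are all assumed choices.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis, applied per 8x8 block as in JPEG.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] /= np.sqrt(2)
    return M * np.sqrt(2.0 / n)

def leading_digit_hist(image, q=16):
    """First-digit frequencies of quantized block-DCT coefficients."""
    n = 8
    D = dct_matrix(n)
    h, w = image.shape
    h, w = h - h % n, w - w % n
    blocks = image[:h, :w].reshape(h // n, n, w // n, n).swapaxes(1, 2)
    coeffs = D @ blocks @ D.T              # 2-D DCT of every 8x8 block
    qc = np.abs(np.round(coeffs / q)).ravel()
    qc = qc[qc > 0]                        # zero coefficients have no leading digit
    digits = (qc / 10 ** np.floor(np.log10(qc))).astype(int)
    return np.bincount(digits, minlength=10)[1:10] / len(digits)

def benford_divergence(image, beta=1.0, gamma=1.0):
    # Generalized Benford's law (assumed parametric form):
    # p(d) proportional to log10(1 + 1 / (beta + d**gamma)), d = 1..9.
    d = np.arange(1, 10)
    p = np.log10(1 + 1.0 / (beta + d ** gamma))
    p /= p.sum()
    f = leading_digit_hist(image)
    # Squared-error divergence between observed and expected digit frequencies;
    # larger values suggest a higher likelihood of alteration.
    return float(np.sum((f - p) ** 2))
```

The divergence value would then be fed to a classifier (or thresholded) to score an image’s probability of alteration.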
Error level analysis (ELA) and convolutional neural networks (CNN)
Error level analysis4 is a preprocessing technique that compares the original image to a recompressed version. The image is colored based on the JPEG compression ratio at each pixel; a difference in the level of compression artifacts across the image may indicate that it was altered. Once the image has been fed through ELA, in grayscale or in color, we train a Convolutional Neural Network to identify whether the image has been altered.
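A minimal ELA preprocessing step can be sketched with Pillow and NumPy. The JPEG `quality` and the amplification `scale` here are illustrative defaults, not the poster’s settings:

```python
import io

import numpy as np
from PIL import Image

def error_level_analysis(img, quality=90, scale=15):
    """Recompress `img` as JPEG and return the amplified per-pixel difference.

    Regions with different compression histories (e.g., a pasted headshot)
    tend to show a different error level than the rest of the image.
    """
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    buf.seek(0)
    recompressed = Image.open(buf)
    diff = np.abs(
        np.asarray(img.convert("RGB"), dtype=np.int16)
        - np.asarray(recompressed, dtype=np.int16)
    )
    # Amplify and clip so subtle error-level differences become visible.
    return np.clip(diff * scale, 0, 255).astype(np.uint8)
```

The resulting ELA image (grayscale or color) is what the CNN is trained on.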
SHapley Additive exPlanations (SHAP) explainability
In order to assist reviewers who are validating images, we used the SHAP explainer5 to create a heat map for each image, highlighting regions that contributed to a positive or negative prediction. To visualize the results on the original image, the team used the “inpaint_telea” masker method in SHAP, with explainer hyperparameters of 300 max evaluations and a batch size of 50.
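The idea behind such a heat map can be illustrated with a simple occlusion-sensitivity sketch, a much simplified stand-in for SHAP’s image explainer (it does not use the “inpaint_telea” masker or Shapley values). Each patch is masked in turn, and the drop in the model’s altered-image score becomes that patch’s attribution; the `patch` size and zero-masking are assumed choices:

```python
import numpy as np

def occlusion_heatmap(predict, image, patch=16):
    """Occlusion sensitivity map over `image`.

    `predict` is any function mapping an image array to a scalar
    altered-image score. Patches whose masking lowers the score the most
    receive the highest attribution, highlighting suspect regions.
    """
    base = predict(image)
    h, w = image.shape[:2]
    heat = np.zeros((h, w))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0   # occlude one patch
            heat[i:i + patch, j:j + patch] = base - predict(masked)
    return heat
```

SHAP’s image explainer refines this idea with principled masking (e.g., Telea inpainting instead of zeroing) and Shapley-value attribution, which is why the poster used it for reviewer-facing heat maps.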
Both models performed well on the CASIA test set. The use of a CNN on ELA preprocessed images outperformed basic unsupervised Benford’s analysis by 7.8%.
Image forensic work can be challenging and does not have a universal solution. Combining traditional image forensic techniques with statistical and machine learning techniques can point to alterations with a high degree of reliability. Adapting from existing work, it is possible to train well-performing algorithms to identify alterations. SHAP can be used to offer a visual explainability component for such efforts.
1 License template: http://s.driving-tests.org/img/license/south-dakota-drivers-license.jpg Face: https://susanqq.github.io/UTKFace/.
2 Yue Zheng, “modified CASIA,” IEEE Dataport, October 11, 2019, DOI: https://dx.doi.org/10.21227/c1h8-kf39.
3 Nicolò Bonettini et al., “On the use of Benford’s law to detect GAN-generated images,” 2020 25th International Conference on Pattern Recognition (ICPR), pp. 5495–5502, IEEE, 2021.
4 Ida Bagus Kresna Sudiatmika et al., “Image forgery detection using error level analysis and deep learning,” TELKOMNIKA 17, no. 2 (2018): p. 653, DOI: 10.12928/telkomnika.v17i2.8976.
5 Scott Lundberg and Su-In Lee, “A unified approach to interpreting model predictions,” Advances in Neural Information Processing Systems 30 (2017): pp. 4765–74.
If you have questions, please contact the authors or Tasha Austin at email@example.com to discuss.