As with many new technologies, there’s currently a lot of speculation surrounding the application of facial recognition capabilities, such as Amazon Rekognition.
The e-commerce giant has defended Rekognition against claims of racial and gender bias, following a study published by researchers at the Massachusetts Institute of Technology.
The study found that Rekognition had an error rate of 31% when identifying the gender of images of women with dark skin. This compared with a 22.5% rate from Kairos, which offers a rival commercial product, and a 17% rate from IBM. By contrast, Amazon, Microsoft and Kairos all successfully identified images of light-skinned men 100% of the time.
However, Matt Wood, general manager of AI at Amazon Web Services, hit back, saying the researchers had studied an outdated version of Rekognition and that the company was continually improving the product.
He also pointed out that when the service was used by law enforcement, Amazon recommended a 99% confidence threshold. The percentage indicates how confident the system is in its result.
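To illustrate what such a confidence threshold means in practice, here is a minimal Python sketch; the data, function name and match format are hypothetical, not Amazon's actual API. Matches scoring below the 99% cut-off are simply discarded rather than acted on.

```python
# Hypothetical illustration of a confidence threshold: each candidate
# match carries a confidence score (a percentage), and only matches at
# or above the threshold are kept. The data below is invented.

def filter_matches(matches, threshold=99.0):
    """Keep only matches whose confidence (in percent) meets the threshold."""
    return [m for m in matches if m["confidence"] >= threshold]

candidates = [
    {"person_id": "A", "confidence": 99.4},
    {"person_id": "B", "confidence": 87.2},
    {"person_id": "C", "confidence": 99.0},
]

high_confidence = filter_matches(candidates)
print([m["person_id"] for m in high_confidence])  # prints ['A', 'C']
```

At a 99% threshold, a system can still be wrong; the cut-off only excludes matches the model itself reports as uncertain, which is why Amazon also advises against using such results as the sole basis for identification.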
In a blog post, Wood raised several further concerns about the study's methodology.
He said the findings from MIT did not reflect Amazon’s own research, which had used 12,000 images of men and women of six different ethnicities.
“Across all ethnicities, we found no significant difference in accuracy with respect to gender classification,” he wrote.
He also said the company advised law enforcement to use machine-learning facial-recognition results only when the confidence of the result was 99% or higher, and never as the sole source of identification.
“Keep in mind that our benchmark is not very challenging. We have profile images of people looking straight into a camera. Real-world conditions are much harder,” said MIT researcher Joy Buolamwini in a Medium post responding to Dr Wood’s criticisms.