Society is paying more attention than ever to the question of bias in artificial intelligence systems, and particularly those used to recognize and analyze images of faces.
IBM is taking the following actions to ensure facial recognition technology is built and trained responsibly:
1) One of the biggest issues causing bias in the area of facial analysis is the lack of diverse data to train systems on. So, this fall, IBM will make publicly available the following as a tool for the technology industry and research community:
•A facial attribute and identity training dataset of over one million images, built by IBM Research scientists to improve the training of facial analysis systems. It is annotated with both attributes and identity, leveraging geo-tags from Flickr images to balance data across multiple countries and active learning tools to reduce sample selection bias.
Currently, the largest facial attribute dataset available contains 200,000 images, so this new dataset of one million images will be a monumental improvement. Additionally, datasets available today include either attributes (hair color, facial hair, etc.) or identity (identifying that, say, five images are of the same person), but not both. This new dataset combines the two, enabling a single capability that matches attributes to an individual.
•A dataset of 36,000 facial images, equally distributed across ethnicities, genders, and ages, to provide a more diverse benchmark for evaluating facial analysis technologies. This will specifically help algorithm designers identify and address bias in their facial analysis systems. The first step in addressing bias is knowing that a bias exists, and that is what this dataset will enable.
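A balanced evaluation set of the kind described above makes a simple audit possible: compute the error rate separately for each demographic group and compare. The sketch below is a minimal illustration of that idea, not IBM's methodology; the group labels and evaluation results are hypothetical.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the per-group error rate of a facial analysis system.

    `records` is a list of (group, correct) pairs, where `group` is a
    demographic label and `correct` indicates whether the system's
    prediction was right for that image.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, correct in records:
        totals[group] += 1
        if not correct:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical results on a test set balanced across two groups.
results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]
rates = error_rates_by_group(results)

# A large gap between the best- and worst-served groups signals a bias
# worth investigating; a balanced dataset makes this comparison fair.
gap = max(rates.values()) - min(rates.values())
```

Because the evaluation images are equally distributed across groups, a disparity in these per-group rates reflects the system's behavior rather than an imbalance in the test data.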
2) Earlier this year, IBM substantially increased the accuracy of its Watson Visual Recognition service for facial analysis, demonstrating a nearly ten-fold decrease in error rate.
IBM continues to drive improvements. On September 14, 2018, IBM Research, in collaboration with the University of Maryland, will hold a technical workshop on identifying and reducing bias in facial analysis, in conjunction with ECCV 2018.
The results of the competition using the IBM facial image dataset will be announced at the workshop. Furthermore, IBM researchers continue to work with a broad range of stakeholders, users and experts to understand other biases and vulnerabilities that can affect AI decision-making, so that we can continue to make our systems better.
As the adoption of AI increases, the issue of preventing bias from entering into AI systems is rising to the forefront. We believe no technology — no matter how accurate — can or should replace human judgement, intuition and expertise.
The power of advanced innovations, like AI, lies in their ability to augment, not replace, human decision-making. It is therefore critical that any organization using AI — including visual recognition or video analysis capabilities — train the teams working with it to understand bias, including implicit and unconscious bias, monitor for it, and know how to address it.