Published: Fri, March 15, 2019
IT | By Lester Massey

IBM under fire for using Flickr photos for facial-recognition project

Millions of people around the world are helping to train AI facial-recognition software without their knowledge or consent, new research has shown. Their photos were part of YFCC100M, a dataset of 99.2 million photos and 0.8 million videos drawn from Flickr. All the photos were shared under a Creative Commons license, which is typically a signal that they can be freely used, with some limitations.

Despite IBM's assurances that Flickr users can opt out of the database, NBC News discovered that it's nearly impossible to get photos removed.

But some of the photographers whose images were included in IBM's dataset were surprised when NBC News told them that their photographs had been annotated with details including facial geometry and skin tone and may be used to develop facial recognition algorithms.

The only problem? The dataset isn't publicly available; it is shared only with researchers, so Flickr users and the people featured in their images have no real way of knowing whether they're included.

Diversity in Faces is also intended for academic use, rather than improving IBM's commercial offerings, according to the company.

John Smith, who oversees AI research at IBM, said that the company was committed to "protecting the privacy of individuals" and "will work with anyone who requests a URL to be removed from the dataset".

IBM has promised to remove people from its dataset, but discovering whether an image of you is in the database in the first place is where things get tricky. "You are laundering the IP and privacy rights out of the faces", one critic says.

Peverill-Conti, one such photographer, had a total of 700 images that he originally uploaded to Flickr used in the dataset. "It seems a little sketchy that IBM can use these pictures without saying anything to anybody", he said.

Europe's General Data Protection Regulation (GDPR) and Illinois' Biometric Information Privacy Act (BIPA) provide data protections that could expose companies sharing photos or biometric data to penalties, but the legal merits of such a claim have yet to be tested.

The report comes as the ethical implications of AI and facial recognition are increasingly a subject of public debate, one that biometrics providers must heed and participate in, or risk the long-term viability of the industry. As one observer put it: "Facial recognition can be incredibly harmful when it's inaccurate and incredibly oppressive the more accurate it gets".