Computer vision artificial intelligence has a problem with bias

Sight is a miracle: an interplay of reflection, refraction, and the messages decoded by nerves in the brain.

When you look at an object, you are really seeing light reflected off it, entering your cornea as wavelengths of light. As it enters the cornea, the light is refracted, or bent, toward the thin, filmy lens, which refracts it further. The lens is finely tuned: it focuses the light into a smaller, sharper beam aimed at the retina. In the retina, light stimulates photoreceptor cells called rods and cones. Think of rods and cones as microscopic translators: they turn light into electrical impulses that are sent to the brain.

The impulses shoot down the optic nerve to the visual cortex, where the image, which arrives inverted, is turned right side up. The cortex then interprets these signals and allows you to make meaningful decisions about them: “Look, it’s a dog!”

Sight is obviously not new to humans, but now computers are also learning to see. In fact, they are on the cusp of a new era – an era of vision.

Computer vision is a form of artificial intelligence (AI) focused on teaching computers to understand and interpret images.

The history of computer vision dates back to the late 1950s, with two scientists, a firing neuron, and a cat.

David Hubel and Torsten Wiesel were investigating how neurons in a cat’s visual cortex responded to simple stimuli: small spots of light, or a black dot on a transparent glass slide, projected onto a screen. After many frustrating attempts without any useful readings, the two made an accidental discovery. As the cat watched, one of the researchers moved the glass slide a bit too far, sweeping its faint edge across the screen. That single line, moving at a particular angle, caused one of the cat’s neurons to fire. This error changed our view of visual processing.

How? The researchers found that particular neurons in the visual cortex were responsible for responding to specific orientations, such as lines and angles. These and subsequent studies showed how the visual system builds an image up from simple stimuli into more complex representations. A happy accident established the basis for deep learning models, especially those used in computer vision.

Computer vision advanced steadily in the 1980s. In 1982, David Marr established an algorithmic framework for vision that could identify corners, edges, and other distinct visual features. Kunihiko Fukushima’s Neocognitron introduced a simple yet powerful self-organizing neural network model capable of recognizing patterns. These convolutional neural networks proved very effective for image recognition, but they were difficult to apply to high-resolution images, making network training very time-consuming.

So what really got computer vision off the ground?

An AI competition in 2012.

At the time, typical top-5 error rates for visual recognition hovered around 26% (the top-5 error rate is the fraction of test images for which the correct label is not among the model’s 5 most likely guesses), and that percentage seemed stuck. Then AlexNet arrived. The University of Toronto team created a convolutional neural network, a deep learning model that identifies images by assigning learnable weights and biases to elements of an image, and it blew past previous results with a top-5 error rate of 15.3%.
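To make the metric concrete, here is a minimal Python sketch (not taken from the ImageNet benchmark code) of how a top-5 error rate can be computed from a model’s per-class scores; the scores and labels below are purely illustrative.

```python
import numpy as np

def top5_error_rate(scores: np.ndarray, true_labels: np.ndarray) -> float:
    """Fraction of samples whose true label is NOT among the 5 highest-scoring classes.

    scores: (n_samples, n_classes) array of model confidences.
    true_labels: (n_samples,) array of integer class indices.
    """
    # Indices of the 5 largest scores per row (order within the top 5 doesn't matter).
    top5 = np.argsort(scores, axis=1)[:, -5:]
    hits = (top5 == true_labels[:, None]).any(axis=1)
    return 1.0 - hits.mean()

# Toy example: 4 samples, 10 classes, random scores.
rng = np.random.default_rng(0)
scores = rng.random((4, 10))
labels = np.array([3, 7, 1, 9])
print(f"top-5 error: {top5_error_rate(scores, labels):.1%}")
```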

We have reached the point where, like humans, computers can see. But the problem in computer vision is not what computers can see, but rather what they cannot.

Computer vision depends on deep learning, a subfield of machine learning. In order to refine a computer’s “view”, it needs to be fed data – a lot of data. But there is a problem with this data: it is often biased.

In 2018, Joy Buolamwini published “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification”. It sounds like a mouthful, but Buolamwini’s Gender Shades project changed the way we think about complexion and computer vision. The study measured the accuracy of three major, widely used commercial gender classification algorithms (including those from Microsoft and IBM) across four groups: lighter-skinned males, lighter-skinned females, darker-skinned males, and darker-skinned females. Buolamwini found that, overall, each program was more accurate on individuals with lighter skin tones, with the error gap between lighter and darker skin ranging from 11.8% to 19.2%. That in itself was cause for concern: the software did not work as accurately on darker-skinned individuals as it did on lighter-skinned individuals.

Next, Buolamwini broke down the accuracy by gender and skin tone together. All three algorithms performed best on lighter-skinned men: Microsoft reached 100% accuracy, and even the lowest, Face++, was 99.2% accurate.

But then the programs revealed a more disturbing trend.

For women with darker skin, accuracy rates were up to 34% lower than for men with lighter skin. In fact, of the faces misgendered by Microsoft, 93.6% were dark skinned.

Buolamwini then broke the results down by more specific tones using the Fitzpatrick Skin Type system and found that as women’s skin tone darkened, accuracy became essentially a coin toss: about 50%.
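The core idea behind an audit like Gender Shades is disaggregated accuracy: measuring performance separately for each intersection of skin tone and gender rather than reporting one overall number. Below is a minimal Python sketch of that idea, not Buolamwini’s code; the record fields and category names are assumptions for illustration.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: dicts with hypothetical keys 'skin_tone', 'gender', 'true', 'pred'."""
    totals, correct = defaultdict(int), defaultdict(int)
    for r in records:
        group = (r["skin_tone"], r["gender"])
        totals[group] += 1
        correct[group] += int(r["true"] == r["pred"])
    # Accuracy per (skin tone, gender) subgroup.
    return {g: correct[g] / totals[g] for g in totals}

# Toy data; a real audit would use a labeled benchmark dataset.
sample = [
    {"skin_tone": "lighter", "gender": "male", "true": "male", "pred": "male"},
    {"skin_tone": "darker", "gender": "female", "true": "female", "pred": "male"},
    {"skin_tone": "darker", "gender": "female", "true": "female", "pred": "female"},
]
for group, acc in accuracy_by_group(sample).items():
    print(group, f"{acc:.0%}")
```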

Moreover, image-recognition AI can easily fall victim to harmful stereotypes in its classifications. A 2021 study from Carnegie Mellon University and George Washington University developed an approach to detect biased associations between concepts such as race, gender, and occupation in image models. The researchers analyzed two computer vision models: iGPT and SimCLR. In the Gender-Career test, which measures the association between gender and career attributes, men were matched with concepts like “office” and “business”, while women were matched with “children” and “home”. These results reflected an incredibly strong bias.

The researchers found that both models had statistically significant racial biases. When testing racial associations with objects, iGPT and SimCLRv2 paired “White” with tools, while “Black” was paired with weapons. Both models associated “Arab-Muslim” individuals more strongly with unpleasant concepts than “European Americans”, while iGPT rated lighter skin tones as more “pleasant”.
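The study adapted word-embedding association tests to image embeddings. As a rough illustration of the underlying statistic, here is a generic differential-association score over cosine similarities in Python; it is a sketch of the general embedding association test formula, not the authors’ implementation, and the random vectors merely stand in for features a model like iGPT or SimCLR would produce.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(w, A, B):
    """Mean similarity of embedding w to attribute set A minus to attribute set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def differential_association(X, Y, A, B):
    """How much more strongly target set X (vs. Y) associates with attributes A (vs. B)."""
    return sum(association(x, A, B) for x in X) - sum(association(y, A, B) for y in Y)

# Toy 4-dimensional "embeddings" standing in for image features.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 4))   # e.g. images of one demographic group
Y = rng.normal(size=(5, 4))   # e.g. images of another group
A = rng.normal(size=(5, 4))   # e.g. images representing "pleasant" attributes
B = rng.normal(size=(5, 4))   # e.g. images representing "unpleasant" attributes
print("differential association:", differential_association(X, Y, A, B))
```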

This notion that lighter skin tones are somehow more “pleasant” has also come under scrutiny on many social media platforms, and it reflects a deeper problem with colorism in society. In 2017, the popular photo-editing app FaceApp was criticized for its “hot” filter, which claimed to make users more attractive by lightening their complexions. In other words, to make people look better, the AI system made them look lighter.

Colorism has long been detrimental to BIPOC groups and still plays an active and destructive role in society today. Colorism is a form of discrimination in which lighter-skinned individuals are treated more favorably than those with darker skin. Much of this discrimination arose out of ideas of white supremacy and Eurocentrism. Research suggests that when slavery was endemic in the United States, lighter-skinned slaves with distinctly “European” features were treated less harshly, or received slightly more “favorable” treatment (as if any treatment as a slave could be considered favorable).

One of the most infamous examples of this discrimination in the United States was the paper bag test. If a Black person’s skin was darker than a paper bag, they would not be allowed to enter certain spaces or take certain jobs; if their skin was lighter, those opportunities would magically open up to them. Over time, these notions of colorism permeated all aspects of American life, adversely affecting employment prospects, mental health, legal proceedings, and more.

And AI perpetuates these stereotypes and this abuse.

So how can we solve this problem? How are we working to make computer vision more inclusive and less biased?

The answer lies in repairing databases.

The accuracy of machine-learning-based AI depends entirely on the data that feeds it. If you feed a program millions of turtle pictures, it will become very good at identifying turtle pictures. But if you show it a single picture of a snake, the model won’t know what it is.

The same is true of race. Numerous image databases, including ImageNet, one of the most widely used, are predominantly white and lighter-skinned. In Gender Shades, Buolamwini found that some datasets were over 85% lighter-skinned, in a world where billions of people have darker skin. Simply put, our databases lack diversity, and artificial intelligence fails because of it. The color scale commonly used in AI, the Fitzpatrick skin type, was not even created to identify skin tone, but to rank which skin types are most prone to sunburn. This system greatly simplifies color, classifying all shades into just six groups. Currently, Google and other groups are reworking skin classification in hopes of refining the way computers see different races.
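A practical first step toward repairing databases is simply measuring how their annotations are distributed, so imbalances like the ones Buolamwini found are visible before a model is ever trained. Here is a minimal Python sketch, assuming each image record carries a skin-tone annotation (the field name and Fitzpatrick-style categories are hypothetical):

```python
from collections import Counter

def skin_tone_distribution(metadata):
    """metadata: list of dicts, each with a hypothetical 'skin_tone' label
    (e.g. Fitzpatrick types 'I'..'VI'). Returns the share of each label."""
    counts = Counter(record["skin_tone"] for record in metadata)
    total = sum(counts.values())
    return {label: count / total for label, count in counts.items()}

# Toy metadata illustrating a heavily skewed dataset.
metadata = [{"skin_tone": t} for t in ["II"] * 70 + ["III"] * 15 + ["V"] * 10 + ["VI"] * 5]
for label, share in sorted(skin_tone_distribution(metadata).items()):
    print(f"Type {label}: {share:.0%}")
```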

Today more than ever, we recognize the beauty and importance of diversity in our society. In the 1960s and 1970s, students fought for ethnic studies programs in universities. Cultural parks like the San Pedro Creek Cultural Park celebrate a diverse heritage. And workforce diversity is at an all-time high in the USA.

So why not bring this diversity to AI?

