Color Vision Deficiency
Color vision deficiency takes a variety of forms. Over 1% of the overall population has some type of color deficiency, and men are far more likely than women to be affected; in fact, it is very rare for a woman to have a red-green color deficiency (see Wikipedia's Color vision deficiency article for more information).
To understand this deficiency, consider the following images:
The image on the left contains the number "45" represented as shades of colors. The image on the right is a straightforward grayscale conversion. People with a certain red-green deficiency will see the image on the left as it appears on the right (if you don't see a difference between these two images, please check your monitor, or your eyes). The image on the left is a plate from the Ishihara Color Test (see Wikipedia's Ishihara color test for more information).
This page explores the test, and how we can use this knowledge to create images that are more clearly visible to those who have these deficiencies.
Ishihara Color Test
The Ishihara Color Test is a test for red-green color deficiencies. It was named after its designer, Dr. Shinobu Ishihara, a professor at the University of Tokyo, who first published his tests in 1917.
From Wikipedia: "The test consists of a number of colored plates containing a circle of dots randomized in color and size. Within the randomized pattern are dots which form a number visible to those with normal color vision and invisible, or difficult to see, for those with a red-green color vision defect. The full test consists of thirty-eight plates, but the existence of a deficiency is usually clear after a few plates. Testing the first 24 plates gives a more accurate diagnosis of the level of severity one's color vision defect may be."
"Common plates include a circle of dots in shades of green and light blues with a figure differentiated in shades of brown or a circle of dots in shades of red, orange and yellow with a figure in shades of green; the first testing for protanopia and the second for deuteranopia."
First, let's look at converting images to "scales of gray" or grayscale for short. Here is a Python function that will go through all of the pixels in an image, get the pixel's color as a grayscale value, and set the pixel:
def color2gray(picture):
    for pixel in getPixels(picture):
        x, y = getX(pixel), getY(pixel)
        gray = getGray(picture, x, y)
        setGray(picture, x, y, gray)
You can save the color image on this page as "Color_vision_orig.jpg" and use the following code:
from myro import *

picture = makePicture("Color_vision_orig.jpg")
show(picture)
color2gray(picture)
show(picture)
That should convert the left image into the right:
As we see in Chapter 9 of the textbook (p. 224), the conversion from color to gray is actually very simple. An image appears in grayscale when the red, green, and blue components are the same. One could create a (bad) grayscale by simply setting the other two components to the same value as the third. For example:
def color2grayBad(picture):
    for pixel in getPixels(picture):
        red, green, blue = getRGB(pixel)
        setRGB(pixel, (blue, blue, blue))
This conversion is bad, as it simply throws out the red and green components of the image, creating a grayscale based only on the blue component. This would make a normal photograph look very strange. Try it!
A better method of converting an image to grayscale is to balance the red, green, and blue components. One way to do that is just to average them for each pixel. You could do that with the following:
def color2grayAve(picture):
    for pixel in getPixels(picture):
        red, green, blue = getRGB(pixel)
        gray = (red + green + blue) / 3
        setRGB(pixel, (gray, gray, gray))
That produces a much better grayscale for photographs. Artists and scientists argue that the human eye doesn't see reds, greens, and blues equally well, so a better grayscale image might adjust the weights of each component from being equal to being more heavily weighted on green. There isn't a correct set of weightings, and each person has their own preference to match their sensors (eyes). Here is a common weighting:
def color2grayBetter(picture):
    for pixel in getPixels(picture):
        red, green, blue = getRGB(pixel)
        gray = red * 0.30 + green * 0.59 + blue * 0.11
        setRGB(pixel, (gray, gray, gray))
What are your preferences? Try it! Make sure that the weightings add up to 1. Why?
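To see why the weights must sum to 1, consider a pixel that is already gray: its red, green, and blue components are equal, and a weighted sum whose weights total 1 leaves that value unchanged, so grays stay at the same brightness. Here is a standalone sketch (plain Python, no Myro required; the helper name `weighted_gray` is just for illustration):

```python
# Common luma-style weights; note they sum to 1.0.
WEIGHTS = (0.30, 0.59, 0.11)

def weighted_gray(red, green, blue, weights=WEIGHTS):
    """Return the weighted grayscale value of an RGB triple."""
    wr, wg, wb = weights
    return red * wr + green * wg + blue * wb

# A pixel that is already gray (equal components) keeps its level...
print(weighted_gray(200, 200, 200))  # 200.0
# ...but only because the weights sum to 1. Doubled weights double it:
print(weighted_gray(200, 200, 200, (0.60, 1.18, 0.22)))  # 400.0 -- far too bright
```

Weights summing to more than 1 brighten the whole image (and can overflow past 255); weights summing to less than 1 darken it.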
How could any of this help a color blind person see the difference between shades of red and green? One could weight the reds and greens very differently. For example, we could throw out the reds entirely, like so:
def color2grayBad(picture):
    for pixel in getPixels(picture):
        red, green, blue = getRGB(pixel)
        gray = (green + blue) / 2
        setRGB(pixel, (gray, gray, gray))
But that, again, would make a photograph look terrible, and it doesn't help anyone see red things.
We now know that red-green color blindness is a problem because shades of red and green look the same to a person with such a deficiency. Another solution is to make a grayscale image based on brightness of a color, rather than on the color itself. In order to do this, we are going to switch from the red-green-blue colorspace to one based on hue, saturation, and brightness.
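As a sketch of what this colorspace switch looks like, Python's standard colorsys module can do the conversion (it works on components scaled to the range 0.0 to 1.0, and calls brightness "value"; the wrapper name `rgb_to_hsv255` is just for illustration):

```python
import colorsys

def rgb_to_hsv255(red, green, blue):
    """Convert 0-255 RGB components to (hue, saturation, value), each 0.0-1.0."""
    return colorsys.rgb_to_hsv(red / 255.0, green / 255.0, blue / 255.0)

# Pure red: a red hue, fully saturated, full brightness.
print(rgb_to_hsv255(255, 0, 0))      # (0.0, 1.0, 1.0)
# Mid-gray: no hue and no saturation, only brightness.
print(rgb_to_hsv255(128, 128, 128))
```

The key point is that two colors that are confusable by hue, such as a red and a green of similar shade, can still be separated by their brightness component.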
In the hue-saturation-brightness colorspace, hue describes the color itself (red, green, blue, and so on), saturation describes how vivid the color is, and brightness describes how light or dark it is. Two colors that are hard to tell apart by hue, such as a red and a green, may still differ in brightness, and that is what this conversion exploits.
The idea here is to go through the image twice: the first time to compute the brightness of each pixel and find the overall range of brightness, and the second time to scale each brightness to the maximum possible.
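The two-pass idea can be tried on a plain list of brightness values first: pass one finds the range, pass two stretches every value onto the full 0-255 scale. Here is a standalone sketch (no Myro required; it assumes at least two distinct values, so the range is nonzero):

```python
def stretch_to_255(values):
    """Scale a list of brightness values so they span the full 0-255 range."""
    minimum = min(values)   # pass 1: find the range
    maximum = max(values)
    span = float(maximum - minimum)
    # pass 2: rescale each value onto 0-255
    return [int(255.0 * (v - minimum) / span) for v in values]

print(stretch_to_255([0.0, 0.5, 1.0]))  # [0, 127, 255]
```

The full function below does the same thing pixel by pixel, with an extra correction that bases the upper end of the range on a blend of the mean and maximum brightness, which keeps a few very bright pixels from washing out the rest of the image.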
def color2grayWithCorrection(picture):
    minimum = 100.0
    maximum = -100.0
    total = 0.0
    # save gray value for later:
    gray = [[0 for y in range(getHeight(picture))] for x in range(getWidth(picture))]
    for pixel in getPixels(picture):
        red, green, blue = getRGB(pixel)
        hue, saturation, brightness = rgb2hsv(red, green, blue)
        g = 0
        if saturation == 0.0:
            g = 1.5 * brightness
        else:
            g = float(brightness + brightness * saturation)
        minimum = min(g, minimum)
        maximum = max(g, maximum)
        total += g
        x, y = getX(pixel), getY(pixel)
        gray[x][y] = g
    mean = total / (getWidth(picture) * getHeight(picture))
    minimum = 0.0
    maximum = (mean + maximum) * 0.5
    for pixel in getPixels(picture):
        x, y = getX(pixel), getY(pixel)
        # Correction:
        lightness = 0.95 * 255.0 * (gray[x][y] - minimum) / (maximum - minimum)
        # No correction:
        # lightness = 255.0 * (gray[x][y] - minimum) / (maximum - minimum)
        lightness = int(max(min(lightness, 255.0), 0.0))
        setRGB(pixel, (lightness, lightness, lightness))
This will create the following image.
You can make adjustments in the algorithm to create more enhanced images. Try it with some of the pictures from Wikipedia.
- http://moinmo.in/FeatureRequests/EnhanceTransclusionSyntaxForImages - details on building a wiki for those who have vision deficiencies.