Door Identification -- Color Filtering

Eyal Amir and Pedrito Maynard-Reid II

The input to the algorithm is three intensity images corresponding to the RGB signals. The algorithm first filters out of the image all pixels whose color is sufficiently far from a specified basis color, after adjusting for lighting conditions.

The algorithm first shrinks the image by half (the images we processed are typically of dimensions 1000x1200), then converts it to HSV space. To find pixels whose colors are close to that of the door, we pre-selected Hue and Saturation values; pixels whose Hue or Saturation differ too much from these values are filtered out. The Value plane is used to adjust the prespecified Hue and Saturation values for light reflection. The following two figures show an example input and output of this process.

Figure 1: Original Image (0.25*size)
Figure 2: Filtered B/W Image (0.5*size)
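As a rough sketch of this filtering stage (in Python rather than the MATLAB used originally), the pipeline might look like the following. The function name, tolerance values, and synthetic colors are illustrative assumptions, not the actual parameters used:

```python
import colorsys
import numpy as np

def filter_door_pixels(rgb, hue0, sat0, hue_tol, sat_tol):
    """Keep pixels whose HSV hue/saturation are close to the door's.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    hue0, sat0: pre-selected door hue and saturation (both in [0, 1]).
    Returns a boolean mask: True where the pixel may belong to the door.
    """
    # Shrink the image by half, as the text describes for 1000x1200 inputs.
    small = rgb[::2, ::2, :]
    h, w, _ = small.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            hu, sa, _ = colorsys.rgb_to_hsv(*small[i, j])
            # Filter out pixels whose hue or saturation is too far
            # from the pre-selected door values (hue wraps around).
            hue_dist = min(abs(hu - hue0), 1.0 - abs(hu - hue0))
            mask[i, j] = hue_dist <= hue_tol and abs(sa - sat0) <= sat_tol
    return mask

# Tiny synthetic test: top half a hypothetical "oak brown", bottom half blue.
img = np.zeros((4, 4, 3))
img[:2] = (0.55, 0.35, 0.10)   # door-like brown (assumed, for illustration)
img[2:] = (0.10, 0.20, 0.80)   # non-door blue
h0, s0, _ = colorsys.rgb_to_hsv(0.55, 0.35, 0.10)
m = filter_door_pixels(img, h0, s0, hue_tol=0.05, sat_tol=0.15)
print(m[0, 0], m[1, 1])  # door pixel kept, blue pixel rejected
```

The per-pixel loop is for clarity only; a real implementation would vectorize the RGB-to-HSV conversion over whole planes.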

Several problems are evident in color-based filtering. These problems are rooted in the possible (and likely) variation of colors due to lighting conditions. Shading, the fact that the door surfaces are not Lambertian, the angle and distance of the door relative to the camera and the light sources, the presence of different possible light sources (sun/fluorescent), and different reflections from other nearby objects (and the ground) are all possible interferences.

The first technical problem is encountered if one wishes to filter using RGB coordinates. Due to different lighting conditions, the RGB values of doors may vary significantly from image to image, and even over the same door in a single image. Experiments show that the Blue plane is very distinctive for the Gates building's oak-brown doors (which have extremely small--often 0--values in the Blue plane), but we decided against using it because it is not completely reliable and is too ad hoc: it works only for the oak-brown doors.

There are many other color coordinate systems, one of the most widely used of which is the Hue-Saturation-Value (HSV) space. Roughly speaking, in this space Hue identifies the color, Saturation identifies the amount of "white" in the color, and Value stands for the amount of light (intensity). We decided to use HSV mostly because it was readily available (in XV [J. Bradley 1994]) and easily computed in MATLAB (via a built-in conversion function), but other systems exist.
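This decomposition can be seen with Python's standard colorsys module (a small illustration, not part of the original system): three reds share the same Hue, while mixing in white lowers Saturation and dimming lowers Value.

```python
import colorsys

# Same hue (red), varying "whiteness" and intensity:
samples = {"pure red": (1.0, 0.0, 0.0),
           "pale red": (1.0, 0.6, 0.6),   # white mixed in -> lower S
           "dark red": (0.4, 0.0, 0.0)}   # less light -> lower V
reds = {name: colorsys.rgb_to_hsv(*rgb) for name, rgb in samples.items()}
for name, (h, s, v) in reds.items():
    print(f"{name}: H={h:.2f} S={s:.2f} V={v:.2f}")
```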

The second problem with filtering is the existence of varying illumination conditions. Using HSV helps us to some degree here, but cannot solve the problem by itself. Although the Value plane absorbs some of the influence of illumination, the radiance of the light source has a color of its own, which influences the Hue of the door (which therefore varies slightly and independently), and the intensity of the illumination has a significant influence on the Saturation.

Experiments show that, for almost every image, one may find HSV values and threshold values such that the filtering process works well for that image. Unfortunately, for our application (robot vision) one cannot calibrate these values off-line for every image, so our algorithm must be able to adjust them automatically. We do such automatic adjustment using the Value plane. The intuition is that since the color saturation varies inversely with the irradiance at a particular spot on the door, subtracting a function (we experimentally chose adjustS=-(imgV*imgV/4)) of the Value plane from the Saturation plane should yield compensation that works across different lighting conditions. For the Hue, we found that the fluorescent light inside the building has a small influence, which is corrected by adding a small fraction of the Value plane (imgV/20).
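The adjustment above can be sketched as follows. The sign convention is our reading of the text (adjustS = -(imgV*imgV/4) is subtracted from the Saturation plane, so brighter, washed-out spots regain saturation), and the example plane values are hypothetical:

```python
import numpy as np

def adjust_for_light(imgH, imgS, imgV):
    """Compensate the Hue and Saturation planes using the Value plane.

    adjustS = -(imgV*imgV/4) is subtracted from Saturation, i.e. the
    compensated saturation is S + V^2/4, so brightly lit spots (which
    appear less saturated) are pulled back up.  A small fraction of
    Value (imgV/20) is added to Hue to offset the fluorescent light.
    All planes are float arrays with values in [0, 1].
    """
    adjustS = -(imgV * imgV) / 4.0
    compS = imgS - adjustS           # = S + V^2/4
    compH = imgH + imgV / 20.0
    return compH, compS

# Two spots on the same door: a brightly lit one and a dim one.
H = np.array([0.09, 0.09])   # hypothetical door hue
S = np.array([0.60, 0.80])   # the bright spot looks less saturated
V = np.array([0.90, 0.20])
compH, compS = adjust_for_light(H, S, V)
print(compS)  # the bright spot receives the larger correction
```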

These corrections are not always accurate and, indeed, there are cases where they fail (see the experiments section). Although the proportionality is as described above, the functions are not linear. There may be a function that compensates well for the different lighting conditions (especially for the Saturation plane), but we did not have time to pursue this direction in more depth. Also, different light sources have different influences. Our system performs very well (with the automatic adjustment) for indoor-light scenes, but it does not behave well enough in sun-lit rooms (the Hue plane is unaffected, but the Saturation varies significantly). The best example of this failure is the following in-room image:

Figure 3: Original Image (0.25*size)
Figure 4: Filtered B/W Image (0.5*size)

For this image some constant thresholds worked well, but the automatic adjustment (with the functions we chose) did not. Solutions to this problem may include finding a better function of Value and Saturation, distinguishing between different positions of the robot (the robot may know that it is in a room and select a different function accordingly), adjusting the thresholds using other cues (such as the color of the walls, if the viewing angle is wide enough), or trying different coordinate systems (such as CIELAB) that are normalized for the color-matching characteristics of human vision.

Other improvements, not related to the adjustment for the door color, are possible. For example, one may identify other colors, such as those of the floor, the walls, or the door panels, and subtract these bitmaps out. One potential solution to some of the noise from the carpet/ground is to smooth the color images in the first stage of the algorithm, then subtract all points whose color characteristics are similar to the carpet's.
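The smooth-then-subtract idea could be sketched as follows; the function names, reference carpet values, and tolerance are hypothetical, and the box blur stands in for whatever smoothing filter one would actually choose:

```python
import numpy as np

def box_blur(plane, k=3):
    """Simple k x k box smoothing of a float plane (edge-padded)."""
    pad = k // 2
    p = np.pad(plane, pad, mode="edge")
    out = np.zeros_like(plane)
    for di in range(k):
        for dj in range(k):
            out += p[di:di + plane.shape[0], dj:dj + plane.shape[1]]
    return out / (k * k)

def remove_carpet(imgH, imgS, carpetH, carpetS, tol=0.05):
    """Smooth the hue/saturation planes, then mask out pixels whose
    smoothed color is close to the carpet's reference values."""
    sH, sS = box_blur(imgH), box_blur(imgS)
    carpet = (np.abs(sH - carpetH) < tol) & (np.abs(sS - carpetS) < tol)
    return ~carpet   # True = keep the pixel

# Toy example: left half door-colored, right half carpet-colored.
imgH = np.zeros((4, 4)); imgH[:, :2] = 0.09; imgH[:, 2:] = 0.30
imgS = np.zeros((4, 4)); imgS[:, :2] = 0.80; imgS[:, 2:] = 0.50
keep = remove_carpet(imgH, imgS, carpetH=0.30, carpetS=0.50)
print(keep)
```

Note that smoothing blurs the door/carpet boundary, so pixels near the edge survive the subtraction; that is acceptable here since the goal is only to suppress carpet noise, not to segment it exactly.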



Last modified: Tue Mar 16 18:54:52 PST 1999