Up to now, the discussion has focused on how solid-state imagers produce a representation of only the intensity field present in an image. But intensity, or brightness, is just one attribute associated with how we perceive the sensation of “color.” Two other attributes complete the picture: hue and saturation. The hue of an object represents the wavelength of light that our vision system sees when it is focused on the object. Hue is commonly referred to as the object’s color; that is, red, orange, green, and so on. Saturation represents the amount of gray light contained in the color relative to its hue. Saturation is commonly referred to as the color’s strength or purity.
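These three attributes can be illustrated numerically. Python's standard colorsys module converts an RGB triple into hue, saturation, and value (a measure of intensity); a pure red and a red diluted with gray light share the same hue but differ in saturation:

```python
import colorsys

# colorsys returns hue in [0, 1) (a fraction of the color wheel) and
# saturation in [0, 1] (1.0 = fully pure, 0.0 = gray).
pure_red = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
pale_red = colorsys.rgb_to_hsv(1.0, 0.5, 0.5)   # red mixed with gray light

print(pure_red)  # (0.0, 1.0, 1.0): hue 0 (red), fully saturated
print(pale_red)  # (0.0, 0.5, 1.0): same hue, half the saturation
```

Both samples have the same value (intensity) and the same hue; only the saturation distinguishes the vivid red from the washed-out one.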
Unfortunately, image sensors that can detect hue, saturation, and intensity directly do not exist. But in 1960, physiologists found that the peak light absorption characteristics of the three types of cones (the color-sensing components) in the human eye nearly correspond to the red, green, and blue wavelength regions of the visible light spectrum. So, it is no coincidence that when you mix red, green, and blue primary colors in varying proportions, you can produce millions of new colors. This “tristimulus” color model can be seen in nearly every device that works with color (computer monitors and printers, for example). Because the output signal of a CCD or CMOS image sensor is proportional to the incident light intensity, you can make a simple color image sensor using tristimulus color theory. In practice, this has been done either by using three image sensors, each having its own spectral response characteristics (the three-chip design), or by using a single sensor and a color filter array (the single-chip design).
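Tristimulus mixing can be sketched in a few lines: every displayable color is a weighted sum of the three primaries, and with 8 bits per primary the count of distinct mixtures is where the "millions of colors" figure comes from.

```python
# Tristimulus mixing: a color is formed by adding red, green, and blue
# primaries in varying proportions.
def mix(r, g, b):
    """Map primary proportions in [0, 1] to an 8-bit RGB triple."""
    return tuple(round(255 * p) for p in (r, g, b))

print(mix(1.0, 1.0, 0.0))   # red + green -> yellow (255, 255, 0)
print(mix(1.0, 1.0, 1.0))   # all three primaries -> white (255, 255, 255)
print(256 ** 3)             # 16777216 distinct 8-bit mixtures
```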
Figure 4. Three-CCD Beam Splitter Optics
In a typical three-chip design, light that enters the camera housing is first split into three beams and then focused onto three CCD arrays, each of which has a color filter laminated to the light-sensitive side of the array. This method has been very popular because the spectral response characteristics of the color filters can be designed to simulate those of the human eye, thus enabling the camera to behave like the color vision model presented earlier. The benefits of this design include simple signal processing, very accurate color reproduction and control, and high resolution. However, cameras that employ this design tend to be expensive and large because of the added electronics associated with the three sensors. More importantly, they typically suffer from poor low-light sensitivity due to the lower signal levels obtained from beam splitting.
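The three-chip signal chain can be sketched as follows: each sensor's output is the incident spectrum weighted by its filter's spectral response. The Gaussian response curves, their center wavelengths, and the flat test spectrum below are illustrative assumptions, not measured filter data.

```python
import math

def response(wl, center, width=50.0):
    """Hypothetical bell-shaped filter response at wavelength wl (nm)."""
    return math.exp(-((wl - center) / width) ** 2)

wavelengths = range(400, 701, 10)           # visible band, nm
spectrum = {wl: 1.0 for wl in wavelengths}  # flat ("white") test light

# Each chip integrates the incident intensity under its own response curve,
# producing one signal per primary.
signals = {name: sum(spectrum[wl] * response(wl, center) for wl in wavelengths)
           for name, center in (("blue", 450), ("green", 550), ("red", 610))}
print(signals)
```

Note that each chip sees only the fraction of the light passed by its filter, which is one way to picture the reduced signal levels (and hence poorer low-light sensitivity) of the beam-splitting approach.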
Single-Chip Color Sensors
In a typical single-chip design, a carefully selected color filter array (CFA) is laminated directly to the light-sensitive side of a CCD image sensor. More complicated electronics and interpolation algorithms are required in this design, but these cameras are typically much smaller, less expensive, and have better low-light sensitivity than three-chip designs. The main drawbacks of the single-chip design are lower resolution and less accurate color reproduction and control.
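The interpolation a single-chip camera must perform can be sketched minimally: each pixel records only one primary, so the two missing primaries are estimated from nearby pixels of the right color. Simple neighborhood averaging is shown here on an illustrative checkerboard layout; production cameras use more sophisticated algorithms.

```python
def cfa_color(row, col):
    """Filter color at (row, col) in an illustrative checkerboard CFA."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

def interpolate(raw, row, col, color):
    """Estimate `color` at (row, col) by averaging same-colored neighbors."""
    vals = [raw[r][c]
            for r in range(max(row - 1, 0), min(row + 2, len(raw)))
            for c in range(max(col - 1, 0), min(col + 2, len(raw[0])))
            if (r, c) != (row, col) and cfa_color(r, c) == color]
    return sum(vals) / len(vals)

raw = [[100] * 4 for _ in range(4)]    # uniform gray test mosaic
print(interpolate(raw, 1, 1, "R"))     # 100.0: all red neighbors agree
```

Because each primary is sampled at only a fraction of the pixel sites, the estimated values can never fully recover fine detail, which is one way to see the resolution and color-accuracy penalties mentioned above.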
Color Filtering Techniques
The CFA is typically constructed as a repeating sequence of three colors (that is, red, green, and blue), each with its own spectral response function. The sequences are arranged to form either a horizontal or vertical stripe pattern or a checkerboard pattern. Stripe filter patterns typically require less complicated signal-processing electronics than checkerboard patterns, and they produce images with superior horizontal or vertical sharpness (Parulski, 1985). However, some checkerboard patterns can give better overall performance by taking advantage of their added geometric flexibility (Bayer, 1976).
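The two arrangements described above can be expressed as maps from pixel coordinates to filter colors. The phase of each pattern (which color starts a row) is an illustrative choice:

```python
def stripe(row, col):
    """Vertical RGB stripe pattern: the color depends on the column only."""
    return "RGB"[col % 3]

def bayer(row, col):
    """Bayer checkerboard (Bayer, 1976): green occupies every other pixel
    on both axes, so half of all pixel sites sample green."""
    if row % 2 == 0:
        return "G" if col % 2 == 0 else "R"
    return "B" if col % 2 == 0 else "G"

for r in range(2):
    print(" ".join(bayer(r, c) for c in range(4)))
# G R G R
# B G B G
```

The stripe map ignores the row index entirely, which is why stripe patterns need simpler processing; the checkerboard's dependence on both coordinates is the geometric flexibility that Bayer-style patterns exploit.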