LMS & RGB colour models
The LMS colour model is used to represent the response of the three types of cones of the human eye to different visual stimuli, that is, different wavelengths of light.
The strength of the LMS colour model is its concern for the connection between the physiological aspects of vision and the everyday visual experience of an observer.
LMS (long, medium, and short) refers to the band of wavelengths that each cone type within the retina responds to. The LMS model is particularly useful when comparing an observer’s response to the appearance of colour samples.
The RGB colour model and derivatives such as the HSB colour model build on the knowledge gained from research into the response of the human eye to trichromatic stimuli, and so have their foundations in LMS's concern with the biological processing of colour within the human eye.
Design and imaging software such as Adobe Creative Cloud rarely refers to LMS and does not include LMS-related tools, but the model underpins colour models such as RGB, HSB and CMY.
The functionality of the RGB colour model is rooted in the parallel notion that any observable colour can be described in terms of the amount of red, green and blue it contains.
When decimal notation is used to record the amounts of R, G and B needed to produce an observable colour, each of R, G and B is assigned a value between 0 and 255. When this information is output to a computer screen, each value is sent along a separate low-voltage electrical circuit, with a direct correspondence between the intended colour value and the current: zero current corresponds to a value of 0 and maximum current to 255. As the current on each channel changes, so does the brightness of the red, green or blue light-emitting diode (LED) embedded in that circuit.
Digital screens are built from sets of three diodes grouped into pixels embedded in the screen. A typical 4K screen might contain 3840 x 2160 pixels, a total of about 8.29 million. The colour of each diode is hardwired into the technology, so the wavelength of light it emits is fixed. It is therefore the relationship between how dark the screen appears when its pixels are off and the fine control over the intensity of light the diodes can produce that determines the range of colours perceived in the eyes of the beholder. The total range of colours a device can produce is often referred to as its gamut.
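The figures above can be checked with a little arithmetic. A minimal sketch (the resolution and bit depth are the standard 4K UHD and 8-bits-per-channel values assumed in this section):

```python
# Pixel count of a 4K UHD display (3840 x 2160)
width, height = 3840, 2160
pixels = width * height  # 8,294,400 -- roughly 8.29 million

# With 8 bits (values 0-255) per channel and three channels per pixel,
# each pixel can in principle display 256^3 distinct colours
colours = 256 ** 3  # 16,777,216

print(f"{pixels:,} pixels, {colours:,} possible colours per pixel")
```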
Different styles of notation are used to record, store and transmit calibrated colour information, depending on the type and purpose of the colour model in use. Adobe Creative Cloud software, for example, uses one system for RGB values and another for HSB.
RGB colour values are typically represented by decimal triplets (base 10) or hexadecimal triplets (base 16) and always ordered red, then green, then blue.
Decimal numbers from 0 to 255 are selected for each value. Zero means fully off and 255 means maximum intensity:
- R=255, G=0, B=0 or 255,0,0: Red
- R=255, G=255, B=0 or 255,255,0: Yellow
- R=0, G=255, B=0 or 0,255,0: Green
- R=0, G=255, B=255 or 0,255,255: Cyan
- R=0, G=0, B=255 or 0,0,255: Blue
- R=255, G=0, B=255 or 255,0,255: Magenta
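The decimal triplets listed above can be sketched as data, with a check that every channel stays within the 8-bit range:

```python
# The six triplets from the list above, stored as (R, G, B) tuples
# with each channel in the 0-255 range.
COLOURS = {
    "red":     (255, 0, 0),
    "yellow":  (255, 255, 0),
    "green":   (0, 255, 0),
    "cyan":    (0, 255, 255),
    "blue":    (0, 0, 255),
    "magenta": (255, 0, 255),
}

for name, (r, g, b) in COLOURS.items():
    # Every channel value must be a valid 8-bit intensity
    assert all(0 <= c <= 255 for c in (r, g, b))
    print(f"{name}: {r},{g},{b}")
```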
In hexadecimal notation, the same triplets appear as follows. Each two-digit hexadecimal value ranges from 00 to FF:
- #FF0000: Red
- #FFFF00: Yellow
- #00FF00: Green
- #00FFFF: Cyan
- #0000FF: Blue
- #FF00FF: Magenta
The hash symbol (#) indicates hex notation.
The sequence of hexadecimal values from 0 to 15 is 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, A, B, C, D, E and F.
The next sequence, from 16 to 31, is 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 1A, 1B, 1C, 1D, 1E and 1F.
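The correspondence between the decimal and hexadecimal notations can be sketched as a pair of conversion helpers (the function names here are illustrative, not part of any standard library):

```python
def rgb_to_hex(r, g, b):
    """Convert a decimal RGB triplet (0-255 each) to a #RRGGBB hex string."""
    return "#{:02X}{:02X}{:02X}".format(r, g, b)

def hex_to_rgb(s):
    """Convert a #RRGGBB hex string back to a decimal (R, G, B) tuple."""
    s = s.lstrip("#")  # the hash symbol only marks hex notation
    return tuple(int(s[i:i + 2], 16) for i in (0, 2, 4))

print(rgb_to_hex(255, 0, 255))  # #FF00FF (magenta)
print(hex_to_rgb("#00FFFF"))    # (0, 255, 255) (cyan)
```

Because each channel is one byte, every decimal triplet maps to exactly one six-digit hex string and back.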
As a footnote, it is worth recalling that as we sit engrossed in the unfolding narrative of a movie on a digital display, the flow of images is produced by a matrix of pixels, each driven by a succession of tristimulus values that may refresh up to 240 times a second. With 8.29 million pixels updating 240 times per second, we are actually watching around two billion flickering pixel values every second. This moving image is focused by the lens in each eye, producing a miniature replica of the screen on each retinal surface so that the S, M and L cone cell types can capture as much information as possible. As the roughly six million photosensitive cone cells per eye react, phototransduction takes place. Cones, too, are organised into the biological equivalent of a matrix, so that responses to each pixel of the image projected onto the retina can be accurately mapped, and each pixel-point of light arranged in its proper place within the living experience of the observer.
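The arithmetic behind this footnote can be checked directly (240 Hz is assumed here as a high-refresh-rate figure; 60 Hz displays are more common):

```python
# Pixel updates per second on a 4K screen refreshing 240 times a second
pixels = 3840 * 2160               # 8,294,400
refresh_hz = 240
pixel_updates = pixels * refresh_hz     # ~2.0 billion per second
subpixel_updates = pixel_updates * 3    # three diodes (R, G, B) per pixel

print(f"{pixel_updates:,} pixel updates/s")      # 1,990,656,000
print(f"{subpixel_updates:,} diode updates/s")   # 5,971,968,000
```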
It is the challenge of determining the tristimulus values needed to produce the appropriate stimuli, so that an observer can see or choose a particular colour or range of colours under different viewing conditions, that has resulted in the development of competing colour systems, each with its own colour model and associated colour spaces. The equipment and devices we use every day are critically judged on the quality of the images they produce.
It is these same challenges that have led to the development of colour profiles, which ensure, for example, that the relationship between the original colours seen by an observer taking a photograph and the colours that appear after processing is reproduced as intended. Colour profiles ensure that as information is encoded and passes from device to device, every colour within a range of colours (a gamut) appears as expected when output by a printer, regardless of its make or model.
It is worth noting that there is no standard set of primary light sources or pigments sufficient to enable an observer to see all colours within the gamut of human vision in every situation. As a result, the choice of light sources and pigments is adjusted to suit different technologies, media and viewing conditions.