Color spaces are everywhere, used for many different things. It’s almost trendy: some camera manufacturers even create a new one with each new camera. You may wonder why, and that’s what this article is about.
Note: you’re expected to know the basics of color science for what we’re discussing here. You may want to brush up on them first.
The Gamut part
First, there’s the camera file container. If the camera sensor uses a Bayer filter or some equivalent technology, the file you obtain is a single-layer matrix, like a black-and-white picture, and the color information is extracted by a process called demosaicing, which creates an RGB file, meaning one layer for Red information, one for Green and one for Blue. The actual “physical” color of each of those primary colors can be more or less saturated or bright. There are tons of articles describing that; one of my favorites is this one.
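To make the idea concrete, here is a minimal sketch of the simplest possible demosaicing, using nearest-neighbour interpolation on an RGGB Bayer pattern. Real camera pipelines use far more sophisticated algorithms; the 4×4 mosaic below is made-up sample data, and the function name is mine.

```python
# Minimal sketch: nearest-neighbour "demosaicing" of an RGGB Bayer mosaic.
# This only shows how a single-layer matrix becomes three R/G/B layers;
# real demosaicing is far more sophisticated.

def demosaic_nearest(mosaic):
    """Turn a single-layer RGGB mosaic (list of rows) into an RGB image
    by spreading each 2x2 tile's R, G and B samples across the tile."""
    h, w = len(mosaic), len(mosaic[0])
    rgb = [[None] * w for _ in range(h)]
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = mosaic[y][x]                               # top-left: red sample
            g = (mosaic[y][x + 1] + mosaic[y + 1][x]) / 2  # average of two greens
            b = mosaic[y + 1][x + 1]                       # bottom-right: blue sample
            for dy in range(2):
                for dx in range(2):
                    rgb[y + dy][x + dx] = (r, g, b)
    return rgb

mosaic = [
    [10, 200, 12, 210],
    [190, 30, 185, 28],
    [11, 205, 13, 198],
    [180, 25, 192, 31],
]
image = demosaic_nearest(mosaic)
print(image[0][0])  # one interpolated (R, G, B) pixel: (10, 195.0, 30)
```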
So it’s during demosaicing that you decide what “colors” the camera outputs, by interpolating the values of your picture and putting them on a scale for R, G and B from black to fully saturated, knowing the physical colors captured by the sensor based on the chemicals of the filter. So depending on your post-production color pipeline, you may want to pick one gamut or another: a standard one, like ITU-R BT.2020, or one that is specific to your camera, like the S-Gamut3 shown in the picture above. If you pick a gamut that is too small, you may see artifacts, as the processing will either clip the signal or create weird things, as can be seen when photographing light sources such as LEDs.
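The clipping problem can be illustrated numerically. The 3×3 matrix below is a made-up wide-to-narrow gamut conversion, not any real standard matrix; it just shows the mechanism: a color that is perfectly legal in a wide gamut can land outside [0, 1] in a narrow one, and naive clipping throws that information away.

```python
# Illustrative sketch of out-of-gamut clipping.
# WIDE_TO_NARROW is a HYPOTHETICAL conversion matrix, not a real standard.

WIDE_TO_NARROW = [
    [ 1.6, -0.5, -0.1],
    [-0.2,  1.3, -0.1],
    [ 0.0, -0.3,  1.3],
]

def convert(rgb, m):
    """Apply a 3x3 matrix to an [R, G, B] triple."""
    return [sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3)]

saturated_led = [0.1, 0.9, 0.2]  # a very saturated green, e.g. an LED
narrow = convert(saturated_led, WIDE_TO_NARROW)
clipped = [min(max(c, 0.0), 1.0) for c in narrow]
print(narrow)   # some channels fall below 0 or above 1: out of gamut
print(clipped)  # clipping collapses the color and loses information
```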
So why don’t we just pick the largest one?
Well, it could be that simple, but it’s not. First, there are the camera’s capabilities, which for technological reasons may be better in some regions of the visible gamut, so engineers optimize the primaries so that all the colors the camera can capture can be recorded.
Then there are storage issues: depending on the dynamic range of the signal, 10 bits might not be enough for encoding, so you would need 12, but you may not be able to afford the extra storage, or the extra processing precision, for example when recording QuickTime or MXF files directly in the camera. XYZ or ACES AP0 encoding is great because you don’t clip anything, but you lose a lot of code values that are never used.
A little comparison to explain that: say you have miles and kilometers to measure distances, and you can only use integers. With 1023 values, miles describe a larger distance (a larger color space), but you are less precise at discerning small differences: if you have to describe 2.2 km, the approximation is finer with kilometers than it is with miles. In color terms, that means less precision where you need it, for example on skin tones. So if you want the same precision describing a larger space, you will need more bits. This is also relevant for another kind of transport, over cables: if only 8 bits go through your DisplayPort or HDMI cable, you’d rather avoid large gamuts.
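The miles-vs-kilometers analogy can be checked with a few lines of code: integer steps in a larger unit (a larger color space) cover more ground but resolve less detail. This is just a sketch of the arithmetic, nothing camera-specific.

```python
# Sketch of the miles-vs-kilometres analogy: quantizing the same distance
# with a coarse unit (miles ~ large gamut) vs a fine unit (km ~ small gamut).

KM_PER_MILE = 1.609344

def quantize(value_km, unit_km):
    """Round a distance to the nearest whole unit; return (steps, error in km)."""
    steps = round(value_km / unit_km)
    return steps, abs(value_km - steps * unit_km)

target = 2.2  # km
km_steps, km_err = quantize(target, 1.0)
mi_steps, mi_err = quantize(target, KM_PER_MILE)
print(f"kilometres: {km_steps} units, error {km_err:.3f} km")
print(f"miles:      {mi_steps} units, error {mi_err:.3f} km")
```

The kilometer quantization lands much closer to 2.2 km than the mile one, just as a smaller gamut spends its code values more densely where a picture actually lives.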
In the case of CG, for a long time there was no real concept of gamut, so pretty much everything was either consciously sRGB or unconsciously sRGB. Now the recommended practice is to go ACES, where the gamut is properly managed.
The transfer function part
That’s also a big topic. Here again we’re talking about efficiency: you don’t want to waste too many code values where you don’t need them, where the eye is not very sensitive, in the case of content encoded for direct viewing with little or no color correction. If the content is meant for postproduction, you’d rather keep information across the whole dynamic range, from black to full white, as you may distort the signal significantly to get what you want. You can choose to just add “a little extra” with some log-ish curve that keeps some information in the super-whites and super-blacks, or go for fully linear encoding, which brings some nice properties, like not having to compute gamma. We’ll discuss that later.
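Here is a small sketch of why a log-ish curve helps: it counts how many 10-bit code values a linear encoding vs a logarithmic one spends on the darkest two stops of a 10-stop scene. The log curve used here is a generic normalized log2 mapping of my own, not any camera vendor’s actual transfer function.

```python
import math

# Sketch: code-value allocation of linear vs log encoding over 10 stops.
# The log2 mapping below is a generic ASSUMPTION, not a vendor curve.

BITS = 10
CODES = 2 ** BITS  # 1024 code values
STOPS = 10         # scene values span [2**-STOPS, 1.0]

def linear_code(x):
    """Linear light mapped straight to code values."""
    return int((CODES - 1) * x)

def log_code(x):
    """Normalised log2 encoding over the 10-stop range (illustrative)."""
    return int((CODES - 1) * (math.log2(x) + STOPS) / STOPS)

lo, hi = 2.0 ** -STOPS, 2.0 ** -(STOPS - 2)  # the darkest two stops
lin_codes = linear_code(hi) - linear_code(lo)
log_codes = log_code(hi) - log_code(lo)
print(f"linear: {lin_codes} codes for the darkest 2 stops")
print(f"log:    {log_codes} codes for the darkest 2 stops")
```

Linear encoding spends almost nothing on the shadows, while the log curve reserves a healthy chunk of the 1024 codes there, which is exactly the information a colorist needs when lifting blacks in post.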
Feel free to comment in the dedicated Rockflowers Q&A channel