• It's a limitation, or rather, just a characteristic of how signal processing works in photosensors. I'm probably going to butcher this explanation. It was explained to me a long time ago with awesome visuals that I've long since lost. It goes something like this...

When we clip highlights, what we see is the result of all the light wells (pixels) registering 1's. There's usually a pretty hard line cutting that off, because a sensor delivers a strong, clean signal right up to its maximum capacity but will never register, for example, 1.001. Black clipping is a softer boundary, because the signal doesn't cut off sharply at 0.000. Before we get to 0.000 there is a bunch of line noise, which photographers think of as ISO noise but engineers just think of as an increasingly muddy signal-to-noise ratio. Lifting the shadows amplifies the low signal and the noise together, which means 1) there's lots of information down there that we can take advantage of, but 2) the more we rely on it, the more noise we introduce, which is why things look crappy if you try to lift extreme shadows without HDR techniques.
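A minimal sketch of that idea, using a made-up sensor model (the exposure level, noise floor, and lift factor are all illustrative numbers, not real camera values): lifting the shadows multiplies the recorded values, so signal and noise scale together and the signal-to-noise ratio doesn't improve.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical sensor model: a dim "shadow" exposure plus additive read noise.
# Values are normalized so 1.0 is full-well (clipped highlight) capacity.
true_signal = np.full(100_000, 0.02)           # deep-shadow exposure level
read_noise = rng.normal(0.0, 0.005, 100_000)   # line-noise floor
raw = np.clip(true_signal + read_noise, 0.0, 1.0)

# "Lifting the shadows" just multiplies the recorded values,
# which scales signal and noise by the same factor...
lifted = np.clip(raw * 8.0, 0.0, 1.0)

def snr(x):
    return x.mean() / x.std()

# ...so the signal-to-noise ratio stays essentially the same;
# the noise simply becomes visible at the brighter output level.
print(f"SNR before lift: {snr(raw):.2f}")
print(f"SNR after  lift: {snr(lifted):.2f}")
```

The two printed SNR values come out nearly identical, which is the point: the information is genuinely down there in the shadows, but so is the muddiness.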

• Ah this is very interesting! Makes a great deal of sense that lifting the shadows lifts both signal and noise, leaving more information there than in the highlights.

• I ran across this on DPReview.

I found the difference between an 8-bit JPEG and a 12-bit RAW fascinating. Took a while to wrap my head around it. I thought 12-bit would have 50% more colors than 8-bit, but I was wrong. 8-bit is 2^8 values per channel, and with 4 channels (RGBA) that's 2^(8*4) ≈ 4.3 billion combinations. But with 12-bit it's 2^(12*4) ≈ 2.8E14 (about 281 trillion). So that's 2^16 = 65,536 times as much information. That's crazy. Mind blown.
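The arithmetic above is easy to check directly. Bits per channel compound multiplicatively across channels, so a few extra bits per channel mean exponentially more representable values; the ratio between the two cases works out to 2^16 = 65,536 times:

```python
# Reproducing the arithmetic from the post (4 channels, RGBA, as stated there).
channels = 4

combos_8bit = 2 ** (8 * channels)    # 4,294,967,296 (~4.3 billion)
combos_12bit = 2 ** (12 * channels)  # 281,474,976,710,656 (~281 trillion)

print(f"8-bit:  {combos_8bit:,}")
print(f"12-bit: {combos_12bit:,}")
print(f"ratio:  {combos_12bit // combos_8bit:,}x")  # 2**16 = 65,536x
```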

• The math is more complicated than that article presents, so I suggest not spending too much time on the exact numbers. Rather, your conclusion is what's important: there is a "massive" amount of data in RAW files compared to 8-bit JPG files (and other similarly limited 8-bit file types).

The other important point is that the "distribution" of image data is equally important, and all in-camera JPG files are produced from a data distribution algorithm defined by someone else, not you.

In short, you have some limited control over the conversion of the RAW capture data to 8 bits by using different "picture styles" (or whatever similar feature your camera offers), but even custom styles are rather coarse.

Significant too is that some of the highlight and shadow information is likely lost during the translation to 8 bits. Compare that to post-processed RAW files, generally processed in 16 bits or even 32 bits, and at your direction and discretion.

Sure, you may still down-convert to 8-bit files for distribution and publishing, but up to that point you have much finer control over the process.

• Yeah, when I started running the numbers, I realized that's more the design of the file structure: it's the upper limit on what can be stored, not a count of the possibilities the sensor can capture per pixel. That sounds complicated. I just found it interesting how a few more bits lead to exponentially more data.

• Your comments are on point. Just to add my thoughts and recapitulate what many here have said: the flexibility afforded by RAW files is massive. I see it in every shot I take on my D810. If you have highlights and shadows in your image, JPG loses significant information that is only "recoverable" in a distorted sense in post. The RAWs just have it all there.

I've got fast cards, a fast computer, and effectively unlimited storage through my work, so the extra processing time doesn't bother me, especially since I started using multiple catalogues in Adobe Lightroom (my primary post-processing application).

• One important thing that I think most people don't fully understand is that most photos will be viewed as 8-bit sRGB jpegs on the web whether you record them as RAW or not. Either the camera software converts them or a program like Lightroom does.

In either case, I see a lot of photos converted from RAW to Adobe RGB, often in-camera. And then they have to be converted to sRGB with color loss, and people blame the jpeg format for the loss.

The thing is, sRGB is pretty good in the reds and yellows so it can provide fine gradations in skin tones. It's biased toward colors that occur in nature. And I think that's why we get away with it so well on the web and on phones.

But if the camera must convert from 12-bit color in RAWs to the selection of colors Adobe provides in their 8-bit color space, we have eliminated a lot of colors we could have represented in sRGB.
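A minimal sketch of what "converted to sRGB with color loss" means in practice, using the standard published D65 RGB-to-XYZ matrices for the two color spaces (the specific test color is just an illustration): a fully saturated Adobe RGB green lands outside the sRGB gamut, so its converted components fall outside [0, 1] and have to be clipped or otherwise gamut-mapped.

```python
import numpy as np

# Standard 3x3 linear-RGB -> XYZ matrices (D65 white point).
ADOBE_RGB_TO_XYZ = np.array([
    [0.5767, 0.1856, 0.1882],
    [0.2974, 0.6273, 0.0753],
    [0.0270, 0.0707, 0.9911],
])
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# A fully saturated Adobe RGB green (linear-light values).
adobe_green = np.array([0.0, 1.0, 0.0])

srgb = XYZ_TO_SRGB @ (ADOBE_RGB_TO_XYZ @ adobe_green)
print("linear sRGB:", srgb)  # red and blue components come out negative

# The out-of-gamut components must be clipped (or otherwise mapped),
# and that is where the color loss happens -- not in the JPEG format itself.
clipped = np.clip(srgb, 0.0, 1.0)
print("clipped:    ", clipped)
```

This is the mechanism behind the complaint above: the loss occurs in the gamut conversion, and the JPEG container just gets the blame.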

• Absolutely correct. To elaborate, both sRGB and Adobe RGB 8-bit files have exactly the same number of possible color values. It's the "distribution" of those values across the gamut that differs between the two color spaces.

If you work in any of the programs which allow lossless editing from RAW files, and if you save the intermediate work in a format which preserves the original image data, you can still make any changes you want in a later post-production editing session.

If you just save the original RAW file for the image (don't discard the camera's original RAW file after an edit session), you might have to start over, but at least you have the original goods.

If you work with an editor like Lightroom or Phase One's Capture One Pro (my favorite), then just save the session in the application's native file format. Those applications don't affect the original RAW file in any way when you save the session; you are just saving the "directions" for working the image data. You can change the working color space at any time to sRGB, Adobe RGB, or CMYK, for instance, and then output an 8-bit file (or 16-bit file) as needed for your use.
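That "directions, not pixels" model can be sketched in a few lines. This is a toy illustration of the general non-destructive idea, not how Lightroom or Capture One actually store their session files; the class and operation names are made up for the example:

```python
from dataclasses import dataclass, field

# Toy sketch of a non-destructive edit session: the original data is
# never modified; the session is an ordered list of "directions" that
# get replayed against a copy at export time.
@dataclass
class EditSession:
    raw: list                                 # stands in for the untouched RAW data
    directions: list = field(default_factory=list)

    def add(self, name, **params):
        self.directions.append((name, params))  # e.g. ("exposure", {"ev": 1.0})

    def export(self):
        image = list(self.raw)                  # work on a copy, never the original
        for name, params in self.directions:
            if name == "exposure":
                image = [v * 2 ** params["ev"] for v in image]
            elif name == "invert":
                image = [1.0 - v for v in image]
        return image

session = EditSession(raw=[0.1, 0.2, 0.4])
session.add("exposure", ev=1.0)               # +1 stop doubles the linear values
print(session.export())                       # [0.2, 0.4, 0.8]
print(session.raw)                            # original untouched: [0.1, 0.2, 0.4]
```

Because only the directions are saved, you can reorder, remove, or retarget them (say, to a different output color space) at any time without touching the original capture.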

• Lightroom internally uses a variation of ProPhoto RGB in the development module. Soft proofing can be used to switch between ProPhoto RGB and the target color space if you want better control over your exported results.