Increasing bit depth in Photoshop – myths and truths

It is widely accepted that there are benefits to working in 16-bit rather than 8-bit mode for some – repeat some – editing tasks. Inevitably, there are some incurable nerds who say it’s important to work in 16-bit mode for everything, but of course that’s not true. 8-bit is perfectly fine for most people, most of the time.

If you’re not familiar with bit depth, read this article.

Two scenarios come to mind where 16-bit mode is helpful. First, those times when you unavoidably need to make aggressive adjustments to your photo in Photoshop, rather than in raw. And second, when there is a risk of banding in your photo (e.g. a smooth backdrop or a blue sky).

In this article I wish to discuss the concept of converting 8-bit data up to 16-bit for editing. I’m sorry to say there’s almost no benefit to this. Yes, Photoshop does allow you to convert your 8-bit file to 16-bit, but it doesn’t truly turn it into 16-bit data. It just puts the 8-bit data into a 16-bit "wrapper", if you know what I mean – the same 256 tonal levels per channel, simply spread out across a much bigger range.

[Image: 16bit.gif]
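If you’re comfortable with a little Python, you can see this for yourself. Here’s a minimal sketch using numpy and Pillow – the filename photo.jpg is just a stand-in for any 8-bit Jpeg of your own:

```python
import numpy as np
from PIL import Image

# Load an 8-bit Jpeg (hypothetical filename) - values run from 0 to 255
img8 = np.asarray(Image.open("photo.jpg"))

# "Convert" to 16-bit the way an editor does: scale 0..255 up to 0..65535
# (multiplying by 257 maps 255 exactly onto 65535)
img16 = img8.astype(np.uint16) * 257

# The number of distinct tonal values hasn't changed - still at most 256
print(len(np.unique(img8)))   # e.g. 256
print(len(np.unique(img16)))  # exactly the same count
```

No matter how big the wrapper gets, the count of distinct tones never increases – that’s the whole point.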

As a simple analogy, pouring a small glass of water into a bigger glass doesn’t give you more water – it just gives you a partially-filled big glass.

Here’s a better one – let’s say you are arranging a conference in a small conference room that seats 80 people. But it seems your conference is going to be enormously popular, so you announce you’re changing the venue to a huge hall that’s big enough for 1600 people. Trouble is, you still only have 80 chairs, which you space out across the hall in a feeble attempt to "fill" the space. So your conference still only has 80 attendees, but now they’re all sitting many metres apart from each other, and the whole thing is completely pointless.

Ok, that’s still not a wonderful analogy, but I hope you see what I’m getting at. The futility of the exercise. The simple truth is, if you want 16-bit data, you have to start with 16-bit data – that is, by exporting in 16-bit mode from raw.

Off the top of my head, I can only think of one scenario where converting an 8-bit photo to 16-bit would be advantageous, and it’s not very common. Let’s say you had shot in Jpeg mode (Jpegs are always 8-bit) and blown out the sky, so you wanted to add a gentle blue vignette layer to the white sky. If you were nervous about banding in the sky, you could convert to 16-bit before adding the vignette layer. The rest of the photo would still (in reality) be 8-bit, but the sky gradient would be true 16-bit.
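For the curious, here’s roughly what that looks like in code – a sketch only, with numpy doing the blending at high precision. The filename, the shade of blue and the vignette strength are all made-up illustrative values:

```python
import numpy as np
from PIL import Image

# Open the 8-bit Jpeg (hypothetical filename) and work in float for precision
img = np.asarray(Image.open("photo.jpg")).astype(np.float32) / 255.0
h, w, _ = img.shape

# A gentle blue vignette: strongest at the top of the frame (the blown sky),
# fading to nothing lower down - values chosen purely for illustration
strength = np.linspace(0.25, 0.0, h).reshape(h, 1, 1)
sky_blue = np.array([0.55, 0.75, 1.0], dtype=np.float32)
blended = img * (1 - strength) + sky_blue * strength

# Quantise to 16-bit: the original pixels still only carry 8-bit detail,
# but the new gradient itself has genuine 16-bit smoothness
out16 = (blended * 65535).round().astype(np.uint16)
print(len(np.unique(out16)))  # far more than 256 distinct values now
```

(Pillow can’t easily save 16-bit RGB, so for the final save you’d hand the array to a 16-bit-capable writer such as tifffile.)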

But seriously, the above method isn’t necessary. Simply adding some noise to an 8-bit gradient for a fake sky is perfectly adequate.
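In code terms, it’s just a tiny bit of noise added before the gradient is rounded down to 8-bit – a dithering sketch, with made-up dimensions and tones:

```python
import numpy as np

# A smooth sky gradient computed at float precision (illustrative size/tones)
h, w = 1080, 1920
grad = np.linspace(0.9, 0.55, h).reshape(h, 1).repeat(w, axis=1)

# Rounding straight to 8-bit leaves visible bands in such a gentle ramp...
banded = (grad * 255).round().astype(np.uint8)

# ...but adding noise of about one tonal level first breaks the bands up
noise = np.random.uniform(-0.5, 0.5, grad.shape) / 255.0
dithered = np.clip((grad + noise) * 255, 0, 255).round().astype(np.uint8)
```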

So … forget it. If you really want or need 16-bit, shoot raw and process it in 16-bit, the way everyone else does.

(Actually, not even raw data is true 16-bit in most cases. Most cameras’ raw data is actually 10-, 12- or 14-bit, which Photoshop puts into that 16-bit "wrapper" that I mentioned earlier. But if you’re familiar with mathematics, you’ll know that even a 10-bit file has a hell of a lot more data than an 8-bit file – 1,024 tonal levels per channel instead of 256, four times as many.)
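If you’d like the numbers for all the common bit depths, it’s plain powers-of-two arithmetic, nothing camera-specific:

```python
# Tonal levels per channel at each common bit depth
for bits in (8, 10, 12, 14, 16):
    print(f"{bits}-bit: {2 ** bits:,} levels")
# 8-bit: 256, 10-bit: 1,024, 12-bit: 4,096, 14-bit: 16,384, 16-bit: 65,536
```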

I want to finish this page on a note of practicality – 8-bit data is nowhere near as flaky as you might have been led to believe. I’ve done "terrible things" to 8-bit Jpegs (edited them really aggressively) and they turned out fine.

If you have a question about this article, please feel free to post it in Ask Damien.