Topic: Bit depth, noise, dynamic range
vjau wrote: That's the same thing.
Number of possible amplitudes = dynamic range.
If you don't agree think photography.

No, no. Range indicates the lower and upper limits. Bit depth indicates the number of amplitude levels possible at a given instant. Very different things, aren't they?
EDIT: The way I would use the terms in photography: a low bit-resolution black and white digital image might have a range from white to black but only ten gray-scale steps between them, while a higher bit-resolution image might have the same range from white to black but thousands of gray-scale steps between them.
vjau is actually right. The number of possible different values that can be present in a signal is exactly what dynamic range means. You are thinking of something different. I'll try to explain below.
Your original argument seemed to be that the main effect of a higher bit rate is an increase in dynamic range. I was saying that the more significant audible contribution was instead simply the number of amplitude levels simultaneously available.
I'll assume you mean bit depth and not rate. And yes, I said that: higher bit resolution means lower quantisation noise means more dynamic range, and I stand by that statement.
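To put rough numbers on that, here is the usual rule of thumb for an ideal quantiser measured against a full-scale sine (a quick Python sketch, nothing more):

    # Theoretical dynamic range of an ideal N-bit quantiser,
    # full-scale sine vs. quantisation noise: DR ~ 6.02*N + 1.76 dB
    for bits in (8, 16, 24):
        print(f"{bits:2d} bit: ~{6.02 * bits + 1.76:.0f} dB")
    # -> 8 bit: ~50 dB, 16 bit: ~98 dB, 24 bit: ~146 dB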
Concerning images: the theoretical maximum dynamic range of an 8-bit JPEG image is (*drum roll*) 8 stops. Yes, just 8; not 10, not 12. In practice it is closer to 6-7, and many printers are unable to even transfer that to paper. This value corresponds nicely to the human eye, which (to my knowledge) has a static contrast range of just about 6.5 stops. In other words, there's a reason we call 8-bit RGB images 'true colour' images.
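The counting behind that figure, assuming a linear encoding (real JPEGs are gamma-encoded, which moves the steps around but doesn't change the basic argument):

    import math
    # brightest non-zero code vs. darkest: 255:1 in linear 8-bit
    print(math.log2(255))   # ~7.99 -> about 8 stops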
On the other hand, modern cameras claim to have 'dynamic ranges' of 12 stops or more. So, how does this work? The simplified answer is that camera manufacturers misuse the term 'dynamic range' because it sounds impressive. What they actually mean is the absolute brightness range the camera captures and compresses into the image. 'Compression' is the important point here: a photo is always the result of (quite heavy) dynamic range compression. This means that yes, of course you can capture absolute contrasts of 12 EV in a single image. But local low-contrast details will still vanish.
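A toy illustration of what that compression does (a plain gamma curve standing in here for whatever proprietary tone curve a camera actually applies):

    import math
    # 13 patches spanning a 12 EV scene, squeezed into 8-bit output
    scene = [2 ** ev for ev in range(13)]       # linear, 0..12 EV
    peak = max(scene)
    for v in scene:
        code = round(255 * (v / peak) ** (1 / 2.2))
        print(f"{math.log2(v):4.1f} EV above black -> code {code}")
    # The top stop alone spans ~70 output codes; the bottom stops
    # collapse into 2-3 codes each -- local shadow detail is gone.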
Were an image a linear representation of brightness values (i.e. uncompressed), the lowest relative brightness we could encode in 8 bits would be about -8 EV. Local details would in theory be visible anywhere, as long as their contrast relative to 'white' (i.e. the maximum brightness) is higher than -8 EV. In numbers: a local feature has to differ by at least 1/255 of the white value in order to be visible. This disregards colour information, which we can (and do) use to improve contrast perception. Since images in practice are quite heavily compressed in terms of dynamics, the minimum contrast of a small detail has to be even higher than -8 EV. This corresponds to the simple fact that dynamics compression decreases the dynamic range of the data rather than increasing it. Everything is squashed together, and details get washed out.
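You can also see where those 255 codes go in the linear case (another quick sketch):

    import math
    from collections import Counter
    # Which stop below white does each linear 8-bit code fall into?
    stops = Counter(math.floor(math.log2(255 / code)) for code in range(1, 256))
    for s in sorted(stops):
        print(f"-{s} EV: {stops[s]:3d} codes")
    # The brightest stop gets 128 codes, the deepest stop (ending at
    # -8 EV) gets exactly 1, and anything darker rounds to code 0.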
So, why do camera manufacturers compress dynamics? Because our eyes do it as well (and we are better at it). While the static contrast range of the human eye is about 6.5 stops, it can adjust to different brightness ranges rather quickly: a few stops in about a second or less (by closing/opening the iris), and more if allowed to adjust for longer. This is the dynamic contrast range of the eye, which is a lot higher than 6.5 stops. Effectively, we permanently look at different parts of the landscape, our eyes adjust accordingly, and our brain puts it all together. Because of this, images with about 10-12 stops of compressed brightness range actually look more natural. In the signal processing sense, however, their dynamic range is significantly lower.
By the way, there is also an equivalent to 24-bit audio in photography: the 'raw' capture of most modern DSLRs and other interchangeable-lens systems produces images with 12-16 bits per channel. Not because those images are particularly 'nicer' to look at on their own, but mainly because they allow significantly more headroom for correcting/filtering the data.
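A sketch of why that headroom matters, under deliberately simplified linear assumptions (real raw converters deal with sensor noise, demosaicing and so on; the function and numbers here are purely illustrative): push the shadows up by 3 EV during processing and count how many distinct levels survive in the stop around -8 EV once you write an 8-bit output.

    def shadow_levels(src_bits, push_ev=3):
        # Quantise the stop between -8 EV and -7 EV (relative to white)
        # at the source bit depth, push it up, write 8-bit output.
        full = 2 ** src_bits - 1
        out = set()
        for i in range(1000):
            v = (1 + i / 1000) / 256            # linear value in that stop
            code = round(v * full)              # source quantisation
            pushed = min(1.0, code / full * 2 ** push_ev)
            out.add(round(pushed * 255))        # 8-bit output code
        return len(out)

    print(shadow_levels(8))    # 2 -- shadows posterise into two flat levels
    print(shadow_levels(14))   # 9 -- every output level in that stop survives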