16 or 24?
|
View this Thread in Original format
hey cheggy |
I was just curious to know who here works in 16 bit and who works in 24 bit. |
|
|
Digital Aura |
just curious...
can you really tell the difference?
What is it exactly?:eek: |
|
|
Dj Thy |
If you are using unoptimized softsynths and effects you probably won't be able to tell much difference.
On the other hand, if you use optimized ones, and even more importantly, if you record from analog sources (microphones mainly), there is an obvious difference.
What is the difference?
Well a little bit theory first.
You should know that an analog signal is infinitely definable: at each moment, you have a certain value. Digital media (not only audio) basically take a sample (in the real sense of the word) at discrete moments in time. To save space (and mostly cost), the samples are limited too.
Nowadays, the most common bitdepths are 16 bit and 24 bit.
When you sample a signal, it's given a certain discrete value (usually according to the amplitude, the voltage of the signal). How many "steps" you have to define that value is given by the bit depth.
When you have a 16 bit signal, it means you get 65536 steps to define the value of the sample. You can roughly estimate the dynamic range of an analog signal you can sample by multiplying the number of bits by 6 (about 6 dB per bit; I remind you it's a rough estimation, and a best case figure). 16 times 6 gives 96, and that's why you see stated on some 16 bit equipment that it can handle 96 dB of dynamics.
Now if you sample at 24 bits, you get 16 777 216 steps to define your sample values. A lot huh? Multiply 24 bits by 6, and you get 144 dB of dynamics. If you know the faintest signal a human can perceive is 0 dB SPL, and the pain threshold is usually situated at 120 dB SPL, you see 24 bit is pretty good for sampling every nuance.
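To put numbers on that estimate, here's a quick Python sketch. The "6 dB per bit" rule is really 20·log10(2) ≈ 6.02 dB per bit, which is where the 96 and 144 dB figures come from:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range of linear PCM: 20 * log10(2 ** bits)."""
    return 20 * math.log10(2 ** bits)

for bits in (16, 24):
    steps = 2 ** bits
    # 16 bit: 65,536 steps, ~96.3 dB; 24 bit: 16,777,216 steps, ~144.5 dB
    print(f"{bits}-bit: {steps:,} steps, ~{dynamic_range_db(bits):.1f} dB")
```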
And that's the main audible difference between 16 and 24 bit. 24 bit recordings "breathe" a lot more, and it's not only obvious on simple recorded material. Space (whether it is true real room ambience, reverb FX, etc...) is much more defined.
All in all I would say, the difference between 16 bit and 24 bit is MUCH more obvious than the difference between 44.1 kHz and 96 kHz.
The only problem is, you need gear that is capable of working at 24 bits. There's no point using 24 bit signals if your soundcard can only handle 16; nine times out of ten, the card will even refuse to play them. If your card accepts only 16 bit, recording in 24 is just wasted space (exceptions to that are pseudo tape warmers like Cubase's TrueTape).
PS : some programs also use 32 bit floating point. That's mainly for dynamics/headroom stuff, and it's mostly used internally. The actual sample precision is still about 24 bits (so if you have a 24 bit capable card, it can feed 32 bit float), but 8 of the 32 bits are used as an exponent. That way, different ranges can be addressed, which in theory gives you almost unlimited headroom (so it's virtually impossible to clip). Some people don't like 32 bit float though, as they claim to hear artifacts.
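As a sanity check on that exponent claim, this Python snippet (standard struct module only) pulls apart the IEEE 754 single precision layout: 1 sign bit, 8 exponent bits and 23 stored mantissa bits; the implicit leading mantissa bit is what gives the roughly 24 bit precision mentioned above:

```python
import struct

def float32_fields(x: float):
    """Unpack IEEE 754 single precision: 1 sign, 8 exponent, 23 mantissa bits."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF    # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF        # 23 stored bits (+1 implicit)
    return sign, exponent, mantissa

# 1.0 is stored with a biased exponent of 127 and a zero mantissa
print(float32_fields(1.0))   # (0, 127, 0)
```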
PPS : tests have shown that even the best AD/DA converters that claim 24 bit performance never really achieve it. A 24 bit converter is very expensive to build and needs to be very accurate. The tests have shown that the very best converters usually achieve around 21 bit performance maximum. Still, that's about 126 dB of dynamics, which is still very nice.
PPPS : again, my rant about overcompression of mixes. Compression was actually invented to cope with the limits of the medium. A symphonic orchestra can easily have around 100 dB of dynamics. At the time, analog media like vinyl could roughly take about 50-60 dB, hence the need for compression. Nowadays, new technologies give us media with bigger and bigger headroom, so much that it even exceeds the dynamic range of our ears. Yet we are compressing the final mix more and more. Isn't it ironic? Just listen to some commercials on the radio or on TV. Some of the worst cases barely have 1 or 2 dB of dynamics (a faint breath sounds as loud as the explosion happening 30 seconds later)... Think about it. |
|
|
Digital Aura |
didn't mean to hijack your thread Chegster!
but... if what Dj Thy says is true, then why is the M-Audio Audiophile card called 24/96 and not 16/96 or 24/144?
quote: | All in all I would say, the difference between 16 bit and 24 bit is MUCH more obvious than the difference between 44.1 kHz and 96 kHz. |
This is the point I'm unclear about. Your argument seems to be in reference to 44.1 vs. 96 kHz ... NOT ... about 16 vs. 24 bit.
I fail to see the difference. |
|
|
Dj Thy |
Ok, second part of the explanation.
As for the 24/96 : the 96 has nothing to do with dynamics. The 96 here means 96 kHz, which is a sampling rate (a frequency).
Sampling rate and bit depth are two different things...
I told you the bit depth says how many steps you get to define a sample value.
I also told you that when you digitise a signal, you take discrete samples at discrete times. Well, how many times per second you take a sample, that is defined by the sampling rate.
A Hertz is one cycle per second. So when you see 96 kHz, it means you take 96000 samples per second. But why so many?
Well there are two guys named Shannon and Nyquist, who found out something particular. When you want to sample a signal, the sample rate needs to be AT LEAST twice the highest frequency contained in your original signal if you want to be able to reproduce it right. You probably already heard that humans can hear up to 20 kHz. So following the Nyquist theorem, if you want to sample a 20 kHz signal, you need a sample rate of at least 40 kHz. I won't explain it all in detail, cuz that would take way too much time and space (I'll post a link at the end of my reply). Just know that if you fail to do that, you get what is called aliasing.
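Here's a tiny Python demonstration of that point (the 40 kHz rate and 25 kHz tone are just illustration values): a tone above half the sample rate produces exactly the same sample values as a lower "alias" tone, so once it's been sampled the two are indistinguishable. That's aliasing.

```python
import math

fs = 40_000            # sample rate: just enough for 20 kHz, per Nyquist
f_high = 25_000        # above fs/2, so it will alias
f_alias = fs - f_high  # folds down to 15 kHz

for n in range(8):
    t = n / fs
    s_high = math.sin(2 * math.pi * f_high * t)
    s_alias = -math.sin(2 * math.pi * f_alias * t)  # same samples, inverted phase
    assert abs(s_high - s_alias) < 1e-9
print("a 25 kHz tone sampled at 40 kHz looks exactly like a 15 kHz tone")
```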
Ok, any normal person would ask, what the hell is the use of samplerates like 96 kHz? A very good question indeed. If you follow Nyquist, a 96 kHz samplerate allows you to sample signals up to 48 kHz accurately. We only hear up to 20 kHz, so what's up with that?
Well, that's mainly because of the cutoff filters that are required in a converter circuit. To make sure that no frequencies higher than half the sample rate get into the converter, they add an antialiasing filter (a lowpass filter). At the output stage of the circuit another lowpass is added (as the output of a digital circuit is usually a stepped pulse signal, and we need a smooth analog signal, the lowpass "rounds off" the steep pulses).
The problem now is, if you use, let's say, 44.1 kHz, you need a very steep filter to remove the unwanted frequencies (there's not much room between 20 kHz and the 22.05 kHz Nyquist limit). Steep filters usually induce lots of distortion (phase distortion mainly).
If we use higher sample rates, we can use gentler filters, which results in better sound. That's the main difference, not that we sample more accurate content (as a matter of fact, that same test I talked about in my previous reply showed that very high sample rates were usually less accurate, as circuits that switch that fast are very difficult to build accurately). Oversampling uses the same principle (in pure theory, 96 kHz sampling is oversampling too).
On another note, even if an "illegal" harmonic did get through, with high sample rates the aliasing products would land in a range that is inaudible for humans, which is another plus.
The biggest downside is storage space. A 96 kHz recording will take up more than double the space of a 44.1 kHz one at the same bit depth. As I said before, the difference between 44.1 and 96 is not so obvious (most people don't even hear it), while the bit depth difference certainly is. So unless you can afford the extra storage space (and CPU overhead), go for 24 bit, but stay at 44.1 or 48 kHz.
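To make the storage argument concrete, a rough Python estimate of uncompressed stereo PCM sizes (ignoring file headers):

```python
def bytes_per_minute(fs: int, bits: int, channels: int = 2) -> int:
    """Uncompressed PCM storage: fs * (bits / 8) * channels bytes every second."""
    return fs * (bits // 8) * channels * 60

mb = 1024 * 1024
# 16/44.1 comes out around 10 MB per minute, 24/96 around 33 MB per minute
print(f"16/44.1: {bytes_per_minute(44_100, 16) / mb:.1f} MB per minute")
print(f"24/96  : {bytes_per_minute(96_000, 24) / mb:.1f} MB per minute")
```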
Like I said, it will probably sound like Chinese to most of you, and that is because I needed to simplify a lot. Understanding the full workings of converters takes a lot of studying, and is usually beyond the scope of a "simple" producer. Still, it's always nice to know a little about the things you use every day, no?
Some links to clarify Nyquist and aliasing :
http://www.cs.ust.hk/faculty/layers...ms/nyquist.html (use the arrows, and you'll find audio examples of aliasing)
http://csunix1.lvc.edu/~snyder/2ch11.html |
|
|
Chris Creator |
CD standard is 16 bit 44.1 kHz. So that's what I use. |
|
|
Digital Aura |
cool . thx THY |
|
|
Dj Thy |
Oh yeah, forgot one important thing too. Dithering.
If you reduce bit depth (for example as the final step to bring a mix to CD standard and burn it on CD) you need to dither the signal. Only when reducing bit depth, remember that. Dithering a 16 bit signal when the end result is 16 bit will just result in more noise.
Why? Well, when you reduce bit depth without any specific processing, the digital data will simply be truncated (for example 24 -> 16, you simply lose the information in the 8 lowest bits). This results in a very audible, ugly type of noise.
Again, I'll simplify a lot : dithering is adding noise in a controlled manner. This noise decorrelates the quantization error, so low level detail is still captured when reducing bit depth. The end result is a slightly noisier signal, but the noise dither adds is much more pleasing to the ear than the noise resulting from not dithering.
Of course, over the years, dithering has been improved. A very important advancement in dither is noise shaping. This shifts the added noise towards frequencies where our ears are least sensitive, while keeping the same average energy.
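Here's a deliberately simplified Python toy (plain TPDF dither with made-up values, not any real mastering-grade algorithm) showing what truncation does to a quiet 24 bit signal versus dithering it down to 16 bit:

```python
import random

LSB = 256  # one 16-bit step, expressed in 24-bit units (2 ** 8)

def truncate(sample_24: int) -> int:
    """Drop the lowest 8 bits, as described above: 24 -> 16 bit."""
    return sample_24 >> 8

def dither_and_round(sample_24: int) -> int:
    """Add zero-mean TPDF dither (two uniforms of half an LSB), then round."""
    noise = random.randint(-LSB // 2, LSB // 2) + random.randint(-LSB // 2, LSB // 2)
    return (sample_24 + noise + LSB // 2) >> 8

random.seed(0)
quiet = 300  # a 24-bit value only just above one 16-bit step (256)
trunc = [truncate(quiet) for _ in range(1000)]
dith = [dither_and_round(quiet) for _ in range(1000)]
print("truncated mean:", sum(trunc) / 1000)  # always 1: the fine detail is gone
print("dithered mean :", sum(dith) / 1000)   # hovers near 300/256 ~ 1.17
```

The dithered version is noisier sample to sample, but on average it still carries the sub-LSB level information that truncation throws away.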
For a very good explanation, http://www.izotope.com/products/audio/ozone/guides.html
Izotope put up an excellent guide, with audio examples and all, and I suggest you read that.
PS : on another note, know that a 24 bit recording, being processed in 24 bits (or 32 float), and brought to 16 bit at the very end (with dithering), will still sound better (90% of the time) than the same stuff recorded at 16 bit, and kept that way.
PPS : when I said don't dither a signal that remains at the same bit depth, it's not always a fact. Most programs work in higher bit depths internally. Even if you use a 16 bit sample, as soon as you do some processing (that could even be as simple as moving a fader in the mixer window) the resolution will increase. It's very easy to see when you have a bitmeter like in Samplitude. Try doing some basic processing and you'll see the bitmeter indicate more than 16 bits are used. If you want to keep that extra info, dithering might be an option. The difficulty is that not all programs increase the bit depth on processing... |
|
|
paranoik0 |
quote: | Originally posted by Dj Thy
PPS : when I said don't dither a signal that remains at the same bit depth, it's not always a fact. Most programs work in higher bit depths internally. Even if you use a 16 bit sample, as soon as you do some processing (that could even be as simple as moving a fader in the mixer window) the resolution will increase. It's very easy to see when you have a bitmeter like in Samplitude. Try doing some basic processing and you'll see the bitmeter indicate more than 16 bits are used. If you want to keep that extra info, dithering might be an option. The difficulty is that not all programs increase the bit depth on processing... |
so, quick question, should we use dithering when exporting a track from Fruity at 16 bit? i always had it on but never knew what it was for |
|
|
hey cheggy |
Thy, you have WAY too much information in your head. An interesting read, cheers. |
|
|
ravan |
It would be interesting to see a list of which softsynths are optimized for 24 bit!
I mostly use 16, but that's just because pretty much all the samples and so on I have are 16 bit. |
|
|
|
|