32bit vs. 16bit
|
StephenWiley |
Am I the only one noticing a very palpable difference between the two in terms of quality? The 32bit sounds so much smoother and seems to bring out frequencies better. I was mastering a tune the other day and compared a 16bit render against the original 32bit, and the 32bit blew it away. Why in the world isn't 32bit a standard yet?! |
|
|
orTofønChiLd |
well i think recording a file from an external instrument at a higher bit depth means you're going to capture more. I'm curious about the answer to this question too - it should be a standard. |
|
|
StephenWiley |
I completely agree. With music being so competitive these days - from DJs, to producers, to the software engineers - I can't imagine how 32bit was overlooked.
Right now it's tough to do 32bit if you're using a lot of loops, as those are generally just 16bit, but if you can create a track with pure midi and vsts then I think you will notice a huge difference between 16bit and 32bit.
Anybody want to do this? Post a track they made in 16bit and in 32bit - you can even mp3 them. The 32bit is clearly better even when compressed. |
|
|
Subtle |
If you really wanna check this mp3 theory of yours, you should post a clip of an mp3 rendered from 16 bit vs. one rendered from 32 bit, or something like that.
That should clear it up pretty fast.
The Digiman can probably shed some light on this :p |
|
|
Beyer |
32bit audio is just not worth the bother. As long as you make sure you don't clip your master output, 16 bit will do just fine.
24bit vs. 16bit is a better discussion, but it has been debated to death. Search on gearslutz or other forums, and you'll have loads to read up on.
I use 16 bit for most things, and record 24 bit from my keyboard. Just for good measure. ;) |
|
|
orTofønChiLd |
another a/b test with waveform analysis :p
oh wait, thomas penton has his sample pack at 24 bit, yes i will go with that :rolleyes:
edit - wait, are we talking about recording at a higher bit depth or processing at a higher bit depth? |
|
|
DigiNut |
Well, actually, 32-bit is the standard. Almost every sequencer uses 32-bit floating point internally.
So you must be referring to one of two things:
a) Recording;
b) Final mixdowns.
Let me address (a) first. The Red Book audio standard is 16 bits, 44.1 kHz, and this was based on a very extensive study to determine what it would take to make recordings with effectively zero audible noise - and this includes a pretty good "safety margin". The noise floor with 16/44.1 is far lower than what any recording studio could ever hope to achieve in terms of ambient noise.
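If you want to sanity-check those numbers, the usual rule of thumb is about 6 dB of theoretical dynamic range per bit. A quick back-of-the-envelope Python sketch (figures are theoretical, not measured from any gear):

import math

for bits in (16, 24, 32):
    dr_db = 20 * math.log10(2 ** bits)
    print(f"{bits}-bit fixed point: ~{dr_db:.0f} dB theoretical dynamic range")

# 16-bit: ~96 dB, 24-bit: ~144 dB, 32-bit: ~193 dB

96 dB is already well below the ambient noise floor of any real room, which is the point of the Red Book margin.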
The standard doesn't really take into account the possibility of mixing 20 or 30 of these recordings together. If you take all of your synth and other tracks in a production, and bounce them all to 16-bit audio, it is definitely possible that you'll get some noise. That is why most producers actually do render intermediate audio as 32-bit float, which is the same as what the sequencer itself does.
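To make that concrete, here's a rough, hypothetical sketch - 30 synthetic sine "tracks" standing in for synth bounces, nobody's actual session - comparing a full-precision mix against one where every track was bounced to 16-bit first:

import numpy as np

rng = np.random.default_rng(0)
sr, n_tracks = 44100, 30
t = np.arange(sr) / sr

# 30 quiet sine "tracks" at random frequencies (stand-ins for synth bounces)
tracks = [0.03 * np.sin(2 * np.pi * rng.uniform(100, 5000) * t)
          for _ in range(n_tracks)]

mix_float = sum(tracks)                                       # mixed at full precision
mix_16 = sum(np.round(tr * 32767) / 32767 for tr in tracks)   # each bounced to 16-bit first

err = mix_16 - mix_float
print("accumulated quantization error: %.1f dBFS"
      % (20 * np.log10(np.sqrt(np.mean(err ** 2)))))

On this toy input the accumulated error lands roughly around -85 dBFS - low, but it grows with track count, which is exactly why intermediate bounces go to 32-bit float.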
So high-end production equipment mostly switched to 24-bit recording. Most ADCs and DACs are 24-bit. In theory, a 24-bit recording allows producers to mix 30, 40, 50 of these recorded tracks together and still have the pristine output quality of a typical 16-bit one-track recording of a band. In practice, again, the conversion noise is generally dwarfed by ambient noise, but I guess every little bit of noise reduction helps.
32-bit converters exist but they are pretty rare. And here's the kicker: a 32-bit converter is going to be 32-bit fixed point. When you convert that to 32-bit floating point in a sequencer, you are actually going to start losing fidelity. If you've ever seen a service bill from some rinky-dink web company stating that you owe $32.000000007 in charges, that is an example of what happens when you try to convert from fixed to floating point and then back again. So even if you could get your hands on a 32-bit ADC, it wouldn't end up much better than 24-bit. Sequencers would have to switch to doubles (64-bit float) in order to take advantage of this.
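The mantissa math is easy to check: a single-precision float carries 24 bits of precision, so a 24-bit sample round-trips exactly while a full 32-bit sample loses its low-order bits. A minimal numpy sketch:

import numpy as np

s24 = np.int32(2 ** 23 - 1)      # max positive 24-bit sample
s32 = np.int32(2 ** 31 - 101)    # a near-full-scale 32-bit sample

print(int(np.float32(s24)) - int(s24))   # 0: the 24-bit sample survives the round trip
print(int(np.float32(s32)) - int(s32))   # non-zero: the 32-bit sample's low bits are gone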
So on the recording end, to summarize: the current standard is 24-bit, not 16-bit, so using 16 bits as a reference point for comparison is flawed. But even then, I'd be surprised if you could really hear the difference unless you're mixing a huge number of tracks all at 16 bits.
Now onto (b) the final mixdown:
One of the things you need to understand is that DACs are fixed-point. You can't actually take a floating-point signal and convert it directly to analog. Even if you found a DAC that claimed to do it - and I've never heard of such a thing - it would still have to convert to fixed-point internally first, which would require many more bits of precision than the original floating-point signal to do accurately.
I'm willing to bet that you do hear a difference when you do your final mixdown in 32-bit float, but not for the reason you think. Your sound card/audio interface, in all likelihood, has a 24-bit DAC on it (or maybe even 16-bit if it's cheap). If your sound driver even allows you to play 32-bit float audio directly, then it's getting down-converted somewhere in the chain, either by the driver itself or by the interface internally. And it's probably using some cheap, fast dithering algorithm, or maybe even doing straight truncation.
So what you're actually hearing when you try to work with 32-bit float is not higher fidelity, but lower fidelity: a combination of conversion loss and quantization distortion. This is why we have dithering algorithms like UV22-HR - complicated algorithms that obviously can't prevent artifacts, but create the artifacts in just such a way that they're virtually impossible to hear.
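UV22-HR itself is proprietary, but plain TPDF dither - the textbook version of the same idea - shows what's going on. A minimal sketch (the to_16bit helper is made up for illustration):

import numpy as np

def to_16bit(x, dither=True):
    x = x * 32767.0
    if dither:
        # TPDF dither: two uniform noises summed, +/- 1 LSB peak
        x = x + np.random.uniform(-0.5, 0.5, x.shape) \
              + np.random.uniform(-0.5, 0.5, x.shape)
    return np.round(x).astype(np.int16)

t = np.arange(44100) / 44100.0
quiet = 0.0001 * np.sin(2 * np.pi * 1000 * t)   # ~-80 dBFS test tone

undithered = to_16bit(quiet, dither=False)  # error correlated with the signal
dithered   = to_16bit(quiet, dither=True)   # error traded for benign noise

On the undithered version the quantization error is correlated with the signal, which is what you hear as grit on quiet passages and fades; with dither it becomes a constant, far less objectionable noise floor.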
Even doing your mixdown in 24-bit fixed point is a bad idea (unless your destination is a mastering studio). It may sound better on your system, but you're going to get the same sort of crappy dithering/truncation if you encode it as MP3 (typically fed from 16-bit PCM) or put it onto a CD or vinyl that gets played on some much lower-fidelity consumer or club system.
Hope that answers your question. For mixdowns, stay away from 32 bits. Stick to 24 if it's headed to a studio, or 16 if your target is physical media. For recordings, they are likely happening at 24 bits at the hardware level, but are already getting converted into 32-bit float when you record from within the sequencer. |
|
|
orTofønChiLd |
yeah, there you go - that sums it all up |
|
|
DJ RANN |
Good points, but really, considering the amount of music out there and what format it is in, 16bit is THE standard, without question.
Even though most hosts operate at 32bit internally, what really matters for production is the actual project's bit depth.
There is also a big difference between "data" at 32bit and audio being recorded at 32bit (and yes, I know audio is a form of data once in the digital domain, but hopefully you get what I mean) - i.e. a 32bit OS is not the same as your audio recording/project being 32bit.
Again, even though ADCs/DACs are often 24bit, the point has to be made that the bit depth is only ever going to be as high (er... deep?) as the weakest link in the chain, so if your project is at 16bit but you're using a 24 bit DAC, there will be a difference, but only in what the converter adds during the conversion process (in terms of interpolation, artifacts etc).
On another note, recordings should be made at as high a bit depth as possible - if sample rate is resolution on the horizontal plane, then bit depth is resolution on the vertical plane. However, the extra data that a 32bit recording incurs is generally accepted not to be worth the small increase in quality, whereas 24bit is a smaller increase in data size with a relatively larger increase in quality. Also, bearing in mind that all links in the chain during production are 24bit (which is really not hard to do in a home studio with decent equipment), you would only suffer quality loss during the final bounce down to a lower standard such as CD.
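The size arithmetic behind that trade-off is simple - uncompressed PCM is sample rate times bytes per sample times channels - and a quick sketch makes the point:

# uncompressed PCM: sample_rate * (bits / 8) * channels bytes per second
for bits in (16, 24, 32):
    mb_per_min = 44100 * (bits / 8) * 2 * 60 / 1e6
    print(f"{bits}-bit stereo @ 44.1 kHz: ~{mb_per_min:.0f} MB per minute")

# ~11 MB, ~16 MB and ~21 MB per minute respectively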
For these reasons, I keep my projects at 24bit until rendering down. You're also forgetting one issue, which is playback quality during the mixdown and even recording stages: a higher bit depth will subjectively allow you to hear more, and therefore do a better job, hence there is a clear advantage over working at lower quality standards.
I don't want to get into the bit depth/sample rate conversion nightmare threads again, but from what I have experienced and used, it works and sounds best, all things considered, to keep the process at the highest possible quality (sample rate and bit depth) until the final conversion down to the target media. |
|
|
EddieZilker |
Great posts, Diginut and DJ RANN. |
|
|
DigiNut |
quote: | Originally posted by DJ RANN
For these reasons, I keep my projects at 24bit until rendering down. |
When you say you keep your projects at 24 bit, what exactly do you mean?
Sequencers store and/or calculate each sample of every signal or track in a single-precision float, which is 32 bits long. You don't have the option of changing this; it's just the way they work. You'd have to be talking about either the mixdown or default recording settings when you say you keep projects at 24 bit.
Assuming you're talking about recording (that's what the setting means in Cubase), it is completely fine to set that at 24 bits in most cases. But I'd caution people not to do that, or to override the setting, in two instances:
1) When bouncing the output of a virtual instrument. Unlike recording from hardware, i.e. via ADAT, VSTs and AUs are already putting out 32-bit words. If you bounce them to a 24-bit wave, you are losing fidelity and will probably get quantization distortion.
2) If you plan to do much processing on the audio stream itself (i.e. before any effects, filters, etc. are added to the track). Every offline process on a fixed-point digital signal is going to degrade the signal slightly. If you do 10 or 20 of them in a row - and some of us really do this much processing - you'll actually be able to hear the artifacts at some point.
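Here's a rough, hypothetical illustration of that cumulative degradation - each "offline process" below is just a tiny gain change followed by rewriting the file as 24-bit fixed point, repeated 20 times:

import numpy as np

t = np.arange(44100) / 44100.0
signal = 0.5 * np.sin(2 * np.pi * 440 * t)

processed = signal.copy()
for _ in range(20):
    processed = processed * 1.0001                        # a trivial "offline process"
    processed = np.round(processed * 2 ** 23) / 2 ** 23   # rewritten as 24-bit fixed point

reference = signal * 1.0001 ** 20    # the same gain applied once, in float
err = processed - reference
print("accumulated error: %.1f dBFS" % (20 * np.log10(np.sqrt(np.mean(err ** 2)))))

Each pass only adds a fraction of an LSB of error, but the errors stack; run the same experiment at 16-bit and it gets audible much sooner.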
The majority of producers probably never do either of these things, so leaving the default recording setting at 24 bits is good enough. |
|
|
wrzonance |
quote: | Originally posted by DigiNut
When I reply to a thread I reply intelligently |
Either you're pretending to be thoughtful, or you're the only reason I still lurk the production studio (and occasionally reply).
Seriously... if you get hit by a car, I'll delete the tranceaddict forums from my Opera browser's speed dial altogether.
Anyway, DNut is right... and in my opinion it's all about the mixdown. Audio-land is strange (in its technical aspects). While the wheat-chaff metaphor applies directly to talent, it functions in the opposite sense in terms of original mix to final product: you start with the highest possible quality and then chunk it down to CD quality (16bit/44.1kHz)... which eventually becomes an MP3 anyway.
Oh and btw, if you think 32bit sounds nice, try using a good VST at 32bit float / 96kHz! It's orgasmically crisp... but it just ends up sounding like an MP3 anyway.
One day audio standards will evolve, hopefully, but preliminary things like this lol/concern me: http://radar.oreilly.com/2009/03/th...d-of-music.html
Either way, I don't write/produce at 96kHz - 88.2kHz is the max for me (although if I had a nice Digidesign rack... 192kHz).
It all ends up at 16bit/44.1kHz anyway. But I'm glad you have experienced the bliss that is higher bit depths and sample rates.
---Adam Wrzeski |
|
|
|
|