A mastering question re: dither

1
I've often read that dither should only be applied once if possible.

However, it occurred to me that whenever you bounce down in any app, bit reduction is occurring. For example, in Logic you are going from 32-bit float to 24-bit fixed. Obviously I dither down to 16-bit later, after any mastering processes, but I've always ensured dither is switched off when making the initial bounce down to 24-bit.

So to summarise: my mixes are being truncated to 24-bit before later being dithered down to 16-bit. My brain tells me I should also be dithering to 24-bit here, since bit reduction is occurring, but a lot of what I've read in the past suggests this is bad and that multiple layers of dither 'muddy' the sound. Since 24-bit is plenty of resolution anyway, I've always figured it's best not to take the 'risk'.

Of course, what's going on at the arse end of a 24 bit signal is not likely to be making any particularly noticeable difference, so just using my ears here is pointless. I'm just interested in which is in theory the least destructive solution, regardless of whether it is actually detectable to the human ear: to dither twice (once down to 24 bit and once down to 16 bit) or to truncate down to 24 bit, dithering only once to 16 bit.
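
Just to make the comparison concrete, here's a minimal sketch in Python with numpy. The quantize() helper, the TPDF dither, and the -80 dBFS test tone are all my own assumptions for illustration, not anything taken from Logic or any other DAW:

import numpy as np

def quantize(x, bits, dither=True):
    # step size for a signed signal in [-1.0, 1.0)
    lsb = 2.0 ** -(bits - 1)
    if dither:
        # TPDF dither: two uniform randoms summed, +/-1 LSB peak
        tpdf = (np.random.uniform(-0.5, 0.5, x.size)
                + np.random.uniform(-0.5, 0.5, x.size)) * lsb
        return np.round((x + tpdf) / lsb) * lsb
    # plain truncation, as when bouncing with dither switched off
    return np.floor(x / lsb) * lsb

fs = 48000
t = np.arange(fs) / fs
sig = 1e-4 * np.sin(2 * np.pi * 997 * t)   # quiet tone, roughly -80 dBFS

# Path A: truncate to 24-bit, then dither once down to 16-bit
a = quantize(quantize(sig, 24, dither=False), 16)
# Path B: dither to 24-bit, then dither again down to 16-bit
b = quantize(quantize(sig, 24, dither=True), 16)

def error_db(y):
    return 20 * np.log10(np.sqrt(np.mean((y - sig) ** 2)))

print(f"path A error: {error_db(a):.2f} dBFS")
print(f"path B error: {error_db(b):.2f} dBFS")

If I've got the maths right, the two error figures should land within a small fraction of a dB of each other, because the 16-bit dither noise at roughly -96 dBFS swamps anything happening down at the 24-bit level.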

I understand a good deal about dither and bit depth, but there's a fair amount of woo out there on this topic so it would be much appreciated if anyone has a definitive answer on this.

A mastering question re: dither

2
I discussed dither not that long ago with a fairly knowledgeable engineer, who pointed out that at 24 bits dither becomes less necessary: the minimum noise level you could get out of a converter will sufficiently "dither" the signal itself, and less dither is required at higher bit depths anyway due to the finer resolution.
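
Back-of-envelope, that argument seems to hold. A rough sketch in plain Python (the -120 dBFS figure is my own assumption for a very good converter's analogue noise floor, not a measured spec):

import math

# One LSB at 24-bit relative to full scale (signed, so a step of 2^-23)
lsb_24_db = 20 * math.log10(2.0 ** -23)
print(f"24-bit LSB sits at about {lsb_24_db:.1f} dBFS")   # ~ -138.5 dBFS

# Assumed analogue noise floor of a very good A/D converter
converter_noise_db = -120.0
print(f"converter noise clears the LSB by {converter_noise_db - lsb_24_db:.1f} dB")

If the analogue noise is nearly 20 dB bigger than one LSB, the lowest bits are already being randomised much like TPDF dither would randomise them.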

I might be so bold as to suggest that dither from 32-bit to 24-bit is unnecessary, although I'd like to see proof as to whether this is true or not.

happyandbored wrote: I've often read that dither should only be applied once if possible.

True, or maybe "as few times as possible": every extra pass adds noise, so particularly at lower bit depths you lose S/N ratio.

Perhaps Bob Weston might have some thoughts on the subject if he's about.

A mastering question re: dither

3
At this point, all interfaces output (playback) at 24 bits. So even if you're recording at 8-bit or 32-bit float, you're playing back at 24-bit. You're forcing your CPU to do a buttload more work than it needs to. This is why I always suggest people record at 24 bits, regardless of the apparent functionality of 32-bit float.

Dither once, when you are converting to 16-bit for audio CD.
tmidgett wrote:
Steve is right.

Anyone who disagrees is wrong.

I'm not being sarcastic. I'm serious.

A mastering question re: dither

4
Jeremy wrote: You're forcing your CPU to do a buttload more work than it needs to.


A CPU isn't going to do any more work to process 32 bits per sample. Even a 386 had a 32-bit execution unit. Now most CPUs have lots of parallel EUs in 32-, 64- and 128-bit pipelines. The only time more bits == more work is when you have to serialise them over a wire.

A mastering question re: dither

6
benversluis wrote: If you haven't already, read "Mastering Audio" by Bob Katz. I had to read this for my last class and it covers dithering very well.

Yes, excellent book.

On reflection, dithering any time there is bit reduction makes the most sense, and I was just getting mixed up by (as opposed to in a dither over) some poorly written mastering articles I'd read back in my college years. Of course, multiple bit reductions should still be avoided where possible.

A mastering question re: dither

8
happyandbored wrote: I'm just interested in which is in theory the least destructive solution, regardless of whether it is actually detectable to the human ear: to dither twice (once down to 24 bit and once down to 16 bit) or to truncate down to 24 bit, dithering only once to 16 bit.

In the end, you should do what sounds best to your ears for the particular situation.

I have both dithered and truncated 32-bit processes down to 24-bit on different occasions. I'd say most of the time I dither it. It is a pretty small difference, but noticeable.

trevor sadler
mastering and road racing
mastermind productions
mastermind motorsports
milwaukee, wi.

A mastering question re: dither

9
The way I understand it is that 24-bit can capture dynamic information down to about 144 dB below full scale, which is beyond the noise floor of even the best signal chain. As Rodabod suggested, the noise floor of any 24-bit recording should essentially act in the same way as applying dither.
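
For what it's worth, the 144 dB figure falls straight out of the bit count, at roughly 6.02 dB per bit. A quick check in plain Python:

import math

# Dynamic range of an n-bit fixed-point word: 20 * log10(2^n),
# i.e. about 6.02 dB per bit
for bits in (16, 24):
    print(f"{bits} bits: {20 * math.log10(2.0 ** bits):.1f} dB")
# prints 96.3 dB for 16 bits and 144.5 dB for 24 bits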

Bob Katz actually has some great resources on his website including this page specifically on dithering.

http://www.digido.com/bob-katz/dither.html

He gives his own opinion on the matter at the bottom of that article, saying: "Dithering always sounds better than truncation without dither. But to avoid adding a veil to the sound, avoid cumulative dithering, in other words, multiple generations of any dither. Make sure that redithering to 24- or 16-bit is the one-time, final process in your project."

A mastering question re: dither

10
Dave Roy wrote: The way I understand it is that 24-bit can capture dynamic information down to about 144 dB below full scale, which is beyond the noise floor of even the best signal chain. As Rodabod suggested, the noise floor of any 24-bit recording should essentially act in the same way as applying dither.

Bob Katz actually has some great resources on his website including this page specifically on dithering: http://www.digido.com/bob-katz/dither.html

He gives his own opinion on the matter at the bottom of that article, saying: "Dithering always sounds better than truncation without dither. But to avoid adding a veil to the sound, avoid cumulative dithering, in other words, multiple generations of any dither. Make sure that redithering to 24- or 16-bit is the one-time, final process in your project."

This paragraph is a little contradictory though: he says dithering *always* sounds better than truncation (he makes no mention of which bit depth), implying you should dither whenever the bit depth is being reduced, but then he goes on to say that it should be a one-time process (at least that's one way of reading it). However, in the production process you are likely to have two stages where dither is necessary to avoid truncation: the first when dithering your mix down to a 24- or 32-bit premaster, and the second when dithering that premaster to its final 16-bit form.

My thinking now is that what he really means is to avoid cumulative and unnecessary dither where possible (i.e. avoid processing, dithering, processing again, then redithering), not that you should literally only ever dither once, because there are obviously times when you need to dither more than once, at *successively lower* bit depths.
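
That reading squares with a quick experiment (numpy again; the triple-pass chain is deliberately daft, just to make the accumulation visible):

import numpy as np

def dither_to_16(x):
    # one TPDF-dither-and-requantise pass at 16-bit
    lsb = 2.0 ** -15
    tpdf = (np.random.uniform(-0.5, 0.5, x.size)
            + np.random.uniform(-0.5, 0.5, x.size)) * lsb
    return np.round((x + tpdf) / lsb) * lsb

silence = np.zeros(48000)
once = dither_to_16(silence)
thrice = dither_to_16(dither_to_16(dither_to_16(silence)))

def noise_db(y):
    return 20 * np.log10(np.sqrt(np.mean(y ** 2)))

print(f"one pass:     {noise_db(once):.1f} dBFS")    # around -96 dBFS
print(f"three passes: {noise_db(thrice):.1f} dBFS")  # several dB worse

Each redither at the same depth stacks fresh noise on top of the last layer, which is presumably the "veil" Katz means; one dither per genuine bit reduction doesn't have that problem.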
