Sure, memory plays a role, but the DAW sends you doubles anyway (it's been a really long time since I used floats, and I'm not sure DAWs still call processReplacing with floats), and even then we are talking about 8192 doubles at most. Even if you do high-precision FFTs you rarely go above 32768 samples per block, and even in that case we are talking about 2MB of memory... CPUs are powerful but memory accesses aren't. As long as all the data is in registers or in cache things are fine, which is very often the case in audio. If the algorithm is memory "intensive" then using floats may help. The only case where it can make sense is optimising performance with vector units, since you can process 4 floats or 2 doubles at once, but IMHO CPUs are so powerful that it's simpler to keep everything at maximum resolution and not think about dithering at all (unless you are a bad person and simply truncate doubles to floats).
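Just to illustrate the vector-unit point, here's a rough sketch (not actual plugin code, assuming plain SSE2 on x86): a 128-bit register holds 4 floats or 2 doubles, so the same gain loop covers twice as many samples per instruction in single precision. For scale, an 8192-sample double buffer is only 64 KB, so it sits comfortably in cache anyway.

```cpp
// Minimal illustration of the SIMD width difference (assumes x86 with SSE2).
#include <immintrin.h>
#include <cstddef>

// Apply a gain to a float buffer: 4 samples per 128-bit multiply.
void gain_f32(float* buf, std::size_t n, float g) {
    const __m128 vg = _mm_set1_ps(g);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_storeu_ps(buf + i, _mm_mul_ps(_mm_loadu_ps(buf + i), vg));
    for (; i < n; ++i) buf[i] *= g;   // scalar tail
}

// Same loop on doubles: only 2 samples per 128-bit multiply.
void gain_f64(double* buf, std::size_t n, double g) {
    const __m128d vg = _mm_set1_pd(g);
    std::size_t i = 0;
    for (; i + 2 <= n; i += 2)
        _mm_storeu_pd(buf + i, _mm_mul_pd(_mm_loadu_pd(buf + i), vg));
    for (; i < n; ++i) buf[i] *= g;   // scalar tail
}
```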
Double-to-float conversion uses rounding by default on the FPU, not truncation. Float to double is lossless.
Converting a double to a float loses mantissa resolution, so it's lossy the first time (subsequent float → double → float round trips are exact).
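A quick throwaway test shows both behaviours (nothing DAW-specific, just the compiler's default conversion):

```cpp
#include <cstdio>

int main() {
    double d = 0.1;                    // not exactly representable in binary
    float  f = static_cast<float>(d);  // default conversion: round to nearest

    // f ends up slightly ABOVE d, which truncation (round toward zero) could
    // never produce -- the conversion rounds rather than truncates.
    std::printf("d           = %.17g\n", d);
    std::printf("f           = %.9g\n", f);
    std::printf("error d - f = %.3g\n", d - static_cast<double>(f)); // the lossy step

    // Float -> double is exact: widening and narrowing back gives the same float.
    double wide = static_cast<double>(f);
    std::printf("float->double lossless: %s\n",
                static_cast<float>(wide) == f ? "yes" : "no");
    return 0;
}
```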
As I see it, there is no reason not to transport the audio buffers in single precision. I mean, not so long ago there were 24-bit fixed-point systems.
Why would anyone dither a floating-point sample format, whose logarithmic "grid" has much more precision towards zero? A single-precision float has a total dynamic range of about 1528 dB (from 1.0 down to the smallest normal value it's less, say 760+ dB). Also, every normalized nonzero value has the highest (implicit) mantissa bit 23 set, so what is the point of adding noise at bit 0 or bit −1?
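If anyone wants to check those dB figures, it's just 20·log10 of the ratios of the standard <cfloat> constants (a small sketch; the exact numbers depend on whether you count denormals):

```cpp
#include <cfloat>
#include <cmath>
#include <cstdio>

int main() {
    // Full range of normalized single-precision values.
    std::printf("FLT_MAX / FLT_MIN : %.1f dB\n",
                20.0 * std::log10(FLT_MAX / FLT_MIN));    // ~1529 dB
    // From 1.0 down to the smallest normal value.
    std::printf("1.0 / FLT_MIN     : %.1f dB\n",
                20.0 * std::log10(1.0 / FLT_MIN));        // ~758 dB
    // Mantissa precision relative to full scale (the 24-bit "grid" step).
    std::printf("1.0 / FLT_EPSILON : %.1f dB\n",
                20.0 * std::log10(1.0 / FLT_EPSILON));    // ~138 dB
    return 0;
}
```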
Regarding the rest I agree, but honestly, since I've been using only doubles for so long, I didn't really care about or remember all that technical stuff.
Saverio
Statistics: Posted by HoRNet — Wed Jul 24, 2024 10:04 am