Posted to rec.audio.high-end
From: Sebastian Kaliszewski
Subject: Compression vs High-Res Audio

glenbadd wrote:
On Oct 19, 1:11 am, Sebastian Kaliszewski wrote:
Well, the software to perform such a test is AFAIR available for free. One
needs a 24/192 recording, reduce it down to 16/44.1, upsample back to
24/192[*] (the information lost will not be restored) and compare the
original with the processed audio using some ABX software.


OK, I'm able to try that. Should I expect all the content above 22050 Hz
to be missing from the upsampled 24/192?


Yes, after upsampling from 16/44.1 back to 24/192 no original high
frequency information would be there. With a good[*] upsampling algorithm
there would be very little newly introduced information (i.e. noise).
With a bad algorithm there could be a lot of (introduced) noise up there.
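
Here's a rough Python sketch of that round trip (only an illustration --
white noise stands in for a real 24/192 recording, and scipy's
resample_poly stands in for whatever resampler you actually use): down to
16/44.1, back up to 24/192, then measure what's left above 22.05 kHz.

import numpy as np
from scipy.signal import resample_poly

fs_hi = 192_000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs_hi * 2) * 0.1     # 2 s of broadband "music"

# 192k -> 44.1k -> 192k; resample_poly applies a polyphase FIR low-pass,
# so content above ~22.05 kHz gets removed, not restored.
up, down = 147, 640                          # 44100/192000 with the gcd removed
y_lo = resample_poly(x, up, down)
y_lo = np.round(y_lo * 32767) / 32767        # crude 16-bit rounding (no dither)
y_hi = resample_poly(y_lo, down, up)

def band_energy(sig, fs, f_lo):
    # total spectral energy above f_lo
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[freqs > f_lo].sum()

print("HF energy, original :", band_energy(x, fs_hi, 22_050))
print("HF energy, processed:", band_energy(y_hi, fs_hi, 22_050))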

And below 22050 Hz the waveforms and spectra should appear exactly the
same? (compared to the 24/192 original)


Very close, but not exactly the same, as filtering at the cutoff is
performed and it's not a 100% brickwall[**]. And with the difference that
low-level (below 16-bit) information is lost.
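
A small follow-on check (same arrays as in the sketch above, again just
an illustration): low-pass both signals to the audible band and look at
the size of the residual, which comes from the transition-band filtering
plus whatever fell below the 16-bit floor.

from scipy.signal import butter, sosfilt

def passband_residual_db(original, processed, fs=192_000, cutoff=20_000):
    # low-pass both signals to the 0-20 kHz band, then compare the RMS of
    # the difference against the RMS of the original
    sos = butter(8, cutoff, fs=fs, output='sos')
    diff = sosfilt(sos, original - processed)
    ref = sosfilt(sos, original)
    return 20 * np.log10(np.sqrt(np.mean(diff ** 2)) /
                         np.sqrt(np.mean(ref ** 2)))

print(passband_residual_db(x, y_hi))         # x, y_hi from the sketch above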

Don't worry -- most of today's DACs upsample the signal -- they do so
because that's a better way to do it, as analog reconstruction filtering
(on the analog signal side of the DAC) is much simpler when the band for
the filter slope is 4 times wider than the passband instead of being just
1/10 the width of the passband.
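
To put rough numbers on that (my own back-of-the-envelope assumptions: a
20 kHz passband, and the analog filter only has to be down by the first
spectral image that survives the digital interpolation filter, i.e. at
fs_out - 22.05 kHz):

passband = 20_000.0
nyquist_441 = 22_050.0

def transition_width(fs_out):
    # width of the analog reconstruction filter's transition band, in Hz
    first_image = fs_out - nyquist_441
    return first_image - passband

for fs_out in (44_100, 176_400):
    w = transition_width(fs_out)
    print(f"{fs_out/1000:6.1f} kHz output: {w/1000:6.2f} kHz transition"
          f" band = {w/passband:.1f}x the passband")

# At 44.1 kHz output the analog filter gets only ~2 kHz (about 1/10 of
# the passband); with 4x oversampling it gets well over 100 kHz, so a
# gentle analog slope is enough.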

What happens to the original 24-bit information that is below the
16-bit dynamic range of 16/44.1?


Generally speaking it is lost. What one could do is to use noise shaping
while reducing bit depth; then, in the most important band (0.5-6 kHz),
dynamic range could be improved somewhat (about 10 dB) at the cost of
reduced dynamic range at the extremes of the band.
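
A minimal sketch of that idea: plain first-order error-feedback noise
shaping with TPDF dither while dropping to 16 bits. Real-world shapers
use higher-order, psychoacoustically weighted filters to buy the ~10 dB
in the 0.5-6 kHz region; this only shows the basic mechanism, and the
function name is mine, not from any particular tool.

def shape_to_16bit(x, seed=0):
    # quantize float samples in [-1, 1] to 16-bit steps with first-order
    # error-feedback noise shaping and TPDF dither
    rng = np.random.default_rng(seed)
    lsb = 1.0 / 32768.0
    dither = (rng.random(len(x)) - rng.random(len(x))) * lsb
    out = np.empty_like(x)
    err = 0.0
    for n, sample in enumerate(x):
        v = sample - err + dither[n]     # subtract the previous error
        q = np.round(v * 32767.0) / 32767.0
        err = q - v                      # feed the new error back
        out[n] = q
    return out

y16 = shape_to_16bit(x)                  # x from the earlier sketch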


The whole point of the exercise is to determine if that information
loss is audible. While you're doing that you could also try experiments
with intermediate cases, i.e. 24/44.1 and 16/192 (to distinguish bit
depth effects vs sample rate effects).
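
Sketching those two intermediate cases with the pieces from the snippets
above (again just an illustration of what to feed the ABX comparator):

from scipy.signal import resample_poly

# "24/44.1": only the sample-rate conversion, no bit-depth reduction
case_24_441 = resample_poly(resample_poly(x, 147, 640), 640, 147)
# "16/192": only the bit-depth reduction, sample rate untouched
case_16_192 = shape_to_16bit(x)
# ABX each of these (and the full 16/44.1 round trip, y_hi) against the
# untouched original x.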

rgds
\SK

[*] - It's possible to devise an upsampling algorithm which adds specially
crafted data above the cutoff frequency -- similar to what some
photo-editing software does while zooming. But this data is still a
distortion; it's only designed to please people perceiving it (i.e.
"eugraphic" in the case of photos or euphonic in the case of audio
distortion). It's not the original data, it's just a fake, a guesstimate
of how that original data could more-or-less (rather less than more)
look. The original data *is* still *lost*.
[**] - It's theoretically possible to perform an exact 100% brickwall in
the digital domain, but it's costly and that cost highly depends on the
exact proportion of the starting and ending frequencies. In the case of a
24/192 to 16/44.1 reduction the memory needed is 147 times (+ some for
calculation data) the memory needed for the original 24/192 data.
For a 10 s stereo piece at 24/192 there are about 100 billion operations
to perform and the memory requirement is about 2.2 GB.
For a 15 min track the requirements are two orders of magnitude higher
(about 10 trillion operations on roughly a 200 GB memory set -- i.e.
unfeasible on today's computers available for general use).
Yet in the case of 24/192 to 16/48 it's much better, as the memory needed
is just 4 times, not 147 times, more (as I said -- it strongly depends on
the exact proportion of the sampling frequencies involved).
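
A quick check of where the 147 and the 4 come from -- the ratio between
the two sample rates with the common factor removed (the operation and
memory estimates above are a separate matter; this only reproduces the
ratios):

from fractions import Fraction

for target in (44_100, 48_000):
    r = Fraction(192_000, target)
    print(f"192000 -> {target}: ratio {r.numerator}/{r.denominator}")

# 192000 -> 44100 reduces to 640/147 (hence the awkward factor of 147);
# 192000 -> 48000 reduces to 4/1 (a simple 4:1 decimation).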

--
"Never underestimate the power of human stupidity" -- L. Lang
--
http://www.tajga.org -- (some photos from my travels)