Posted to rec.audio.high-end
From: Arny Krueger
Subject: Can mp3 quality be improved?

"Doug McDonald" wrote in message
...
On 12/13/2011 1:20 PM, Dick Pierce wrote:


Strictly speaking, techniques such as dBx and Dolby A, Dolby
B and such are all data-reduction compression techniques.
Their purpose is to attempt to fit as much of the "important"
data as possible into a narrowed-bandwidth channel, be it a transmission
channel or a cassette tape. They all work on the same principle:
they (physically) discard information which, in the eyes of the
designer, is deemed "insignificant."


I disagree with that. If the ideas behind any of those three systems work
perfectly, as they are supposed to, then they do not discard information.


I agree with the idea that these methodologies discard information that is
less audible in order to better preserve information that is more audible.

Their basic principle is as simple as the old adage that "There is no such
thing as a free lunch." In fact, Dolby A, B, and C do sacrifice elements of
technical accuracy that are not so audible, such as gain tracking, in order to
obtain improved noise performance where it is audible.
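
To put a rough number on the gain-tracking cost (a back-of-the-envelope sketch of
mine, not a measurement of any real unit): a 2:1 expander doubles, in dB, whatever
level error the channel introduces, so even a 1 dB playback calibration error comes
back out as a 2 dB error.

def encode_db(level_db):
    return level_db / 2.0           # dBx-style 2:1 compression, working in dB

def decode_db(level_db):
    return level_db * 2.0           # matching 1:2 expansion

cal_error_db = 1.0                  # assumed playback-level calibration error
for input_db in (-60.0, -30.0, 0.0):
    output_db = decode_db(encode_db(input_db) + cal_error_db)
    print(f"in {input_db:6.1f} dB -> out {output_db:6.1f} dB "
          f"({output_db - input_db:+.1f} dB gain error)")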

The recording media they are used with do of course cover up
information with noise, but those three systems do in fact reduce
the amount lost.


They reduce the loss of audible accuracy by sacrificing forms of accuracy
that are less audible.

By working perfectly I mean that the encode and decode cycles
are the exact inverse of each other.


That is just it. None of the Dolby systems offer perfectly accurate encode
and decode cycles. Pick the right technical and audible test material and
they fall flat on their pretty little faces.

Consider the simplest, dBx. It (the wideband one) is a simple
volume-changing scheme, with the encode and decode systems being
feedback systems that are, if no sounds are out of the frequency range
of the recording medium, exact inverses.


It doesn't happen that way on the test bench.
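
For anyone following along at home, here is a minimal sketch (my own, in Python,
assuming an idealized instantaneous level detector) of the wideband 2:1 compander
being described. With a clean channel the decode really is the exact inverse of the
encode; add channel noise and the recovered noise floor rises and falls with the
program level. The real-world detectors and their time constants, discussed below,
are what break the "exact inverse" part.

import numpy as np

def envelope(x, eps=1e-12):
    return np.maximum(np.abs(x), eps)           # idealized instantaneous detector

def encode(x):
    return np.sign(x) * np.sqrt(envelope(x))    # 2:1 compression of level in dB

def decode(y):
    return np.sign(y) * envelope(y) ** 2        # 1:2 expansion, the exact inverse

fs = 48000
t = np.arange(fs) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t) * np.linspace(1.0, 0.01, fs)  # fading tone

# Noiseless channel: the round trip is numerically exact.
print("max round-trip error:", np.max(np.abs(decode(encode(x)) - x)))

# Noisy channel: the hiss gets expanded along with the signal, so the recovered
# noise is larger where the program is loud and smaller where it is quiet.
noise = 1e-3 * np.random.default_rng(0).standard_normal(fs)
out = decode(encode(x) + noise)
print("error near full level:", np.max(np.abs(out[:1000] - x[:1000])))
print("error near low level :", np.max(np.abs(out[-1000:] - x[-1000:])))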

Working properly, so that nothing between the encoder and decoder
overloads, the only thing that happens is that the noise floor at the
output goes up and down. If the dynamic range of the transmission medium is
larger than half the dynamic range of the input signal, the output should be
essentially an exact copy of the input. If the transmission system is
noisier than that, then the output will, at low levels, still be a much better
copy of the input than if the encode-decode cycle were not used.
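
The "half the dynamic range" point is simple dB arithmetic; here is a rough sketch
with assumed numbers (mine, not Doug's):

# 2:1 encoding fits the program into half as many dB on the medium, and
# 1:2 decoding doubles whatever signal-to-noise distance the medium preserves.
input_range_db   = 90.0                       # assumed dynamic range of the program
encoded_range_db = input_range_db / 2.0       # only 45 dB needed on the medium

medium_range_db = 60.0                        # assumed medium S/N, better than 45 dB
print(2.0 * min(encoded_range_db, medium_range_db))   # 90 dB: essentially an exact copy

medium_range_db = 40.0                        # a medium noisier than the encoded signal needs
print(2.0 * min(encoded_range_db, medium_range_db))   # 80 dB: short of 90, but far better than 40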


There are time constants in the encode and decode process that cause perfect
transient reproduction to suffer. Furthermore, the recording medium is
itself highly nonlinear, and in general distortion rises rapidly as levels
increase. Therefore, any effort to avoid noise by raising recording levels
adds more nonlinear distortion. The thing is that the increase in nonlinear
distortion is usually less objectionable to the ear than the noise.
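
And here is a sketch of the time-constant problem itself (again my own illustration,
with made-up attack and release times, not dBx's or Dolby's actual values): replace
the instantaneous detector above with a smoothed envelope follower and the
encode/decode pair is no longer an exact inverse on a tone burst.

import numpy as np

fs = 48000

def smoothed_envelope(x, attack_ms=1.0, release_ms=100.0):
    # One-pole follower: fast attack, slow release (assumed values).
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.empty_like(x)
    level = 1e-3
    for i, v in enumerate(np.abs(x)):
        a = a_att if v > level else a_rel
        level = a * level + (1.0 - a) * v
        env[i] = level
    return np.maximum(env, 1e-3)                # cap the low-level boost at about 30 dB

def encode(x):
    return x / np.sqrt(smoothed_envelope(x))    # 2:1 compression steered by a lagging envelope

def decode(y):
    return y * smoothed_envelope(y)             # expansion steered by the decoder's own estimate

t = np.arange(fs // 4) / fs
burst = np.where((t > 0.05) & (t < 0.15), np.sin(2 * np.pi * 1000 * t), 0.0)

out = decode(encode(burst))
print("worst-case round-trip error:", np.abs(out - burst).max())   # clearly nonzero at the transients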

In practice, of course, none of these are ever truly perfectly
tuned, and effects other than noise-level changes will be there to some
degree. If badly mistuned, those effects can be huge.


Even if "perfectly tuned," there are a number of inherent sources of
inaccuracy.

Look at it this way: there is Dolby A and there is Dolby B. Does Dolby A
have any audible advantages over Dolby B? Of course it does, admittedly at a
cost in added equipment complexity and more demanding setup.

Therefore, Dolby B fails to provide encode/decode performance as accurate as
Dolby A's, and we have falsified any claim that the encode and decode cycles
of Dolby B are just as exact an inverse of each other as are the encode and
decode cycles of Dolby A. Your argument is falsified by the very fact that
both Dolby A and Dolby B exist!