Posted to rec.audio.high-end
From: Arny Krueger
Subject: Compression vs High-Res Audio

"Audio Empire" wrote in message
...
One thing that's consistent with the "Everything-Sounds-The-Same" club is
the
notion that the Redbook CD standard (16-bit/44.1 KHz sampling rate) is so
good that going to 24-bits and either 96 KHz or 192 KHz sampling rate (or
SACD) makes no audible difference in music recordings.


One learns this fact if one does his homework well.

The flip side of this rather incredible assertion (and just as incredible itself) is the claim, by many of these same people, that MP3, AAC and other lossy compression schemes are, at the higher bit-rates, totally benign and invisible and that one cannot hear any compression artifacts.


One also learns this fact if one does his homework well.

One who disagrees strongly with both of these views, apparently, is "legendary" producer/designer George Massenburg (Frank Sinatra, Linda Ronstadt, Earth, Wind & Fire, etc.).


At a presentation he gave at the Audio Engineering Society Convention held in London earlier this year, Massenburg wondered why, with bandwidth so plentiful and storage so cheap, people still sell compressed music online.


The reason is that bandwidth isn't all that plentiful in the real world, and storage is still limited.

I'm getting far worse average bandwidth from Comcast today than I got when I first signed up over a decade ago. In the old days there were hardly any people using Comcast's service (actually it was their partially-owned subsidiary @Home in those days, but Comcast eventually forced @Home out of business and then bought it cheap).

I'm also trying to do far more ambitious things like download A/V files at
DVD-video quality.

For my recent backwoods camping trip I decided to replace my portable CD player with a Sansa Clip+. It only has 2 GB of built-in storage and I didn't have time to get a 16 GB micro SDHC card to expand it. I had about 20 hours of spoken word lectures and about 300 songs I wanted to have for my listening pleasure and erudition when perched high on the cliffs overlooking Lake Superior by Orphan Lake. What's a boy to do?
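Here's a rough back-of-the-envelope sketch of why lossy compression was the only workable answer there. The bitrates and the 4-minute average song length are my own assumptions, nothing more:

    # Rough storage estimate: 20 hours of spoken word plus 300 songs
    # on a 2 GB player. Bitrates and song length are assumed, not measured.
    spoken_hours = 20
    spoken_kbps = 64          # assumed: speech-quality lossy encode
    songs = 300
    song_minutes = 4          # assumed average track length
    music_kbps = 128          # assumed: stereo music MP3

    def size_mb(seconds, kbps):
        # megabytes for a given duration (s) and bitrate (kbit/s)
        return seconds * kbps * 1000 / 8 / 1e6

    spoken_mb = size_mb(spoken_hours * 3600, spoken_kbps)
    music_mb = size_mb(songs * song_minutes * 60, music_kbps)
    cd_mb = size_mb(songs * song_minutes * 60, 1411)   # 16/44.1 stereo PCM

    print(f"spoken word at 64k: {spoken_mb:.0f} MB")   # ~576 MB
    print(f"music at 128k MP3:  {music_mb:.0f} MB")    # ~1152 MB
    print(f"music as CD PCM:    {cd_mb:.0f} MB")       # ~12699 MB

About 1.7 GB compressed, versus roughly 12.7 GB for the music alone as CD-quality PCM. The 2 GB Clip+ handles the first and laughs at the second.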


"These systems (compressed music formats) take something essential from
the
music, and lop it off. With so much bandwidth and memory now available,
the
question is not how to make a better Codec, but why we are bothering to
use
codecs at all..."


Massenburg seems to have a number of stories to tell on this topic, and not all of them are the same. The bottom line is what Massenburg can actually show he hears in a proper bias-controlled, statistically significant listening test. At times he has seemed clear-headed enough to admit that, in that context, 24/96 doesn't amount to much.
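For the record, "statistically significant" isn't a matter of opinion. Here's a minimal sketch of how an ABX run is scored, plain one-sided binomial arithmetic, nothing specific to whatever test Massenburg may or may not have sat for:

    from math import comb

    def abx_p_value(correct, trials):
        # chance of scoring at least `correct` out of `trials` by pure
        # guessing (one-sided binomial, p = 0.5 per trial)
        return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

    print(abx_p_value(12, 16))   # ~0.038 -- hard to dismiss as luck
    print(abx_p_value(9, 16))    # ~0.40  -- indistinguishable from coin flips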

In his presentation, Massenburg showed where he took 24-bit/96 kHz recordings of Phil Collins and Diana Krall and ran them through different codecs. He used MP3 at 128 kbps and AAC at 256 kbps and showed the results on the screen. These graphics showed how the compression/expansion cycle destroyed the dynamic range of the original recording.


Well, he must have screwed something up, or was doing more than he said he did. Or maybe you got some details wrong. I say that because comparing MP3 at 128 to AAC at 256 is not an interesting comparison. The MPEG group knew a decade ago, and showed by their work, that AAC makes more efficient use of bandwidth, so the *interesting comparison* is MP3 at 256 against AAC at 128. You must have your story flipped around.
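For anyone who wants to run the interesting version of the comparison at home, something along these lines would produce the two encodes. This assumes an ffmpeg build with libmp3lame, and the source file name is hypothetical:

    import subprocess

    src = "master_24_96.wav"   # hypothetical source file name

    # MP3 at the *higher* rate...
    subprocess.run(["ffmpeg", "-i", src, "-c:a", "libmp3lame", "-b:a", "256k",
                    "mp3_256.mp3"], check=True)
    # ...against AAC at the *lower* rate
    subprocess.run(["ffmpeg", "-i", src, "-c:a", "aac", "-b:a", "128k",
                    "aac_128.m4a"], check=True)

Then level-match the decoded files and compare them blind, not on a screen.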

That all said, nobody who knows what's going on uses 128k MP3 as a reference format. 192k or 320k is more like it. So anybody who says that MP3 at 128 has slight to noticeable audible effects, depending on the program material, is spouting very old news. I know a roomful of people whose patience would be tried by comments like those you are reporting. Given that Massenburg usually at least tries to be interesting, I'm going to say that he probably said something else.


"These are standard systems and they are not good enough for us to use. By
coding the hell out of the music, and slashing the sound, we are missing a
market."


That's true if your reference standard for music *is* 128k MP3. Of course that isn't currently the case.

Even in 2003, official Apple documents said that 160k was their standard for MP3:

http://support.apple.com/kb/TA27396?viewlocale=en_US

Massenburg then used a demonstration to drive his point home. He electronically subtracted the MP3-compressed music from the original 24-bit/96 kHz recording and then played ONLY the difference signal, which consisted solely of the information lost by the compression.


Only complete idiots use signal subtraction as a standard for audible changes. The problem here is that there are a lot of well-known idiots in this world who have more talking space than technical knowledge to back it up. The problem with subtraction is that even minor, easily demonstrated, and totally inaudible amounts of phase shift can lead to massive difference signals.
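A toy example makes the point: shift a 1 kHz tone by 50 microseconds, an amount nobody hears as a change by itself, and subtraction already reports a huge "difference". The numbers are mine, purely for illustration:

    import numpy as np

    fs = 96000                      # sample rate, Hz
    t = np.arange(fs) / fs          # one second of samples
    f = 1000.0                      # 1 kHz test tone

    original = np.sin(2 * np.pi * f * t)
    shifted = np.sin(2 * np.pi * f * (t - 50e-6))   # 50 us shift, ~18 degrees at 1 kHz

    def rms(x):
        return np.sqrt(np.mean(x ** 2))

    residual = original - shifted
    # prints roughly 31% -- a "massive" difference signal from a time shift
    # that changes nothing anyone can hear about the tone itself
    print(f"residual = {100 * rms(residual) / rms(original):.1f}% of the original")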

"These are distortion levels of 15 � 20 percent! ", he said as he played
the
difference signal for all to hear. The distortion amazed everyone in
attendance because it was a grotesquely, but very recognizable version of
the
original recoding!


This is so bad that it hurts my head when I read it. If you mismatch the amplitude of two signals by 1 dB, the difference signal is on the order of 10%. Yet neither signal need have any added nonlinear distortion at all; you just got the levels a bit wrong. And this is aside from the inaudible phase shift issue that I raised above. So now *the great man* would appear to be talking trash on two levels. Ouch!
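The level-mismatch arithmetic is even simpler than the phase-shift sketch above. Subtract two otherwise identical, distortion-free signals that differ only in level and the residual is 10^(dB/20) - 1 of the original:

    # residual left when subtracting two otherwise identical signals
    # that differ only in level -- no nonlinear distortion anywhere
    for db in (0.1, 0.5, 1.0):
        residual = 10 ** (db / 20) - 1
        print(f"{db:4.1f} dB mismatch -> {100 * residual:.1f}% difference signal")
    # 0.1 dB -> ~1.2%, 0.5 dB -> ~5.9%, 1.0 dB -> ~12.2%

Roughly 12% for a 1 dB error, the figure I was pointing at above, with not a trace of actual distortion anywhere in either signal.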

I gotta stop; this sort of gross technical ignorance in high places makes my head hurt. Hopefully a verbatim report would be more reasonable.