watch king
Comments about Blind Testing

First, there seems (from the emails I've received directly) to be some
misunderstanding about my statements concerning blind testing. I
think blind testing is the best way to determine whether one audio
product is superior in circumstances where products either measure
identically or where their various distortions and specifications
differ in ways that would not immediately demonstrate the superiority
of one item or another. The only requirements are that the test be
designed to be equal for each item tested, that the listeners not
know, except by listening, which item is in use at any moment, that
the test be repeatable, and that the order of the items change at
random so that (for example) if one item goes 1st, 3rd, 6th and 8th
in the first round of tests, then during the next round with the same
program material (and everything else) that same item goes 2nd, 4th,
5th and 7th.
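The ordering requirement above can be sketched as a small script. This is a minimal illustration I'm adding (the function name and the two-item, eight-trial example are my assumptions, not part of the original post):

```python
import random

def blind_test_orders(items, trials, rounds):
    """For each round, build a random sequence of `trials` presentations
    in which every item appears equally often, reshuffled so the slot
    pattern changes from round to round (as the post describes)."""
    assert trials % len(items) == 0, "trials must divide evenly among items"
    base = items * (trials // len(items))   # equal exposure for each item
    orders, prev = [], None
    for _ in range(rounds):
        order = base[:]
        random.shuffle(order)
        while order == prev:                # force a different slot pattern
            random.shuffle(order)
        orders.append(order)
        prev = order
    return orders

# Hypothetical example: two items (A, B), 8 trials per round, 3 rounds
for r in blind_test_orders(["A", "B"], 8, 3):
    print(r)
```

Each round keeps exposure balanced while the trial slots move, which is the point of the randomization: a listener can't learn which slot an item occupies.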

Now a comment on the idea of the "trained listener" that is being
bandied about. What does this mean? In my experience professional
recording engineers are trained listeners. They really aren't any
better at listening tests than other listeners, and in some
circumstances they can have an incredible bias that makes them
useless as test subjects. Once a recording engineer has played a
piece of music he is intimately familiar with through his own studio
monitors, his preferences whenever that recording is played afterward
are worthless. I've been to the former JVC Cutting Studio many times.
The place was full of well-trained professional recording engineers,
and they listened to hundreds of different master recordings (both
digital and analog) through their own mastering monitors. They would
be poor listening test subjects. It would be no different for
engineers who listened mostly on UREI Time Aligns (Altec 604s), or
any number of other JBL, Tannoy, Fostex, Yamaha, Westlake or other
studio monitors. These listeners simply have such huge built-in
listening biases that it would be difficult for them to be objective
about the "total quality" of one audio product vs. another in a blind
listening test unless their own loudspeakers and facilities were
used, and those loudspeakers and facilities might never be able to
reveal a variety of audio characteristics.

I have worked with dozens of other professionally trained listeners
and engineers who also had enormous biases about what constitutes
natural sound. Even Disney Imagineers may be highly biased, even
though their goal is supposed to be designing sound systems (or
purchasing premade systems) based on the best sonic quality or most
natural sound (along with reliability, sufficient output to do the
job, and other non-sonic considerations). And all Imagineers are
professionally trained listeners. It's true that some Imagineers also
research the best of the esoteric audio world, but not many. Still,
I'd say Disney Imagineers are generally more interested in any device
that could enhance sonic quality. But Imagineers would be no better
as test listeners than any novice with a genuinely broad interest in
music, spoken word, natural sounds or cinema.

So does the reference to "trained listeners" imply some other
expertise these listeners need that would make them better than, or
different from, the normal recording engineer's skill at listening to
specific spectra, or that goes beyond the trained engineer's long
years of experiencing the sound of real instruments and voices? If
so, what is this quality that makes these "trained listeners" better
than professional audio engineers? I've never seen it (or heard it).
Even when I was selling esoteric audio at retail I never met a
"golden ear" who had anything over the trained professional recording
engineers I met later in my career.

Even the most highly "respected" golden-eared listeners of audiophile
equipment usually only seemed to have their own specific biases,
supported by their own little group of followers. These "audiophiles"
could usually argue well, and were superb at justifying why the
faults of the products they labeled superior were less important than
the faults of the products they labeled inferior. I met almost no
audiophiles whose sonic judgement I would trust more than that of
many trained professional recording engineers. The best of each group
could be very focused listeners, very fussy listeners, or even what I
like to call "savant" listeners. Some have big egos and some don't.
But there is not one trained listener I've met who would be the
"best" listening test subject. Some trained listeners can determine
faster than most people why one product sounds better than another,
but the decisions they make usually parallel those made by untrained
listeners, as long as the test is properly designed to be fair and
meaningful.

I've often discussed audio in the halls of CES with designers of amps
like Krell, B&K, Bryston, Carver, Threshold, Conrad Johnson, Quad and
Lazarus, and not only have these people been willing to lend me their
amps for use in my displays, many other designers have come into my
displays to listen and discuss audio. Amp designers listen a lot
because they want to be sure their products don't have "bad
reactions" to all the different loudspeakers on the market. These are
electronics engineers with thousands of hours of focused listening.
Perhaps they are self-trained, but they are often very skilled at
hearing tiny differences between products. Again, I've noticed that
even this group of trained listeners might hear quality differences
faster, but they don't hear them any better. So I don't understand
this emphasis on trained listeners as some kind of especially capable
listener who can hear things other listeners can't. In fact it
usually works the other way around.

Trained listeners who don't get a constant diet of nearfield
experience with live music often tend to be more "in the box" than
open-minded. Sometimes it is the sad result of ego forcing them to
always choose a product whose "sound" they recognize or have publicly
declared superior. If not for gigantic egos unwilling to be
open-minded, there would likely be only 25 or 30 loudspeaker makers
instead of hundreds. Also, if it were certain that any particular
listener was better than the others, the best-financed companies
would hire them to compare that company's products to everything else
on the market. But this isn't the case, no matter how many
"golden-eared" listeners claim they hear better than anyone else.

In fact, normal listeners are just as good at determining which audio
product sounds better than others (if there are any differences to
hear). The 21-year-old European novice recording engineer who was
able to "hear" 98% of the time when a 23 kHz filter was inserted into
the AES program material was more like an untrained listener; no
trained listener could do it more than 40% of the time. Remember,
this was a filter-in/filter-out test, with a button pushed and
released to match when the change in sonic character occurred. Even
more important to remember is that this was a test of whether people
could hear a "difference" sonically when there was a definite
electrical difference. Maybe the engineer "liked" the filtered
program better than the unfiltered program, even though the
unfiltered material would measure better for phase/squarewave
response, noise, and the total amount of distortion in the "audible"
program (thus explaining Quad fans).
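Detection rates like the 98% and 40% above can be checked against pure chance with a one-sided binomial calculation. A sketch for illustration only (the 50-trial session length is my assumption; the post doesn't say how many trials were run):

```python
from math import comb

def p_at_least(hits, trials, p_chance=0.5):
    """Probability of scoring `hits` or more correct out of `trials`
    by guessing alone (one-sided binomial tail)."""
    return sum(comb(trials, k) * p_chance**k * (1 - p_chance)**(trials - k)
               for k in range(hits, trials + 1))

# Hypothetical 50-trial session: 98% correct (49/50) vs. 40% correct (20/50)
print(p_at_least(49, 50))   # tiny tail probability: real detection
print(p_at_least(20, 50))   # large tail probability: consistent with guessing
```

The point is that a 98% score over any reasonable number of trials is essentially impossible by luck, while 40% is below the 50% chance level and shows no detection at all.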

ANY listener who is careful, focused, and has some sonic reference
points remembered from hearing live voices and instruments can be a
good listening test subject. Being a good listening test participant
is completely different from knowing a lot about audio and music in
general. This is what seems to confuse most people, and it is
something people with a lot of general audio and music knowledge want
to deny, because they want to believe that knowing a lot about audio
or music makes them special golden-eared listeners. A good test
listener has to be able to hear differences in sound only if they are
there, and has to be able to judge which item sounds better than the
other if there really is a difference. It's that simple.

Human beings are a group of interactive sensors and measuring
instruments, tied into a large central processor that makes quality
judgements based not only on direct measurement but also on each
individual's history, temperament and education. Normally, if there
is a difference in sonic quality, it can be measured or quantified.
Sometimes measuring product A's nine non-linearities against product
B's nine different non-linearities makes it difficult to determine
which item's overall sonic quality is superior. Properly arranged
listening tests simply narrow the focus of these quality judgements
so they can be more easily identified.

If the purpose of an audio playback system is to fool the ear/brain
combo into believing the listener has been transported to some space
where a live performance or sound exists, then there can be tests of
how close or far away that goal is, testing one product in the chain
at a time. Let's be careful that we hear real quality differences.
Sometimes two distortions are present, one irritating and one
masking; if a product removes the masking distortion, the irritating
distortion may become more audible. This is a real possibility. But
usually differences are singular, and at other times listening tests
point out which of two different distortions is the more irritating.
Training beyond an hour or two to warm up test listeners doesn't
really make a difference in whether anyone can hear sonic quality
differences. Healthy ears, focused minds, and a personal liking for
the sounds being heard usually make the best test listeners. People
who always have to be right, those who have been too acoustically
imprinted by previous listening to certain program material, or those
who have a large amount of image and ego invested in the test's
outcome usually make the worst listening subjects.

Now for some of my opinions about consumer magazines that run product
"tests". The electronic tests may be helpful or fun information,
especially when features are described. But the desire to please
potential advertisers makes most magazines focus on saying something
nice about products sonically, and the fact that most magazines will
not point out real sonic problems is a disservice to readers. No
magazine I've ever read does real blind listening tests even to
choose its best and most recommended products. About the best I can
say is that if a reader, after comparing a few dozen of a reviewer's
product auditions with his own auditions of the same products, finds
himself always in agreement with that particular reviewer, then
reading that reviewer's articles could save time. Otherwise I find
most magazines are either driven by advertising, and thus couldn't
care less about consumers, or driven by the need to suck up to (or be
courted by) manufacturers for equipment, favors and special treatment
at shows, and thus couldn't care less about consumers.

A few of these product "reviewers" can be funny or edgy writers, and
reading their reviews can be interesting. This doesn't mean they can
hear anything of special importance that others can't; maybe their
writing is just readable. But beware of the priorities of most
reviewers. I visited many magazine editors and test "facilities" in
both the audiophile and mid-fi markets, and often asked about this or
that product when I saw it was in for review. I was often astounded
to find that products which sounded great and were very reliable
could be panned, while other products that were less reliable or
sounded worse would get glowing reviews. When I asked for specifics,
I was told that the knob on one device wasn't what the reviewer
thought was right, or the connector on another device was not up to
snuff. This attitude had a disastrous effect in some cases.

In the '60s a barrier strip was the normal way speakers were
connected to receivers and integrated amps. A spade lug, or better
yet a circular lug, could be mounted on a speaker wire and a very
solid connection made. Magazines later touted push connectors because
consumers rated them more convenient. What a terrible move:
push-contact connectors had a much smaller conductive surface, and
barrier-strip connectors could make much more secure connections. The
switch to push/insert connectors probably hurt the sound quality of
systems as much as almost any other change in audio up to the
switching-power-supply fiasco. On the audiophile end, few reviewers
for high-end publications have much knowledge of room acoustics and
acoustic treatments. Some of the listening rooms I saw were
abominable. I understand why there seemed to be no relationship
between the reviews I read in audiophile magazines and my own
auditions of the same products. This kind of inconsistency makes it
difficult or impossible for companies making really good products to
get consistently good reviews and develop audience confidence (which
is why McIntosh originally decided, in most cases, not to offer their
products for review while they were ascending the audio heap).

So while I am certain from experience that double- (or triple- or
quadruple-, etc.) blind listening tests can best determine (1)
whether there are differences between products and (2) whether those
differences make one product better than another, I don't think these
tests require any particular level of trained listener, as long as
the listeners are interested, open-minded and careful. Comparing
audio to cars or politics seems a bit far afield, but a good
comparison might be to 3D video imaging. It is still difficult to
measure 3D animations to know which might look most realistic, and
current animations fall short of reality in multiple ways.

But we still test computers, monitors and storage devices to be used
in 3D animation production or viewing. Images are getting better,
though not all the program material is of the highest quality, either
visually or thematically. Equipment is also getting better.
Eventually it may be possible to make 3D animation that can fool the
eye/brain combo the way we try to fool the ear/brain combo with
audio. Our eyes' ability to see immediately whether something on a 2D
screen is animation or real is much like determining "reality" with
audio program and playback material. Once in a great while we see a
piece of animation that looks authentically "real", and I've had the
chance to hear some material that created "ear/brain fooling" sonic
quality. It doesn't happen often, and it seems rarer in audio now
than it was in the '70s and '80s, while animation is still moving
toward "reality". Perhaps this is because, in 1973 dollars, less
money is being spent on audio now than then. Perhaps audio will
become more important again when 3D animation and video get so good
that it is always easy to fool the eye/brain combo. We can always
hope. Watchking

We don't get enough sand in our glass