A Brief History of CD DBTs

#1  nabob33@hotmail.com  (rec.audio.high-end)

For anyone who has slogged through the current thread on CD sound and wonders where the evidence really lies, here's a brief summary. There have been numerous published DBTs of CD players and DACs, and the bottom line of the results agrees with the accepted theory of psychoacoustics experts: there is no audible difference among conventionally designed products. The very rare differences that have been found can be explained by the unusual designs in question.

Published DBTs begin with the article cited in the other thread:

Masters, Ian G., and Clark, D. L., "Do All CD Players Sound the Same?", Stereo Review, pp. 50-57 (January 1986)

If memory serves, they did find one CD player that was audibly distinguishable from the others. I believe it was an early 14-bit model from Philips.

Two later tests also appeared in SR:

Pohlmann, Ken C., "6 Top CD Players: Can You Hear the Difference?", Stereo Review, pp. 76-84 (December 1988)

Pohlmann, Ken C., "The New CD Players, Can You Hear the Difference?", Stereo Review, pp. 60-67 (October 1990)

Both tests found no differences among players.

The Sensible Sound did a two-part report on another test:

CD Player Comparison, The Sensible Sound, #74, Apr/May 1999.

CD Player Comparison, The Sensible Sound, #75, Jun/Jul 1999.

My understanding is that they did not identify the actual players being tested, except for the cheapest one, which was a sub-$100 carousel model. Again, no differences were found.

A group in Spain has posted the results of numerous tests it has done. A full list of the tests is here, unfortunately in Spanish:

http://www.matrixhifi.com/marco.htm

(click on Pruebas Ciegas to see the list)

Most of their tests found no audible differences. (See, for example, their comparison of a Benchmark DAC to a Pioneer DVD player.) Devices that did sound different:

1) a non-oversampling DAC
2) a device with a tubed output stage
3) a portable Sony Discman, connected via its headphone output

Two further points:

1) No quantity of DBTs can prove a negative. But believers in CD/DAC sound can cite no comparable empirical evidence whatsoever for their position. (A short sketch of what a null result can and cannot establish follows the Moore quotation below.)

2) Psychoacoustics researchers have reached the same conclusion via other means. Here's a standard textbook in the field:

Moore, B. C. J. An Introduction to the Psychology of Hearing, Fourth Edition. San Diego: Academic Press, 1997.

And here's what Dr. Moore had to say about the issue:

"CD and DAT players generally have a specification which is far better than that of other components in a hi-fi system, especially cassette decks and loudspeakers. Essentially, the output signal which they provide is indistinguishable from that which would be obtained from the master tape produced by the recording studio (studio recordings are now usually digital recordings). Thus, provided a CD or DAT player is working according to specification, it will produce no noticeable degradation in sound quality. It follows from this that most CD players and DAT players sound the same."
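
To put the first point in concrete terms, here is a minimal sketch (Python, standard library only, using hypothetical trial counts rather than data from any of the tests above) of what a single null ABX run does and does not establish: it can place an upper bound on how often a difference is being heard, but it can never push that bound to zero.

    # A null ABX result bounds an effect; it cannot prove the effect is zero.
    # Hypothetical run: 16 trials, 9 correct -- close to the 8/16 chance rate.
    import math

    def binom_cdf(k, n, p):
        """P(X <= k) for X ~ Binomial(n, p)."""
        return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

    n, correct = 16, 9

    # One-sided p-value against pure guessing (p = 0.5): 9/16 is not surprising.
    p_value = 1 - binom_cdf(correct - 1, n, 0.5)
    print(f"p-value vs. guessing: {p_value:.2f}")   # ~0.40, i.e. a null result

    # But a null result does not show p = 0.5.  Find the lowest per-trial hit
    # rate this run can rule out at the 95% level; anything below it remains
    # consistent with the data.
    p = 0.5
    while binom_cdf(correct, n, p) >= 0.05:
        p += 0.001
    print(f"hit rates above about {p:.2f} are excluded; lower ones are not")

More trials, or more tests pooled, shrink that bound; they never drive it to zero, which is exactly the sense in which a negative cannot be proved.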

That is all.

bob
#2  Audio_Empire  (rec.audio.high-end)

On Sunday, December 9, 2012 7:59:31 PM UTC-8, nabob33@hotmail.com wrote:
For anyone who has slogged through the current thread on CD sound and wonders where the evidence really lies, here's a brief summary. There have been numerous published DBTs of CD players and DACs, and the bottom line of the results agrees with the accepted theory of psychoacoustics experts: there is no audible difference among conventionally designed products.

snip


The SR reviews are suspect due to SR's editorial policy, which was that everything printed in SR must serve the advertisers and potential advertisers. That meant no critical evaluations of anything. Ever wonder why SR never published a negative review from Julian Hirsch? Because it was SR policy not to publish negative reviews. That didn't mean that Julian never came across a piece of equipment that didn't meet its published specs. It simply meant that SR didn't run the review, that's all. You see, it was their editorial policy to cater to the industry, not the consumer. It is because of this policy that the late J. Gordon Holt founded Stereophile. His stint at High-Fidelity magazine (and I believe he also worked at SR for a time) convinced him that these magazines weren't serving the interest of the consumer. That's also why no one should be surprised that SR's tests of the audibility of components, including CD players, showed no differences in audible performance. It's also where the joke "quote" attributed to Julian Hirsch comes from: "Of all the amplifiers I have ever tested, this was one of them." That "quote" applies equally to tuners, CD decks, preamps, receivers, you name it. And no, Julian never really said that, but if you read the sum total of his work, going back to Hirsch-Houck Labs before Julian went off on his own, you will see that he never expressed an opinion. He just measured the equipment against its published specs, and if it met them, it was good to go. If not, that fact was never mentioned (as far as I know, and I subscribed to SR for decades!) and the review was not published. To SR, the notion that everything sounded the same was sacrosanct. I don't wonder that all of those "DBTs" showed no difference in CD players.

I won't comment on the Sensible Sound tests because I've only seen a couple of issues of that magazine and don't know what their editorial policy was.

As for the early Philips (Magnavox) players sounding "different" in one of those tests, I agree. It did sound different from the early Japanese players. It was listenable; the early Sonys, Kyoceras, and Technics players were not, and that's MY opinion.
#3  nabob33@hotmail.com  (rec.audio.high-end)

On Monday, December 10, 2012 6:17:06 PM UTC-5, Audio_Empire wrote:

The SR reviews are suspect due to SR's editorial policy which was that everything printed in SR must serve the advertisers/potential advertisers.


Science doesn't rely on editorial policies. Science relies on proper test methodology. Anyone interested can seek out the articles (try either major urban public libraries or technical academic libraries) and see for themselves how well these tests were carried out.

Ever wonder why SR never published a negative review from Julian Hirsch? Because it was SR policy to not publish negative reviews.

snip

Subsequent research has pretty much vindicated Hirsch, but that's the subject for another thread.

BTW, the idea that a guy who thought all properly functioning amps sounded alike was serving his advertisers is ridiculous. For service to advertisers, Stereophile (along with TAS) takes the cake.

snip

As for the early Philips (Magnavox) players sounding "different" in one of those tests, I agree. It did sound different from the early Japanese players. It was listenable; the early Sonys, Kyoceras, and Technics players were not, and that's MY opinion.


The biggest trouble with high-end audio, ever since the term was coined, is the confusion of opinion with fact.

bob
#4  Arny Krueger  (rec.audio.high-end)

"Audio_Empire" wrote in message
...

The SR reviews are suspect due to SR's editorial policy which was
that everything printed in SR must serve the advertisers/potential
advertisers. That meant no critical evaluations of anything. Ever
wonder why SR never published a negative review from Julian
Hirsch? Because it was SR policy to not publish negative reviews.


Looks like Stereo Review is being stigmatized for doing what other magazines
do without being noticed.

For example, virtually every product ever reviewed by Stereophile this millennium shows up on their Recommended Components List (RCL).

I personally agree with editors who seem to take the viewpoint that they
don't have any space for reviews of equipment that is substandard.

#5  Scott  (rec.audio.high-end)

On Dec 10, 7:35 pm, nabob33@hotmail.com wrote:
On Monday, December 10, 2012 6:17:06 PM UTC-5, Audio_Empire wrote:
The SR reviews are suspect due to SR's editorial policy which was
that everything printed in SR must serve the advertisers/potential
advertisers.


Science doesn't rely on editorial policies.


That is true but Stereo Review did.

Science relies on proper test methodology.



That is true but Stereo Review did not.

Anyone interested can seek out the articles (try either major urban public libraries or technical academic libraries) and see for themselves how well these tests were carried out.



Very poorly. Clearly Stereo Review was a publication that had a very clear preconception about how certain components sound. Clearly Stereo Review was not a scientific journal and had no proper peer review process.

snip


Subsequent research has pretty much vindicated Hirsch, but that's the subject for another thread.


Since you are waving the science flag, please show us the peer-reviewed published research that has "pretty much vindicated Hirsch."



BTW, the idea that a guy who thought all properly functioning amps sounded alike was serving his advertisers is ridiculous. For service to advertisers, Stereophile (along with TAS) takes the cake.


Sorry, but that is nonsense. Unlike Stereo Review, TAS and Stereophile were actually willing to print negative reviews of products. Early on, neither publication even accepted advertising. So how were they in "service to advertisers" then?


snip

As for the early Philips (Magnavox) players sounding "different" in
one of those tests, I agree. It did sound different from the early
Japanese players. It was listenable, the early Sonys, Kyoceras,
and Technics players were not and that's MY opinion.


The biggest trouble with high-end audio ever since the term was coined is the mistaken confusion of opinion with fact.

Then show us the science that establishes the facts. Until then, I will say it right back at you: looks to me like you are mistaking your opinions for facts.



#6  Arny Krueger  (rec.audio.high-end)

"Scott" wrote in message
...
On Dec 10, 7:35 pm, nabob33@hotmail.com wrote:

Very poorly. Clearly Stereo review was a publication that had a very
clear preconception about how certain components sound.


That is not clear to me at all.

I am of the opinion that many people are biased against Stereo Review and
make posts like the one above regardless of whatever facts can be brought to
the discussion.



Clearly Stereo Review was not a scientific journal and had no proper peer
review process.


Neither are any of the journals you praise, such as Stereophile or TAS. The above statement is obviously an attempt to single out one magazine of many for a situation that affected them all. In short, it supports my supposition that its author is highly biased against SR.

#7  nabob33@hotmail.com  (rec.audio.high-end)

On Wednesday, December 12, 2012 9:20:22 AM UTC-5, Scott wrote:

Very poorly. Clearly Stereo review was a publication that had a very
clear preconception about how certain components sound. Clearly Stereo
Review was not a scientific journal and had no proper peer review
process.


True, but lack of peer review only means that their methodology has not been independently validated; it does not mean that their methodology is flawed. The open-minded audiophile (obviously a minority taste, alas) should read those articles--and all the reports I cited--and decide for himself whether the methodology seems sound.

As for preconceptions, every scientist has some preconception of how his experiment will turn out. If SR's preconception was that all CD players sound alike, they must have been quite surprised to find an exception in their 1986 test!

Since you are waving the science flag please show us the peer reviewed
published research that has "pretty much vindicated Hirsch."


Gladly, but, as I said, it is the subject of another thread. Give me a day or two.

Then show us the science that establishes the facts. Until then I will
say back at you. Looks to me like you are mistaking your opinions as
facts.


I did. I presented multiple tests of dozens of devices over a period of decades by three different organizations. It is a fact that none of these tests could show audible differences between conventionally designed CD players and DACs. It is further a fact that no one has ever presented even a single empirically plausible counterexample to these findings. And it is further a fact that a peer-reviewed textbook (and there is nothing more carefully peer-reviewed than a science textbook) agrees with these findings.

bob
#8  Scott  (rec.audio.high-end)

On Dec 12, 2:50 pm, nabob33@hotmail.com wrote:
On Wednesday, December 12, 2012 9:20:22 AM UTC-5, Scott wrote:
Very poorly. Clearly Stereo review was a publication that had a very
clear preconception about how certain components sound. Clearly Stereo
Review was not a scientific journal and had no proper peer review
process.


True, but lack of peer review only means that their methodology has not been independently validated; it does not mean that their methodology is flawed.


But we really don't know. Actually we do know. It was quite flawed. It would have never made it through the peer review process. No big deal, but it ain't science.


The open-minded audiophile (obviously a minority taste, alas) should read those articles--and all the reports I cited--and decide for himself whether the methodology seems sound.


I did back in the day and found them very flawed.



As for preconceptions, every scientist has some preconception of how his experiment will turn out. If SR's preconception was that all CD players sound alike, they must have been quite surprised to find an exception in their 1986 test!


I'm sure if they did find an exception they were surprised.



Since you are waving the science flag please show us the peer reviewed
published research that has "pretty much vindicated Hirsch."


Gladly, but, as I said, it is the subject of another thread. Give me a day or two.

I don't see why it won't fit just fine in this thread. But we'll see
what you come up with in a day or two.



Then show us the science that establishes the facts. Until then I will
say back at you. Looks to me like you are mistaking your opinions as
facts.


I did.


No, you showed us absolutely no legitimate science. You showed us nothing more than non-scientific articles from non-scientific consumer magazines.


I presented multiple tests of dozens of devices over a period of decades by three different organizations. It is a fact that none of these tests could show audible differences between conventionally designed CD players and DACs. It is further a fact that no one has ever presented even a single empirically plausible counterexample to these findings. And it is further a fact that a peer-reviewed textbook (and there is nothing more carefully peer-reviewed than a science textbook) agrees with these findings.

You cherry picked from anecdotal evidence that has never met the basic
criteria of real scientific research. Pretty far from real science. If
you are going to wave the flag you better bring the goods. You ain't
gonna find the goods in consumer magazines.

#9  nabob33@hotmail.com  (rec.audio.high-end)

On Thursday, December 13, 2012 6:48:37 AM UTC-5, Scott wrote:

But we really don't know. Actually we do know. It was quite flawed. It would have never made it through the peer review process. No big deal, but it ain't science.


Flawed in what specific ways? And, no, "the researchers had a pre-test hypothesis about the outcome" is not a flaw. If it were, there would be no science.

Just as a point of comparison, what did the SR folks do wrong in each of their *three* tests that these guys did right:

http://www.aes.org/e-lib/browse.cfm?elib=14195

Needless to say, the latter *did* pass peer review.

snip

You cherry picked


I did? What evidence did I ignore?

from anecdotal evidence that has never met the basic criteria of real scientific research. Pretty far from real science. If you are going to wave the flag you better bring the goods. You ain't gonna find the goods in consumer magazines.


One would think a widely used college science textbook would count as "real science." And one would think that if consumer magazines are getting the same results reported by "real scientists," the magazines must be doing something right.

Once again, where is the counterevidence? Where are the DBTs that show these devices to be audibly distinguishable? I won't even hold you to meeting the strictest requirements of peer review. Just show me something empirically plausible.

bob
#10  Audio_Empire  (rec.audio.high-end)

On Monday, December 10, 2012 7:35:04 PM UTC-8, nabob33@hotmail.com wrote:
On Monday, December 10, 2012 6:17:06 PM UTC-5, Audio_Empire wrote:

The SR reviews are suspect due to SR's editorial policy which was that everything printed in SR must serve the advertisers/potential advertisers.

Science doesn't rely on editorial policies.


No, but publications do.

Science relies on proper test methodology. Anyone interested can seek out the articles (try either major urban public libraries or technical academic libraries) and see for themselves how well these tests were carried out.

The idea of a suite of tests that seeks only to confirm a set of published specs for a unit under test is not, in my estimation, good science. The further fact that the editorial policy at both SR and High-Fidelity was that if a component didn't meet specs the review was not published is also not doing good service to one's readership.

snip
Subsequent research has pretty much vindicated Hirsch, but that's the subject for another thread.


Really? Science has vindicated a non-critical approach to evaluation? Since when?



BTW, the idea that a guy who thought all properly functioning amps sounded alike was serving his advertisers is ridiculous. For service to advertisers, Stereophile (along with TAS) takes the cake



Well there you are wrong. I have written for both TAS and Stereophile
over the years, and no one ever told me how to slant a review. If I
found something negative, I said so in no uncertain terms and they
both published those reviews with all my comments intact. Both
Stereophile and TAS started out accepting NO ads, then they
"graduated" to taking ads only from dealers, and finally from
manufacturers. Both magazines' policy toward advertisers is pretty
much the same: We'll take your ads with the understanding that the
fact that you are an advertiser will have no bearing on the outcome
of reviews of your products. Both magazines have a list of not a few
manufacturers who refuse to advertise with them and won't send
them equipment to review any more because they previously
received a bad review at the hands of one or the other.

snip

As for the early Philips (Magnavox) players sounding "different" in one of those tests, I agree. It did sound different from the early Japanese players. It was listenable; the early Sonys, Kyoceras, and Technics players were not, and that's MY opinion.

The biggest trouble with high-end audio, ever since the term was coined, is the confusion of opinion with fact.



I would say that's more than somewhat true. But often, opinions merely mirror facts. Cable elevators, green marking pens, blocks of myrtle wood placed on the tops of components, "cryogenically treated" clocks, and cable sound are all unsupported mythology, but early CD players that sounded nasty to a rather large group of people definitely mirror facts. Let's face it, not everyone is a critical listener. That's a facility that one has to nurture; it's not God-given. And as was discussed ad nauseam in another thread, there are people who are so biased toward certain precepts that they wouldn't hear things that challenged their biases even if that characteristic stuck out like a sore thumb!


#11  Audio_Empire  (rec.audio.high-end)

On Tuesday, December 11, 2012 2:22:08 PM UTC-8, Arny Krueger wrote:

snip

Looks like Stereo Review is being stigmatized for doing what other magazines do without being noticed.


What magazines would they be, Mr. Krueger?



For example, virtually every product ever reviewed by Stereophile this millennium shows up on their Recommended Components List (RCL).


That's simply a very misleading statement. (1) Not everything published in Stereophile makes it to the Recommended Components list, and (2) those that do are categorized according to their perceived flaws and listed under an alphabetical hierarchy. To wit: "A" is state of the art, and "D" is very flawed but still acceptable. I've seen lots of critical reviews in Stereophile.



I personally agree with editors who seem to take the viewpoint that they don't have any space for reviews of equipment that is substandard.


And that serves the readership, how? Seems to me that serves the advertisers. "Yeah, your new amplifier is lousy, but we won't tell anybody about it. OK? And while we're on the phone, you want to buy a new ad?"

Gimme a break!
#12  Audio_Empire  (rec.audio.high-end)

On Wednesday, December 12, 2012 2:49:16 PM UTC-8, Arny Krueger wrote:

snip

Neither are any of the journals you praise, such as Stereophile or TAS. The above statement is obviously an attempt to single out one magazine of many for a situation that affected them all. In short, it supports my supposition that its author is highly biased against SR.


I don't remember anybody praising either TAS or Stereophile. All magazines fall under the heading of "Entertainment" and should be taken with a grain of salt. The only thing that Stereophile and TAS do that's different from Stereo Review and High-Fidelity is that if a review of a piece of equipment comes out negative, they publish it. OTOH, both SR and High-Fidelity were better reads than either Stereophile or TAS. I learned a lot about music and musicians from those "slicks". HF, especially, was once a very classy publication. They had writers like Gene Lees writing articles about jazz and Latin music, and writers like Nicholas Slonimsky writing about classical music and classical artists.

#13  Arny Krueger  (rec.audio.high-end)

"Audio_Empire" wrote in message
...

I don't remember anybody praising either TAS or Stereophile.


In another post you mention writing for one or both of these publications.

If you cannot bring yourself to praise them, how can you bring yourself to
write for them?

Or, was the money they paid sufficient to induce you to write for a
publication that was so bad that you did not think they were praiseworthy?


#14  Sebastian Kaliszewski  (rec.audio.high-end)

On 12/13/2012 12:48 PM, Scott wrote:
On Dec 12, 2:50 pm, wrote:
On Wednesday, December 12, 2012 9:20:22 AM UTC-5, Scott wrote:
Very poorly. Clearly Stereo review was a publication that had a
very clear preconception about how certain components sound.
Clearly Stereo Review was not a scientific journal and had no
proper peer review process.


True, but lack of peer review only means that their methodology
has not been independently validated; it does not mean that their
methodology is flawed.



But we really don't know. Actually we do know. It was quite flawed
It would have never made it through the peer review process. No big
deal but it ain't science.


The open-minded audiophile (obviously a minority taste, alas) should read those articles--and all the reports I cited--and decide for himself whether the methodology seems sound.


I did back in the day and found them very flawed.


Would you care to present those alleged flaws?




As for preconceptions, every scientist has some preconception of how his experiment will turn out. If SR's preconception was that all CD players sound alike, they must have been quite surprised to find an exception in their 1986 test!


I'm sure if they did find an exception they were surprised.


They did, and they published it, contrary to what you and Mr. Audio Empire have stated more than once, namely that they never would.


Since you are waving the science flag please show us the peer
reviewed published research that has "pretty much vindicated
Hirsch."


Gladly, but, as I said, it is the subject of another thread. Give
me a day or two.


I don't see why it won't fit just fine in this thread. But we'll see
what you come up with in a day or two.


So please include in this very thread scientific (or even halfway amateurish-scientific) evidence to support your stance.




Then show us the science that establishes the facts. Until then I
will say back at you. Looks to me like you are mistaking your
opinions as facts.


I did.


No, you showed us absolutely no legitimate science. You showed us
nothing more than non scientific articles from non scientific
consumer magazines.


You apparently missed this one:
Moore, BCJ. An Introduction to the Psychology of Hearing, Fourth
Edition. San Diego: Academic Press, 1997.



I presented multiple tests of dozens of devices over a period of
decades by three different organizations. It is a fact that none of
these tests could show audible differences between
conventionally designed CD players and DACs. It is further a fact
that no one has ever presented even a single empirically plausible
counterexample to these findings. And it is further a fact that a
peer-reviewed textbook (and there is nothing more carefully
peer-reviewed than a science textbook) agrees with these findings.

You cherry picked from anecdotal evidence that has never met the
basic criteria of real scientific research. Pretty far from real
science. If you are going to wave the flag you better bring the
goods. You ain't gonna find the goods in consumer magazines.


See above.


Then...

Lack of any scientific (or even halfway amateur-scientific, like the one you're criticizing) evidence to support your position noted.

rgds
\SK

#15  Scott  (rec.audio.high-end)

On Dec 13, 2:44 pm, nabob33@hotmail.com wrote:
On Thursday, December 13, 2012 6:48:37 AM UTC-5, Scott wrote:
But we really don't know. Actually we do know. It was quite flawed It
would have never made it through the peer review process. No big deal
but it ain't science.


Flawed in what specific ways? And, no, "the researchers had a pre-test hypothesis about the outcome" is not a flaw. If it were, there would be no science.

It has been quite a while since that article came out. I would be
happy to review it again and point out the problems I found with it at
the time if you would care to email me a copy. I don't make a point of
memorizing these things for decades to come.


Just as a point of comparison, what did the SR folks do wrong in each of their *three* tests that these guys did right:

http://www.aes.org/e-lib/browse.cfm?elib=14195


I would be happy to read that paper and compare it to the Stereo
Review article if you'd like to email me a copy of that one as well.


Needless to say, the latter *did* pass peer review.

Which certainly does give it far more credibility on a scientific
level.

You cherry picked


I did? What evidence did I ignore?


Do you really think you got them all?


from anecdotal evidence that has never met the basic criteria of real scientific research. Pretty far from real science. If you are going to wave the flag you better bring the goods. You ain't gonna find the goods in consumer magazines.


One would think a widely used college science textbook would count as "real science."

It would likely count as a book that talks about real science. What on
earth does that have to do with the articles you cited from consumer
magazines?

And one would think that if consumer magazines are getting the same results reported by "real scientists," the magazines must be doing something right.


1. Not sure why one would think that. One could just as easily get the same results with a coin flip.
2. I'm not so sure they were testing the same things nor getting "the same results."
3. One peer-reviewed paper on one set of listening tests is certainly legitimate scientific evidence. It is not something one uses to declare a final dogmatic objective fact.


Once again, where is the counterevidence? Where are the DBTs that show these devices to be audibly distinguishable? I won't even hold you to meeting the strictest requirements of peer review. Just show me something empirically plausible.

Why would you not hold me to the strictest requirements? Are we going to stick with science or swap anecdotes? I would be happy to read the one peer-reviewed paper you have now brought up. But the articles from Stereo Review are junk in the world of real science. It doesn't matter if they produced results similar to one legitimate set of tests. And if you want to send me copies of the Stereo Review articles, I would be happy to review them and point out the problems I saw with them back in the day.




#16  Scott  (rec.audio.high-end)

On Dec 14, 3:55 am, Sebastian Kaliszewski wrote:
On 12/13/2012 12:48 PM, Scott wrote:
On Dec 12, 2:50 pm, nabob33@hotmail.com wrote:
On Wednesday, December 12, 2012 9:20:22 AM UTC-5, Scott wrote:
Very poorly. Clearly Stereo review was a publication that had a
very clear preconception about how certain components sound.
Clearly Stereo Review was not a scientific journal and had no
proper peer review process.


True, but lack of peer review only means that their methodology has not been independently validated; it does not mean that their methodology is flawed.

But we really don't know. Actually we do know. It was quite flawed
It would have never made it through the peer review process. No big
deal but it ain't science.


The open-minded audiophile (obviously a minority taste, alas) should read those articles--and all the reports I cited--and decide for himself whether the methodology seems sound.



I did back in the day and found them very flawed.


Would you care to present those alleged flaws?


I would be happy to if someone wants to send me a copy of the old article to jog my memory. If it ain't Shakespeare or something like it, I rarely memorize such things.




As for preconceptions, every scientist has some preconception of how his experiment will turn out. If SR's preconception was that all CD players sound alike, they must have been quite surprised to find an exception in their 1986 test!


I'm sure if they did find an exception they were surprised.


They did, and they published it, contrary to what you and Mr. Audio Empire have stated more than once, namely that they never would.



Since you are waving the science flag please show us the peer reviewed published research that has "pretty much vindicated Hirsch."

Gladly, but, as I said, it is the subject of another thread. Give me a day or two.


I don't see why it won't fit just fine in this thread. But we'll see
what you come up with in a day or two.


So please include in this very thread scientific (or even halfway amateurish-scientific) evidence to support your stance.


Do you actually understand my stance? If so I think you would have no
trouble finding a mountain of support from the scientific community in
support of "my stance."




Then show us the science that establishes the facts. Until then I
will say back at you. Looks to me like you are mistaking your
opinions as facts.


I did.


No, you showed us absolutely no legitimate science. You showed us
nothing more than non scientific articles from non scientific
consumer magazines.


You apparently missed this one:
Moore, BCJ. An Introduction to the Psychology of Hearing, Fourth
Edition. San Diego: Academic Press, 1997.


That wasn't on Bob's list in his OP.




I presented multiple tests of dozens of devices over a period of decades by three different organizations. It is a fact that none of these tests could show audible differences between conventionally designed CD players and DACs. It is further a fact that no one has ever presented even a single empirically plausible counterexample to these findings. And it is further a fact that a peer-reviewed textbook (and there is nothing more carefully peer-reviewed than a science textbook) agrees with these findings.



You cherry picked from anecdotal evidence that has never met the
basic criteria of real scientific research. Pretty far from real
science. If you are going to wave the flag you better bring the
goods. You ain't gonna find the goods in consumer magazines.


See above.


See my answer to the above, above.



Then...

Lack of any scientific (or even halfway amateur-scientific, like the one you're criticizing) evidence to support your position noted.


Really? Methinks you don't really understand "my position" if you think it is not supported by science. Just for kicks, let's see if you can review the thread and accurately restate my position. We can go from there.

#17  Doug McDonald  (rec.audio.high-end)

One of the problems I see in many ABX double blind tests,
especially but not exclusively speakers, is level matching.

In some cases careful level matching may be a proper thing to do.
But in others I contend it is not.

The problem with trying to match levels is that level differences
are usually easy to spot. And this applies to frequency response
differences.

I contend that a better way, though much harder, to reliably
detect small differences of all kinds is to set the overall
level of A and B to be the same, and then for each trial of
A, B, or X to vary the level by a random amount of 0 +- 1 or
0 +- 2 dB. Of course this requires lots of trials to get
good statistics.

But once done, if the test is positive and the participants
decide that A has let's say "better imaging" then it is much clearer
that what they are not doing is deciding, for example, that
louder has better imaging, because they would have to hear the imaging
effect "through" differences in level.

Etc.

Doug McDonald
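
Doug's proposal is straightforward to prototype. Here is a minimal sketch (Python/NumPy, with synthetic signals and a random-guessing stand-in for the listener; this is not anyone's published protocol) of an ABX trial loop that level-matches A and B overall and then applies his random offset of up to +-2 dB to each presentation, scored with an exact binomial tally:

    # ABX with per-presentation level jitter, along the lines Doug describes.
    # A, B and X are first matched for overall RMS level; each presentation in
    # each trial then gets its own random offset within +/- 2 dB, so loudness
    # cannot serve as the cue.  The exact binomial tally is why many trials
    # are needed for good statistics.
    import math
    import numpy as np

    rng = np.random.default_rng(0)

    def db_to_gain(db):
        return 10.0 ** (db / 20.0)

    def rms_match(ref, sig):
        """Scale sig so its RMS level equals that of ref (the coarse match)."""
        return sig * (np.sqrt(np.mean(ref**2)) / np.sqrt(np.mean(sig**2)))

    def run_trial(a, b, jitter_db=2.0):
        """Return the three presentations for one trial plus the hidden key."""
        x_is_a = rng.random() < 0.5
        x = a if x_is_a else b
        offsets = rng.uniform(-jitter_db, jitter_db, size=3)  # one per presentation
        presentations = [sig * db_to_gain(db) for sig, db in zip((a, b, x), offsets)]
        return presentations, ("A" if x_is_a else "B")

    def binom_p(correct, n):
        """One-sided exact probability of scoring >= correct out of n by chance."""
        return sum(math.comb(n, k) for k in range(correct, n + 1)) / 2.0**n

    # Hypothetical 'device outputs': the same tone, one with a small gain error.
    t = np.linspace(0.0, 1.0, 48000, endpoint=False)
    dev_a = np.sin(2 * np.pi * 1000 * t)
    dev_b = rms_match(dev_a, 1.02 * np.sin(2 * np.pi * 1000 * t))

    n_trials, hits = 32, 0
    for _ in range(n_trials):
        presentations, answer = run_trial(dev_a, dev_b)
        guess = rng.choice(["A", "B"])   # stand-in for the listener's vote
        hits += (guess == answer)
    print(f"{hits}/{n_trials} correct, p = {binom_p(hits, n_trials):.3f}")

The random offsets make loudness useless as a cue, which is the point; the cost, as Doug says, is that many more trials are needed before the tally means anything.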

#18  Jenn  (rec.audio.high-end)

In article ,
Audio_Empire wrote:

Nicholas Slonimsky writing about classical music and classical
artists.


Slonimsky was absolutely a one-of-a-kind wonder of the world. He was also my composition teacher in the 70s.

#19  Scott  (rec.audio.high-end)

On Dec 14, 9:07 am, Doug McDonald wrote:
One of the problems I see in many ABX double blind tests,
especially but not exclusively speakers, is level matching.

In some cases careful level matching may be a proper thing to do.
But in others I contend it is not.

The problem with trying to match levels is that level differences
are usually easy to spot. And this applies to frequency response
differences.

I contend that a better way, though much harder, to reliably
detect small differences of all kinds is to set the overall
level of A and B to be the same, and then for each trial of
A, B, or X to vary the level by a random amount of 0 +- 1 or
0 +- 2 dB. Of course this requires lots of trials to get
good statistics.

But once done, if the test is positive and the participants
decide that A has let's say "better imaging" then it is much clearer
that what they are not doing is deciding, for example, that
louder has better imaging, because they would have to hear the imaging
effect "through" differences in level.

Etc.

Doug McDonald


I don't think level matching is ever the wrong thing to do in an *ABX* DBT. It is important to understand that the only purpose an *ABX* DBT serves is to test for audible differences, not for preferences. With blind *preference* tests, level matching is a complicated issue, as you point out.

#20  Audio_Empire  (rec.audio.high-end)

On Thursday, December 13, 2012 7:14:05 PM UTC-8, Arny Krueger wrote:

I don't remember anybody praising either TAS or Stereophile.

In another post you mention writing for one or both of these publications.


Yeah? So?



If you cannot bring yourself to praise them, how can you bring yourself to write for them?


Who said that I can't bring myself to praise them? All I said was that nobody
in these recent threads had praised them. OTOH, I have defended them
against contributors who make statements about their editorial policies
that aren't correct, but that's not praise, it's merely setting the record
straight. Someone else made the same points I did, and they weren't praising
them either.



Or, was the money they paid sufficient to induce you to write for a publication that was so bad that you did not think they were praiseworthy?

My reasons for leaving them, as well as my monetary remuneration, are no one's business but mine, but I will say that it was due to none of the above.



#21  Audio_Empire  (rec.audio.high-end)

On Friday, December 14, 2012 11:11:42 AM UTC-8, Scott wrote:
snip

I don't think level matching is ever the wrong thing to do in an *ABX* DBT. It is important to understand that the only purpose an *ABX* DBT serves is to test for audible differences, not for preferences. With blind *preference* tests, level matching is a complicated issue, as you point out.


I think it's de rigueur to match levels in any type of formal listening test, and it needs to be done to within half a dB or better. The human ear will always pick out the louder component as the better of the two, and of course, with a level difference, in an ABX test the listeners will always say there's a difference between the devices under test, even if they are both two samples of the same make and model!
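
For what it's worth, the half-dB criterion is easy to check in software before a test. A minimal sketch (Python/NumPy, with made-up output levels standing in for real meter readings):

    # Check two devices against the half-dB matching criterion and derive the
    # trim needed to bring them into line.  Levels are RMS, expressed in dB.
    import numpy as np

    def level_db(signal):
        """RMS level of a signal relative to 1.0 full scale, in dB."""
        return 20.0 * np.log10(np.sqrt(np.mean(np.square(signal))))

    t = np.linspace(0.0, 1.0, 48000, endpoint=False)
    ref = np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone

    dev_a = 0.500 * ref                  # hypothetical output of device A
    dev_b = 0.560 * ref                  # device B runs a little hot

    diff_db = level_db(dev_b) - level_db(dev_a)
    print(f"level difference: {diff_db:+.2f} dB")        # about +0.98 dB

    if abs(diff_db) > 0.5:               # outside the half-dB window
        trim = 10.0 ** (-diff_db / 20.0) # gain to apply to device B
        dev_b = trim * dev_b
        print(f"apply {20 * np.log10(trim):+.2f} dB trim to B")
        print(f"residual: {level_db(dev_b) - level_db(dev_a):+.2f} dB")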
#22  Scott  (rec.audio.high-end)

On Dec 14, 12:29 pm, Audio_Empire wrote:

snip

I think it's de rigueur to match levels in any type of formal listening test, and it needs to be done to within half a dB or better. The human ear will always pick out the louder component as the better of the two, and of course, with a level difference, in an ABX test the listeners will always say there's a difference between the devices under test, even if they are both two samples of the same make and model!


For the sake of ABX, for sure. OTOH, for preference comparisons we simply may have real problems doing a true level match. Let's say we are comparing two different masterings of the same title. One is compressed, and both have substantially different EQ. How do you level match? Peak levels? Average levels? At what frequency?
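
As a quick illustration of why those criteria disagree, here is a minimal sketch (Python/NumPy, with synthetic signals standing in for the two masterings): match the compressed version to the original by peak level and the average levels still differ by several dB, so "level matched" depends entirely on which measure you picked.

    # Peak matching vs. average (RMS) matching for two 'masterings' of the
    # same material: a wide-crest-factor original and a heavily compressed
    # remaster (tanh limiting as a crude stand-in for a loudness-war master).
    import numpy as np

    rng = np.random.default_rng(1)

    def peak_db(x):
        return 20.0 * np.log10(np.max(np.abs(x)))

    def rms_db(x):
        return 20.0 * np.log10(np.sqrt(np.mean(x**2)))

    original = rng.normal(0.0, 0.1, 48000)   # stand-in for the original mix
    remaster = np.tanh(4.0 * original)       # compressed/limited version

    # Match the two by peak level...
    gain = 10.0 ** ((peak_db(original) - peak_db(remaster)) / 20.0)
    remaster_peak_matched = gain * remaster

    # ...and the average levels still disagree by several dB.
    print(f"peak-matched, RMS difference: "
          f"{rms_db(remaster_peak_matched) - rms_db(original):+.1f} dB")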

#23  Audio_Empire  (rec.audio.high-end)

On Friday, December 14, 2012 1:38:20 PM UTC-8, Scott wrote:
snip

For the sake of ABX, for sure. OTOH, for preference comparisons we simply may have real problems doing a true level match. Let's say we are comparing two different masterings of the same title. One is compressed, and both have substantially different EQ. How do you level match? Peak levels? Average levels? At what frequency?


Well, obviously I was talking about equipment evaluations, not source evaluations. The problem here is that there is no real standard for recordings. They seem to be all over the map. So yes, that would be difficult. Even if you used a test tape (for tape decks) or a test CD to calibrate the CD players, it doesn't mean anything unless the recordings in question were calibrated to the same standard. With tape, this was possible in the days of Dolby "A" or Dolby "B" because the tapes had a Dolby calibration tone at the beginning and the end of the tape. Sadly, commercial CDs don't have that.
#24  Scott  (rec.audio.high-end)

On Dec 14, 8:21 pm, Audio_Empire wrote:

snip

Well, obviously I was talking about equipment evaluations, not source evaluations.


The person who was questioning to value of level matching did not seem
to be limiting his opinion to CDPs and amps. You still have the same
problems in level matching that I stated above when dealing with
loudspeakers. In fact you have even more problems with radiation
pattern differences and room interfaces that make it even more
impossible to do a true level match.
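
To see what the randomized-level scheme quoted above buys you, here is a
small Monte Carlo sketch, purely illustrative (the 0.3 dB residual offset,
the +/- 1 and +/- 2 dB jitter ranges, and the 1000-trial runs are
assumptions chosen for the example, not figures from anyone in this
thread). It simulates a listener who identifies X purely by loudness: with
no jitter, a tiny residual level mismatch is enough to score perfectly;
with per-presentation random jitter, that loudness strategy degrades toward
chance, so a positive result would have to rest on something other than
level.

import random

def run_abx(trials, level_offset_db=0.3, jitter_db=0.0, seed=1):
    # Simulate ABX trials where the only real difference between A and B is
    # a small level offset. The simulated listener picks whichever of A or B
    # is closer in presented loudness to X. Returns the count of correct answers.
    rng = random.Random(seed)

    def presented(level_db):
        # Each presentation gets its own random level error of +/- jitter_db.
        return level_db + rng.uniform(-jitter_db, jitter_db)

    correct = 0
    for _ in range(trials):
        x_is_b = rng.random() < 0.5
        level_a = presented(0.0)
        level_b = presented(level_offset_db)
        level_x = presented(level_offset_db if x_is_b else 0.0)
        guess_b = abs(level_x - level_b) < abs(level_x - level_a)
        correct += (guess_b == x_is_b)
    return correct

for jit in (0.0, 1.0, 2.0):
    n = 1000
    print("jitter +/- %.0f dB: %d/%d correct by loudness alone"
          % (jit, run_abx(n, jitter_db=jit), n))

The flip side, as Doug notes, is statistical: once level no longer carries
information, many more trials are needed for a genuine small difference to
rise above chance.
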

  #25   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default A Brief History of CD DBTs

On Dec 14, 8:17 pm, Barkingspyder wrote:

The nice thing about testing for difference as ABX does is that if there
is no difference detected, you know that the more expensive one is not any
better sounding. Unless it has features you feel you must have, or you just
like the look better, you can save some money. Personally, I like knowing
that a $2,000.00 set of electronics is not going to be outperformed by a
$20,000.00 set. Speakers, of course (the part that you actually hear in a
sound system), are another story entirely.

Heck, if it makes you feel better about buying less expensive gear, I
guess that's nice. But you are putting way too much weight on such a
test if you think you walk away from a single null result "knowing"
that the more expensive gear is not better sounding. But hey, if it
makes you happy, that's great. But not everyone is on board with you
there.
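
Scott's caution about a single null result can be put into numbers. Here is
a minimal sketch, assuming a 16-trial ABX run, a one-sided 5% significance
criterion, and a hypothetical listener who genuinely hears the difference
on 70% of trials; all three figures are assumptions for illustration, not
anything reported in this thread:

from math import comb

def binom_tail(n, k, p):
    # P(X >= k) for X ~ Binomial(n, p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 16          # trials in one ABX run
alpha = 0.05    # one-sided significance criterion

# Smallest number of correct answers that beats chance guessing at this alpha.
k_crit = next(k for k in range(n + 1) if binom_tail(n, k, 0.5) <= alpha)

# Probability that a listener with a true 70% hit rate still falls short of k_crit.
p_true = 0.7
p_null = 1 - binom_tail(n, k_crit, p_true)

print("need %d/%d correct to beat chance at alpha = %.2f" % (k_crit, n, alpha))
print("a listener at %.0f%% still returns a null with probability %.2f"
      % (p_true * 100, p_null))

On those assumptions, a listener who really does hear the difference seven
times out of ten still walks away with a null result more often than not,
which is why a single null run settles very little on its own.
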


  #26   Report Post  
Posted to rec.audio.high-end
Audio_Empire[_2_] Audio_Empire[_2_] is offline
external usenet poster
 
Posts: 235
Default A Brief History of CD DBTs

On Friday, December 14, 2012 8:14:31 PM UTC-8, Barkingspyder wrote:

[ A large number of blank lines were trimmed out of this response. -- dsr ]

On Monday, December 10, 2012 3:17:06 PM UTC-8, Audio_Empire wrote:
On Sunday, December 9, 2012 7:59:31 PM UTC-8, wrote:

The SR reviews are suspect due to SR's editorial policy, which was
that everything printed in SR must serve the advertisers/potential
advertisers.

Where does this knowledge come from?

J. Gordon Holt, who worked for High Fidelity and had friends who worked
for SR, told me this many years ago. But even so, if you had read these
rags, it would be very apparent to even the most casual observer.

That meant no critical evaluations of anything. Ever wonder why SR never
published a negative review from Julian Hirsch? Because it was SR policy
to not publish negative reviews.

Pretty much the way it has to be if you are merely a pipeline
for the industry's public relations.

Same question.

Well, they never did publish a negative review. If a piece of equipment
didn't meet its published specs on the bench, the review never made it to
print. That's just the way it was.

That didn't mean that Julian never came across a piece of equipment
that didn't meet its published specs. It simply meant that SR didn't run
the review, that's all. You see, it was their editorial policy to cater
to the industry, not the consumer.

So they were just like Stereophile?

No. Stereophile seems to grade components these days. If there are flaws,
the flaws are mentioned in the review. Neither SR nor HF ever did that.

It is because of this policy that the late J. Gordon Holt founded
Stereophile. His stint at High Fidelity Magazine (and I believe that he
also worked at SR for a time too) convinced him that these magazines
weren't serving the interest of the consumer. That's also why no one
should be surprised that SR's tests on the audibility of components,
including CD players, show no differences in audible performance.

Or maybe it's because there are so few instances to report.

Not likely. These magazines have been gone for a long time. During their
heyday, there were lots of lousy components. Take for instance the Dynaco
Stereo 120 power amp. The original one was lousy sounding (it had a
crossover notch, fer chrissake!) and unreliable. But Julian Hirsch said it
was great.

It's also the source of the joke "quote" from Julian Hirsch that goes like
this: "Of all the amplifiers that I have ever tested, this was one of
them." That "quote" applies to tuners, CD decks, preamps, receivers, you
name it. And no, Julian never really said that, but if you read the sum
total of his work, including going back to "Hirsch-Houck" labs before
Julian went off on his own, you will see that he never had an opinion. He
just measured the equipment against its published specs, and if it met
them, it was good to go. If not, that fact was never mentioned (as far as
I know, and I subscribed to SR for decades!) and the reviews were not
published. The fact that to SR everything sounded the same was sacrosanct.
I don't wonder that all of those "DBTs" showed no difference in CD players.

I distinctly recall a message in SR from Mr. Hirsch commenting on the fact
that there were virtually no negative reviews. It was because virtually
everything does sound the same. I also recall that there were reviews that
criticized various things; I just can't recall what.

That's balderdash, especially in SR's heyday, but still, it was SR's
editorial stance: "Everything sounds the same." That might have some truth
to it today (differences still exist, but they are very subtle and, as I
have said before, largely of little consequence).

I won't comment on The Sensible Sound tests because I've only seen
a couple of issues of that magazine and don't know what their
editorial policy was.

As for the early Philips (Magnavox) players sounding "different" in
one of those tests, I agree. It did sound different from the early
Japanese players. It was listenable; the early Sonys, Kyoceras,
and Technics players were not, and that's MY opinion.

The facts as I have discovered are this: when components perform within
proper parameters, nobody can hear a difference reliably. When they operate
outside of those parameters, they can be equalized so that they do, and
then the differences are no longer detectable. You are entitled to your
opinion, of course; just recognize that it is at odds with what is known.
If it sounds different, it's either because it is not designed to perform
the way it should or it's broken.

Well, nobody can help those people whose biases cause them to leave their
critical faculties at the door and either cannot or will not hear what is
there for all to hear.
  #27   Report Post  
Posted to rec.audio.high-end
Audio_Empire[_2_] Audio_Empire[_2_] is offline
external usenet poster
 
Posts: 235
Default A Brief History of CD DBTs

On Friday, December 14, 2012 8:22:43 PM UTC-8, Barkingspyder wrote:
On Thursday, December 13, 2012 2:47:34 PM UTC-8, Audio_Empire wrote:
On Tuesday, December 11, 2012 2:22:08 PM UTC-8, Arny Krueger wrote:

And that serves the readership, how? Seems to me that serves the
advertisers. "Yeah, your new amplifier is lousy, but we won't tell
anybody about it. OK? And while we're on the phone, you want to
buy a new ad?"

Gimme a break!

It serves them in that they know that if it has been reviewed, it can be
trusted to perform as it's supposed to, with no audible coloration other
than for tubed gear, turntables, phono cartridges, and tape decks.

Oh, if only that were true!

I still remember the first CD I ever heard, and I knew I had to have one,
if for no other reason than the absence of surface noise. There was so
much more, though: the clearer sound of everything, the attack of the
percussion, and especially the bass.

I still remember the first CD player I ever heard. It was at the Winter
CES in 1981 or 1982. It was a Sony prototype, and my reaction was Yecch!
Today, of course, most players are quite good, and it is possible to
master CDs that are so good that if they had been done that way
across the entire industry, there would have been no need to develop
SACD or DVD-A. Try one of the JVC XRCDs. They are truly state of
the art, and the only thing special about them is that they were
very carefully mastered and manufactured. Most of today's CDs are
terribly compressed and limited and sound lousy. So are some modern
remasterings of classic recordings from many pop stars. Contrast the
latest remastering of these performances with the earlier CD releases of
the same titles, and you'll see what I mean. CD can sound glorious
if done right. Too bad it so seldom is, even for classical music.
  #28   Report Post  
Posted to rec.audio.high-end
Audio_Empire[_2_] Audio_Empire[_2_] is offline
external usenet poster
 
Posts: 235
Default A Brief History of CD DBTs

On Saturday, December 15, 2012 8:08:10 AM UTC-8, Scott wrote:
On Dec 14, 8:21 pm, Audio_Empire wrote:
On Friday, December 14, 2012 1:38:20 PM UTC-8, Scott wrote:
On Dec 14, 12:29 pm, Audio_Empire wrote:
On Friday, December 14, 2012 11:11:42 AM UTC-8, Scott wrote:
On Dec 14, 9:07 am, Doug McDonald wrote:

The person who was questioning the value of level matching did not seem
to be limiting his opinion to CDPs and amps. You still have the same
problems in level matching that I stated above when dealing with
loudspeakers. In fact you have even more problems with radiation
pattern differences and room interfaces that make a true level match
even more difficult to achieve.

My experience is that with speakers, DBTs really aren't necessary. Speakers
are so all over the place with respect to frequency response, distortion,
radiation pattern, and sensitivity (efficiency) that it is a given that no
two models sound the same. Speakers are best evaluated in one's own
listening environment and over a period of several days. Not convenient,
but because speakers are a system heard in conjunction with the room in
which they are playing, it is, alas, necessary (but seldom done).
  #29   Report Post  
Posted to rec.audio.high-end
Andrew Haley Andrew Haley is offline
external usenet poster
 
Posts: 155
Default A Brief History of CD DBTs

Audio_Empire wrote:
My experience is that with speakers, DBTs really aren't necessary.


I don't think it's quite that; it's more that it's very hard to do.
Harman famously made a machine that could quickly exchange speakers so
that they could be compared in the same position. With an opaque cloth,
this removed the physical appearance of the speakers from the
comparison, so that the speakers could be evaluated by sound alone.
Audio reviewers could do the same.

Andrew.
  #30   Report Post  
Posted to rec.audio.high-end
Audio_Empire[_2_] Audio_Empire[_2_] is offline
external usenet poster
 
Posts: 235
Default A Brief History of CD DBTs

On Saturday, December 15, 2012 8:08:32 AM UTC-8, Scott wrote:
On Dec 14, 8:17 pm, Barkingspyder wrote:

The nice thing about testing for difference as ABX does is that if there
is no difference detected, you know that the more expensive one is not any
better sounding. Unless it has features you feel you must have, or you just
like the look better, you can save some money. Personally, I like knowing
that a $2,000.00 set of electronics is not going to be outperformed by a
$20,000.00 set. Speakers, of course (the part that you actually hear in a
sound system), are another story entirely.

Heck, if it makes you feel better about buying less expensive gear, I
guess that's nice. But you are putting way too much weight on such a
test if you think you walk away from a single null result "knowing"
that the more expensive gear is not better sounding. But hey, if it
makes you happy, that's great. But not everyone is on board with you
there.

My sentiments exactly. I'm convinced that while DBTs work great for drug
tests, tests by food manufacturers of new or altered products, etc., I'm
not terribly sure that they work for audio equipment, because the waveform
that we are "analyzing" with our collective ears is pretty complex. Now,
I'm sure that cables and interconnects are the exception. They are supposed
to be simple conductors, and therefore, going by the physics of conductors
and their performance over a frequency range, which are known quantities,
they aren't supposed to have any effect on the signal at audio frequencies,
and so, in a DBT, they demonstrate that they don't.

Otherwise, for DACs, preamps, and amps, there are certainly differences (in
DACs especially), yet they don't show up in DBTs and ABX tests. Granted,
with modern solid-state amps and preamps the differences are minute (and
largely inconsequential), but they do show themselves in properly set up
DBT tests. Often it takes more than a few seconds of listening before the
DUTs are switched, and some characteristics like imaging and soundstage
might not show up at all with some types of music or certain recordings,
but under the right circumstances these things can be heard in a DBT. I've
proved that many times to MY OWN satisfaction.


  #31   Report Post  
Posted to rec.audio.high-end
Jenn[_2_] Jenn[_2_] is offline
external usenet poster
 
Posts: 2,752
Default A Brief History of CD DBTs

In article ,
Andrew Haley wrote:

Audio_Empire wrote:
My experience is that with speakers, DBTs really aren't necessary.


I don't think it's quite that; it's more that it's very hard to do.
Harman famously made a machine that could quickly exchange speakers so
that they could be compared in the same position. With an opaque cloth,
this removed the physical appearance of the speakers from the
comparison, so that the speakers could be evaluated by sound alone.
Audio reviewers could do the same.

Andrew.


So that the speakers could be in the same position? The problem with
this seems obvious.

  #32   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default A Brief History of CD DBTs

On Dec 14, 8:14 pm, Barkingspyder wrote:

The facts as I have discovered are this: when components perform within
proper parameters, nobody can hear a difference reliably. When they operate
outside of those parameters, they can be equalized so that they do, and
then the differences are no longer detectable. You are entitled to your
opinion, of course; just recognize that it is at odds with what is known.
If it sounds different, it's either because it is not designed to perform
the way it should or it's broken.

What you have discovered there are your personal opinions, not facts.
And you are welcome to those opinions. But the "fact" is that there are
plenty of components that have a distinctive sonic signature even
while operating within their intended limitations. Plenty of tube amps
and analog source components have distinctive sonic signatures that
you can't duplicate with EQ. These components are neither "broken" nor
are they failing to perform as designed. How components *should*
be designed to perform is also a matter of opinion, not a matter of fact.
Ultimately the criterion is whether or not the consumer likes
what they hear. We as audiophiles are under no obligation to tailor our
preferences to some arbitrary standards of measured performance.

  #33   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default A Brief History of CD DBTs

On Dec 16, 10:10 am, Jenn wrote:
In article ,
Andrew Haley wrote:

Audio_Empire wrote:
My experience is that with speakers, DBTs really aren't necessary.

I don't think it's quite that; it's more that it's very hard to do.
Harman famously made a machine that could quickly exchange speakers so
that they could be compared in the same position. With an opaque cloth,
this removed the physical appearance of the speakers from the
comparison, so that the speakers could be evaluated by sound alone.
Audio reviewers could do the same.

Andrew.

So that the speakers could be in the same position? The problem with
this seems obvious.

As has been pointed out, there really is no need to do ABX DBTs for
speakers. However, the idea of doing blind preference tests for
speakers is, I think, quite worthwhile. But it is also incredibly
difficult to do without stacking the deck. This *is* where level
matching becomes quite a complicated issue. And, as you are alluding to,
so does speaker position. Add to that the room itself, which may favor
one speaker system over another, and you have a very, very difficult
task. And then of course there is the physical aspect of changing out
speakers quickly without giving away which are which.

Certainly the HK facility, which allows for quick-switching double-blind
comparisons, is state of the art. But even that has its limitations.
Unfortunately, I think the methodologies used there are very problematic
as well. But that is another thread.
  #34   Report Post  
Posted to rec.audio.high-end
Audio_Empire[_2_] Audio_Empire[_2_] is offline
external usenet poster
 
Posts: 235
Default A Brief History of CD DBTs

On Sunday, December 16, 2012 6:52:40 AM UTC-8, Andrew Haley wrote:
Audio_Empire wrote:

My experience is that with speakers, DBTs really aren't necessary.

I don't think it's quite that; it's more that it's very hard to do.
Harman famously made a machine that could quickly exchange speakers so
that they could be compared in the same position. With an opaque cloth,
this removed the physical appearance of the speakers from the
comparison, so that the speakers could be evaluated by sound alone.
Audio reviewers could do the same.

Andrew.

I still maintain that DBTs aren't necessary for speaker evaluation, but I do
maintain that it is very necessary (and eminently desirable) to evaluate
speakers in your OWN listening room. That's what is difficult to do (not
that DBTs on speakers would be easy or convenient), because (A) few
dealerships will let you borrow large speakers, and (B) even if you did find
an accommodating dealer, floorstanders are usually heavy and difficult to
transport. That leaves only "mini monitor" types.


  #35   Report Post  
Posted to rec.audio.high-end
Audio_Empire[_2_] Audio_Empire[_2_] is offline
external usenet poster
 
Posts: 235
Default A Brief History of CD DBTs

On Sunday, December 16, 2012 10:10:44 AM UTC-8, Scott wrote:
On Dec 14, 8:14 pm, Barkingspyder wrote:

The facts as I have discovered are this: when components perform within
proper parameters, nobody can hear a difference reliably. When they operate
outside of those parameters, they can be equalized so that they do, and
then the differences are no longer detectable. You are entitled to your
opinion, of course; just recognize that it is at odds with what is known.
If it sounds different, it's either because it is not designed to perform
the way it should or it's broken.

What you have discovered there are your personal opinions, not facts.
And you are welcome to those opinions. But the "fact" is that there are
plenty of components that have a distinctive sonic signature even
while operating within their intended limitations. Plenty of tube amps
and analog source components have distinctive sonic signatures that
you can't duplicate with EQ. These components are neither "broken" nor
are they failing to perform as designed. How components *should*
be designed to perform is also a matter of opinion, not a matter of fact.
Ultimately the criterion is whether or not the consumer likes
what they hear. We as audiophiles are under no obligation to tailor our
preferences to some arbitrary standards of measured performance.

Ultimately, what most of us are after is a system that sounds to us like
real music played in a real space. There are many different versions of
that goal, and each of them is valid to SOME listener. Some like their
sound lush and romantic, and those people are drawn to classic tube designs
for their electronics (like a Chinese Yaqin MC-100B); others like their
sound cool and analytical, and they would go for a solid-state design known
for that kind of sound (like Krell); and some would want their sound as
neutral as possible (Nelson Pass). As you say, all of these amps have
different sonic signatures, and those signatures don't necessarily reveal
themselves in a DBT (although some will). Most of the differences are very
subtle, and many are, in the final analysis, trivial. But one can
definitely hear the difference between a Yaqin MC-100B and a Krell i300,
because while the Yaqin sounds very lush and "musical," it is definitely
"colored," and the Krell is squeaky clean. Horses for courses and all that.


  #36   Report Post  
Posted to rec.audio.high-end
Andrew Haley Andrew Haley is offline
external usenet poster
 
Posts: 155
Default A Brief History of CD DBTs

Jenn wrote:
In article ,
Andrew Haley wrote:

Audio_Empire wrote:
My experience is that with speakers, DBTs really aren't necessary.

I don't think it's quite that; it's more that it's very hard to do.
Harman famously made a machine that could quickly exchange speakers so
that they could be compared in the same position. With an opaque cloth,
this removed the physical appearance of the speakers from the
comparison, so that the speakers could be evaluated by sound alone.
Audio reviewers could do the same.

So that the speakers could be in the same position? The problem with
this seems obvious.

The advantage of this technique is that it makes it possible for the
speakers to be in the same position, and it allows them to be
exchanged quickly. This allows very short-term auditory memory to be
used. I can't see any disadvantage: it's not as if you're forced to
have them in the same position.

Andrew.

  #37   Report Post  
Posted to rec.audio.high-end
Arny Krueger[_5_] Arny Krueger[_5_] is offline
external usenet poster
 
Posts: 239
Default A Brief History of CD DBTs

"Scott" wrote in message
...
On Dec 14, 8:17 pm, Barkingspyder wrote:

The nice thing about testing for difference as ABX does is that if there
is no difference detected, you know that the more expensive one is not any
better sounding. Unless it has features you feel you must have, or you just
like the look better, you can save some money. Personally, I like knowing
that a $2,000.00 set of electronics is not going to be outperformed by a
$20,000.00 set. Speakers, of course (the part that you actually hear in a
sound system), are another story entirely.

Heck, if it makes you feel better about buying less expensive gear, I guess
that's nice.


That comment seems to be descending a steeply downward angled nose. ;-)

But you are putting way too much weight on such a test if you think you
walk away from a single null result "knowing"
that the more expensive gear is not better sounding.


Ignores the fact that we are repeatedly told that hyper-expensive equipment
sounds "mind-blowingly" better and that one has to be utterly tasteless not
to notice the difference immediately.

Also ignores the fact that all known objective bench testing, and its
interpretation in conjunction with our best and most recent knowledge of
psychoacoustics, says that no audible differences can reasonably be
expected to be heard.

But hey, if it makes you happy that's great.


It makes me happy to know that the best available current science actually
works out in the real world and that technological progress is still taking
place.

It makes me happy that good sound can be available to the masses if they
throw off the chains of tradition and ignorance.

I am also happy to see recognition of the fact that simply throwing vast
piles of money at solving problems that have been solved for a long time
doesn't help solve them. If we could only convince our politicians of that!
;-)

But not everyone is on board with you there.


Exactly. Those who have invested heavily in anti-science probably did so
because they are in some state of being poorly informed or are in denial of
the relevant scientific facts. Very little rational argument can be offered
to change their minds, because rational thought has nothing to do with what
they currently believe.


  #38   Report Post  
Posted to rec.audio.high-end
Arny Krueger[_5_] Arny Krueger[_5_] is offline
external usenet poster
 
Posts: 239
Default A Brief History of CD DBTs

"Audio_Empire" wrote in message
...
On Saturday, December 15, 2012 8:08:32 AM UTC-8, Scott wrote:

My sentiments exactly. I'm convinced that while DBTs work great for drug
tests, tests by food manufacturers about new or altered products, etc.,
I'm not terribly sure that they work for audio equipment because the
waveform that we are "analyzing" with our collective ears is pretty
complex.


Anybody who has seen how certain tightly held but anti-scientific beliefs
are readily deconstructed using the results of bias-controlled listening
tests can see how people who keep on holding onto those beliefs would have
reservations about such a clear source of evidence that disagrees with them.

Quote:
Otherwise for DACs, preamps and amps, there are certainly differences (in
DACs, especially) yet they don't show-up in DBTs and ABX tests.
On balance we have a world that is full of DACs with better than +/- 0.1 dB
frequency response over the actual audible range and 100 dB dynamic range.
They now show up in $100 music players and $200 5.1 channel AVRs. Where
in fact are the audible differences in those DACs supposed to be coming
from?

Quote:
Granted, with modern, solid-state amps and preamps the differences are
minute (and largely inconsequential), but they do show themselves in
properly set up DBT tests.
No adequate documentation of the above alleged fact has been seen around
here AFAIK.
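
As a back-of-the-envelope check on the dynamic-range figure mentioned
above, the standard rule of thumb for an ideal N-bit quantizer driven by a
full-scale sine is roughly 6.02*N + 1.76 dB of signal-to-quantization-noise
(real converters fall somewhat short of the ideal). A quick sketch:

def ideal_dynamic_range_db(bits):
    # Ideal SNR of an N-bit PCM quantizer with a full-scale sine input.
    return 6.02 * bits + 1.76

for bits in (16, 20, 24):
    print("%d-bit PCM: ~%.1f dB theoretical" % (bits, ideal_dynamic_range_db(bits)))

So the 16-bit CD format itself tops out just under 100 dB in theory, which
puts the 100 dB class converters in inexpensive players in context.
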




  #39   Report Post  
Posted to rec.audio.high-end
Arny Krueger[_5_] Arny Krueger[_5_] is offline
external usenet poster
 
Posts: 239
Default A Brief History of CD DBTs

"Scott" wrote in message
...
On Dec 14, 8:21 pm, Audio_Empire wrote:

The person who was questioning to value of level matching did not seem
to be limiting his opinion to CDPs and amps.


Seems like the backwards side of the argument. Doing comparisons of music
players, DACs and amps without proper level matching seems to be the prelude
to a massive waste of time. If the levels are not matched well enough then
there will be audible differences, but we have no way of knowing that the
causes are not our poor testing practices as opposed to any relevent
property of the equipment being tested.

You still have the same
problems in level matching that I stated above when dealing with
loudspeakers. In fact you have even more problems with radiation
pattern differences and room interfaces that make it even more
impossible to do a true level match.


The known technical differences among loudspeakers are immense and gross
compared to those among music players, DACs and amps. I know of nobody who
claims that speakers can be sonically indistinguishable except in limited,
trivial cases. I don't know how this fact relates to a thread about "A brief
history of CD DBT" except as a distraction or red herring argument.


  #40   Report Post  
Posted to rec.audio.high-end
Audio_Empire[_2_] Audio_Empire[_2_] is offline
external usenet poster
 
Posts: 235
Default A Brief History of CD DBTs

On Sunday, December 16, 2012 7:45:12 PM UTC-8, Scott wrote:
On Dec 16, 10:10 am, Jenn wrote:
In article ,
Andrew Haley wrote:

Audio_Empire wrote:
My experience is that with speakers, DBTs really aren't necessary.

I don't think it's quite that; it's more that it's very hard to do.
Harman famously made a machine that could quickly exchange speakers so
that they could be compared in the same position. With an opaque cloth,
this removed the physical appearance of the speakers from the
comparison, so that the speakers could be evaluated by sound alone.
Audio reviewers could do the same.

Andrew.

So that the speakers could be in the same position? The problem with
this seems obvious.

As has been pointed out, there really is no need to do ABX DBTs for
speakers. However, the idea of doing blind preference tests for
speakers is, I think, quite worthwhile. But it is also incredibly
difficult to do without stacking the deck. This *is* where level
matching becomes quite a complicated issue. And, as you are alluding to,
so does speaker position. Add to that the room itself, which may favor
one speaker system over another, and you have a very, very difficult
task. And then of course there is the physical aspect of changing out
speakers quickly without giving away which are which.

Certainly the HK facility, which allows for quick-switching double-blind
comparisons, is state of the art. But even that has its limitations.
Unfortunately, I think the methodologies used there are very problematic
as well. But that is another thread.

I was once in a stereo store in London, England, where they had two
turntables (speaker Lazy Susans?), half of which were hidden by a
false wall, so that as the turntables turned, speakers (R & L) would
emerge from behind the false wall, and the turntable would stop
with the pair of speakers side by side, about 6 ft apart, and they
would be connected to the amplifier automatically. If you wanted to
hear another pair, the sales guy would push a button and the two
turntables would turn and another pair would emerge and then
lock into place. How they hooked them up and automatically
changed the connections from speaker set to speaker set as
each came into position, I can only guess. They must have had
some kind of commutator arrangement on the underside of the
two "Lazy Susans." I have never seen it done that way anywhere
else, and I'm not sure a high-end store would want to do it
that way for fear that the commutator arrangement would
compromise the sound by virtue of introducing a set of
contacts between the amplifier and the speakers.