#41 · Audio_Empire · Posted to rec.audio.high-end
A Brief History of CD DBTs

On Monday, December 17, 2012 6:48:54 AM UTC-8, Arny Krueger wrote:
"Audio_Empire" wrote in message

...

On Saturday, December 15, 2012 8:08:32 AM UTC-8, Scott wrote:

My sentiments exactly. I'm convinced that while DBTs work great for drug tests, tests by food manufacturers about new or altered products, etc., I'm not terribly sure that they work for audio equipment, because the waveform that we are "analyzing" with our collective ears is pretty complex.


Anybody who has seen how certain tightly held but anti-scientific beliefs are readily deconstructed using the results of bias-controlled listening tests can see how people who keep on holding onto those beliefs would have reservations about such a clear source of evidence that disagrees with them.


Well, first of all, those "beliefs" that you are saddling me with are not "anti-scientific". There are differences in electronic equipment, and I'm convinced that some day there will be tests that will reveal them. I've been in electronics long enough to know that you will never uncover a piece of gear's flaws if your suite of measurements keeps measuring the wrong thing. Unfortunately, I don't know (any more than anyone else) what we would test to account for the differences in modern amps (very small differences, probably not worth the effort) or DACs (much larger differences). None of these things are addressed in any test suite I've seen. Yes, we measure frequency response, IM and harmonic distortion, channel separation, and impulse response (in DACs); perhaps we use an oscilloscope to look at square waves to measure low- and high-frequency phase shift. But none of those really address things like the difference in imaging ability between two DACs, for instance, where one of them has a more three-dimensional image presentation than the other, especially since both DACs measure similar channel separation (which is so high in digital gear as to be, for all practical purposes, beyond the limits of the human ear to perceive that kind of isolation of right and left). Obviously, there is something that we humans are not measuring.



Quote:

Otherwise for DACs, preamps and amps, there are certainly differences (in DACs, especially), yet they don't show up in DBTs and ABX tests.




On balance we have a world that is full of DACs with better than +/- 0.1 dB frequency response over the actual audible range and 100 dB dynamic range. They now show up in $100 music players and $200 5.1-channel AVRs. Where in fact are the audible differences in those DACs supposed to be coming from?


That's the puzzlement, isn't it? Like I said, if the accepted suite of audio measurements doesn't answer the questions, then obviously there is something that we don't measure. It's the only plausible answer (and don't posture that these differences are imaginary, the product of listening biases, because they aren't).
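The "100 dB dynamic range" figure quoted above is roughly what ideal 16-bit quantization predicts. As a sanity check, here is the textbook formula for the signal-to-noise ratio of an ideal N-bit converter (a theoretical bound, not a measurement of any particular DAC):

```python
# Theoretical SNR of an ideal N-bit converter: 6.02 * N + 1.76 dB.
# Textbook quantization math; real converters fall somewhat short of this.
def ideal_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(ideal_snr_db(16))  # 16-bit PCM (CD): 98.08 dB
print(ideal_snr_db(24))  # 24-bit PCM: 146.24 dB
```

Modern DACs routinely get within a few dB of the 16-bit figure, which is where the quoted "100 dB" comes from.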

Quote:

Granted, with modern, solid-state amps and preamps the differences are minute (and largely inconsequential), but they do show themselves in properly set up DBT tests.




No adequate documentation of the above alleged fact has been seen around
here AFAIK.


I agree. It's a puzzlement. I know that I, and several other audio enthusiasts of my acquaintance, can tell the difference between two amps in a carefully set-up DBT almost every time. Yet others in these ad-hoc tests seem to hear no differences, and their results are essentially random, i.e. a null result. The only thing that I can come up with is that I have been listening critically to different components for so long that I pick up on audible clues that others simply miss.
#42 · Dick Pierce · Posted to rec.audio.high-end

Audio_Empire wrote:
That's the puzzlement, isn't it? Like I said, if the accepted suite of audio measurements doesn't answer the questions, then obviously there is something that we don't measure. It's the only plausible answer (and don't posture that these differences are imaginary, the product of listening biases, because they aren't).


But you have, in effect, stated elsewhere in this thread that as far as you are concerned, the only "accepted suite of audio measurements" for, say, power amplifiers is power, frequency response, and distortion. Yet, for decades, far more has not only been available, it has been routinely used.

Some of the measurements, for example TIM, have been shown to be irrelevant because they force conditions so utterly unrealistic that they tell us nothing at all about the performance of systems under conditions of listening to signal, oh, like music. Others, like "damping factor", have been shown to be not only irrelevant but useless, except in the most pathological of cases.

Some other measurements, like multi-tone intermodulation, may have
more relevance.

However, what we see hashed over and over again are manufacturers' specifications masquerading as "measurements." Fine, we all agree they are not the same. So why do we see them trotted out time and again, erected as a strawman to be knocked down, and for what purpose?

If you want to talk about measurements, fine. Do so.

But the "accepted suite" of audio measurements in the high-end audio world is QUITE different from the "accepted suite" of audio measurements in a much bigger, richer, and certainly much better informed world than the tiny clique of high-end audio affords.

--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+

#43 · nabob33@hotmail.com · Posted to rec.audio.high-end

On Monday, December 17, 2012 5:41:27 PM UTC-5, Audio_Empire wrote:

Well, first of all, those "beliefs" that you are saddling me with are not "anti-scientific". There are differences in electronic equipment, and I'm convinced that some day there will be tests that will reveal them. I've been in electronics long enough to know that you will never uncover a piece of gear's flaws if your suite of measurements keeps measuring the wrong thing. Unfortunately, I don't know (any more than anyone else) what we would test to account for the differences in modern amps (very small differences, probably not worth the effort) or DACs (much larger differences). None of these things are addressed in any test suite I've seen. Yes, we measure frequency response, IM and harmonic distortion, channel separation, and impulse response (in DACs); perhaps we use an oscilloscope to look at square waves to measure low- and high-frequency phase shift. But none of those really address things like the difference in imaging ability between two DACs, for instance, where one of them has a more three-dimensional image presentation than the other, especially since both DACs measure similar channel separation (which is so high in digital gear as to be, for all practical purposes, beyond the limits of the human ear to perceive that kind of isolation of right and left). Obviously, there is something that we humans are not measuring.


This is not obvious at all. First, amps and DACs are not mysteries of nature; they are man-made objects. If we couldn't measure their performance, we could not design them in the first place. I'm fairly certain that the poster here does not know how to design audio gear, so perhaps it is all magic to him. That would explain his viewpoint.

Second, there really isn't that much to measure. An audio signal, like all electrical signals, has only two attributes: amplitude and frequency. (Note that an eardrum's movement has the same two attributes.) We can be fairly certain that we are measuring amplitude and frequency quite accurately. There's really nothing missing.

Finally, what seals the case is that our two methods of assessing audibility, measurements and DBTs, agree with each other. That's how science validates itself: by finding multiple confirmations of the same conclusions. If AE were right, then BOTH our measurements AND our listening tests would have to be flawed, and flawed in the same way. That would be a very strange thing, given that they were developed independently.

bob

#44 · Scott · Posted to rec.audio.high-end

On Dec 17, 6:43 am, "Arny Krueger" wrote:
"Scott" wrote in message

...
On Dec 14, 8:17 pm, Barkingspyder wrote:



The nice thing about testing for difference as ABX does is that if there is no difference detected, you know that the more expensive one is not any better sounding. Unless it has features you feel you must have, or you just like the look better, you can save some money. Personally, I like knowing that a $2000.00 set of electronics is not going to be outperformed by a $20,000.00 set. Speakers, of course (the part that you actually hear in a sound system), are another story entirely.

Heck, if it makes you feel better about buying less expensive gear, I guess that's nice.


That comment seems to be descending a steeply downward angled nose. ;-)


Quite the contrary. I am actually happy to see people enjoying their hobby. It is, in the end, a perception-based endeavour. If believing everything sounds the same makes one happy, that is great. If believing that wrapping the cat in tin foil and freezing pictures of your grandma makes your system sound better, that is great too. Unless you are the cat. But misrepresenting science is not OK. I do take issue with that.

But you are putting way too much weight on such a test if you think you walk away from a single null result "knowing" that the more expensive gear is not better sounding.


Ignores the fact that we are repeatedly told that hyper-expensive equipment
sounds "mind blowingly" better and that one has to be utterly tasteless to
not notice the difference immediately.


And here is a classic case in point. You are getting ready to wave the science flag again in this post, and here you are suggesting that a proper analysis of data would include taking audiophile banter into account. Understanding the true significance of a single null result does not require consideration of what you or anyone else has been told by other audiophiles. For that to affect the weight placed on any single test result would be quite unscientific thinking.


Also ignores the fact that all known objective bench testing and its interpretation, in conjunction with our best and most recent knowledge of psychoacoustics, says that no audible differences can reasonably be expected to be heard.


And here we have a gross misrepresentation of the facts.



But hey, if it makes you happy, that's great.


It makes me happy to know that the best available current science actually
works out in the real world and that technological progress is still taking
place.


Makes me happy too. Not sure what that has to do with my post though. I suppose indirectly we should both be happy that the best available current science is built on a rigorous execution of the scientific method and an understanding of the weight that should be given to any single result of any given piece of research. It makes me happy that real scientists know better than to ever make claims of fact based on a single null result.


It makes me happy that good sound can be available to the masses if they
throw off the chains of tradition and ignorance.


So it's a good thing that Stereo Review is dead then. :-)



But not everyone is on board with you there.


Exactly. Those who have invested heavily in anti-science probably did so because they are in some state of being poorly informed, or are in denial of the relevant scientific facts. There is very little rational argument that can be made to change their minds, because rational thought has nothing to do with what they currently believe.


And there you go waving that science flag again. It's OK as far as I
am concerned that you believe whatever you want to believe about
audio. But I will continue to call you out on your constant
misrepresentations of real science.

#45 · Scott · Posted to rec.audio.high-end

On Dec 17, 6:49 am, "Arny Krueger" wrote:
"Scott" wrote in message

...
On Dec 14, 8:21 pm, Audio_Empire wrote:

The person who was questioning the value of level matching did not seem to be limiting his opinion to CDPs and amps.


Seems like the backwards side of the argument. Doing comparisons of music players, DACs, and amps without proper level matching seems to be the prelude to a massive waste of time. If the levels are not matched well enough, then there will be audible differences, but we have no way of knowing that the causes are not our poor testing practices, as opposed to any relevant property of the equipment being tested.


Why on earth would you cut out all the relevant discussion and then post the obvious, which has already been covered? I already stated that level matching is essential in any ABX DBT of the above components, since the goal of an ABX test is only to test for audible differences, not preferences.

You still have the same problems in level matching that I stated above when dealing with loudspeakers. In fact, you have even more problems, with radiation pattern differences and room interfaces that make it even more impossible to do a true level match.


The known technical differences among loudspeakers are immense and gross compared to those among music players, DACs, and amps. I know of nobody who claims that speakers can be sonically indistinguishable, except in limited, trivial cases. I don't know how this fact relates to a thread about "A brief history of CD DBTs" except as a distraction or red herring argument.


I suggest you follow the thread more closely if you think this is a red herring argument rather than a relevant point regarding issues raised in this thread by another poster: the use of DBTs for determining preferences, and the relative merits and difficulties of the level matching one has to deal with in doing blind preference comparison tests with things that really can't be truly level matched, due to differences in dynamic range and frequency response, among other things.
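The level matching both sides agree is essential for electronics comes down to a simple computation: express the two devices' output levels as a ratio in dB and compare it to a tolerance. A minimal sketch (the 0.1 dB tolerance and the voltage figures are illustrative, not taken from the thread):

```python
import math

def level_difference_db(rms_a: float, rms_b: float) -> float:
    """Level difference between two RMS voltages, in dB."""
    return 20.0 * math.log10(rms_a / rms_b)

def matched(rms_a: float, rms_b: float, tol_db: float = 0.1) -> bool:
    """True if the two levels agree within tol_db."""
    return abs(level_difference_db(rms_a, rms_b)) <= tol_db

# Example: device A outputs 2.010 V RMS, device B 2.000 V RMS for the same input.
print(level_difference_db(2.010, 2.000))  # about 0.043 dB
print(matched(2.010, 2.000))              # True: within the 0.1 dB tolerance
```

Scott's point stands for loudspeakers: a single gain correction like this cannot equalize devices whose frequency responses differ, so no scalar adjustment makes them "level matched" at every frequency.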



#46 · KH · Posted to rec.audio.high-end

On 12/17/2012 8:46 PM, Scott wrote:
On Dec 17, 6:43 am, "Arny Krueger" wrote:
"Scott" wrote in message

...
On Dec 14, 8:17 pm, Barkingspyder wrote:

snip

But you are putting way too much weight on such a test if you think you
walk away from a single null result "knowing"
that the more expensive gear is not better sounding.


Ignores the fact that we are repeatedly told that hyper-expensive equipment
sounds "mind blowingly" better and that one has to be utterly tasteless to
not notice the difference immediately.


And here is a classic case in point. You are getting ready to wave the
science flag again in this post and here you are suggesting that a
proper analysis of data would include taking audiophile banter into
account.


In this instance, as Arny presented it, it would not be "banter" but would, rather, define the null hypothesis. I.e., instead of being "there are no audible differences", it becomes "there are no major, unmistakeable audible differences". In a "typical" audiophile scenario, these are the differences described. How many of these claims are "unmistakeable", "not at all subtle", etc.? In constructing the null hypothesis of any test, these qualifiers cannot be casually ignored.

This is, to me, the heart of the stereotypical subjectivist argument against DBT or ABX testing: the differences are claimed as obvious sighted, but then become obscured by any imposed test rigor. In testing any such claim, the magnitude of the difference (e.g. "obvious to anyone with ears") defines the precision and detectability requirements of the test design.

Understanding the true significance of a single null result does not require consideration of what you or anyone else has been told by other audiophiles.


That would rest entirely upon how the null hypothesis is constructed,
and may indeed include such claims.

For that to affect the weight placed on any single test result would be quite unscientific thinking.


Again, simply not accurate with respect to the world of possible hypotheses. Any null result for a discrimination test evaluating "obvious" differences will be significant, if not dispositive, for that test and equipment, as long as the test is set up properly.


Also ignores the fact that all known objective bench testing and its interpretation, in conjunction with our best and most recent knowledge of psychoacoustics, says that no audible differences can reasonably be expected to be heard.


And here we have a gross misrepresentation of the facts.



But hey, if it makes you happy that's great.


It makes me happy to know that the best available current science actually
works out in the real world and that technological progress is still taking
place.


Makes me happy too. Not sure what that has to do with my post though. I suppose indirectly we should both be happy that the best available current science is built on a rigorous execution of the scientific method and an understanding of the weight that should be given to any single result of any given piece of research. It makes me happy that real scientists know better than to ever make claims of fact based on a single null result.

Sorry, but you seem to be using a rather unique definition of "fact", as "real scientists" make claims of fact for every such study. The results *are* facts, and are true, and applicable, within the constraints and confidence interval of the test design. To believe otherwise would require a refutation of statistics. If you doubt this, then please explain exactly how many tests are required to result in "facts".

Keith
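Keith's point about confidence intervals can be made concrete. Under the "just guessing" null hypothesis, an ABX score follows a binomial distribution with p = 0.5, and the one-sided tail probability says how surprising a given score would be. A sketch using only the standard library (the 12-of-16 and 9-of-16 scores are made-up examples):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: probability of getting at least `correct`
    out of `trials` right by coin-flipping (p = 0.5 under the null)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_p_value(12, 16))  # ~0.038: unlikely to be guessing
print(abx_p_value(9, 16))   # ~0.40: entirely consistent with guessing
```

A small p-value licenses a claim of audible difference; a large one, as Scott argues elsewhere in the thread, is only an absence of evidence from that particular run.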

#47 · Dick Pierce · Posted to rec.audio.high-end

wrote:

Second, there really isn't that much to measure. An audio signal,
like all electrical signals, has only two attributes: amplitude and
frequency.


Actually, to be more precise, the two fundamental attributes are amplitude and time, but your point remains: it's not that it's some higher-order dimensional thingy with some of those dimensions hidden. From the amplitude-vs-time signal we can derive other information, such as the amplitude-vs-frequency you mention. Two ways of doing this come to mind: through various mathematical transforms, or through a process called hearing.

Extended to a DAC or a typical power amplifier, this two-dimensional problem becomes a three-dimensional one: the amplitude of the right channel and the amplitude of the left channel vs time (we assume the two channels use the same time :-). There's nothing else going on; there is no other "hidden channel" for "hidden information."

But NONE of this is a mystery, at least not to those in the signal
processing, acoustical/psychophysical realm.

It may well be a mystery (and often times seems like it is) to those
in high-end audio. But, I'd assert, that's an education problem.

--
+--------------------------------+
+ Dick Pierce |
+ Professional Audio Development |
+--------------------------------+
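The "various mathematical transforms" Pierce mentions, which recover amplitude-vs-frequency from amplitude-vs-time, can be illustrated with a naive discrete Fourier transform. A sketch (real analysis software would use an FFT; the 1 kHz test tone is illustrative):

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive DFT: amplitude-vs-time in, per-bin amplitude-vs-frequency out."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

# A 1 kHz sine sampled at 8 kHz for 64 samples: the energy lands in one bin.
fs, f, n = 8000, 1000, 64
x = [math.sin(2 * math.pi * f * t / fs) for t in range(n)]
mags = dft_magnitudes(x)
peak_bin = max(range(n // 2), key=lambda k: mags[k])
print(peak_bin * fs / n)  # 1000.0 Hz, the frequency of the test tone
```

The same data viewed two ways, exactly as described: nothing is added or hidden by moving between the time and frequency views.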

#48 · Arny Krueger · Posted to rec.audio.high-end

"Audio_Empire" wrote in message
...

There are differences in electronic equipment and I'm convinced that some day there will be tests that will reveal them.


That the equipment is different is fact. The question at hand is not about that fact. The question at hand is about the audible significance of those differences.

The use of audio gear seems to be pretty straightforward and simple. We apply audio signals to audio gear, turn it into sound in listening rooms using loudspeakers and headphones, and listen to it.

The symmetry between listening tests and listening to music for enjoyment
can be as complete as we have the patience to make it so.

It is ironic to me that much of the so-called evidence supporting the existence of mysterious equipment properties that elude sophisticated testing is obtained by such crude means. It even eludes all known attempts to duplicate those crude means while imposing a few basic, simple bias controls by the least intrusive means found after extensive investigation and experimentation.

If you are talking about technical tests, then the solution to our problem can be found in multivariate calculus. It is a mathematical fact that any system with a finite-dimensional state can be fully analyzed. An audio channel has two variables, time and intensity. It is very simple. Mathematicians have analyzed these two variables for maybe 100 years (analysis actually started no later than Fourier).


I've been in electronics long enough to know that you will never uncover a piece of gear's flaws if your suite of measurements keeps measuring the wrong thing.


That's a truism, but without more specifics it is just idle speculation.

Unfortunately, I don't know (any more than anyone else) what we would test to account for the differences in modern amps (very small differences, probably not worth the effort) or DACs (much larger differences).


What differences are we testing for: things that only show up in sighted evaluations, or in evaluations that are semi-, demi-, or quasi-controlled?

Once we learned how to do reliable listening tests back in the 1970s, there have been no mysteries: what we hear we measure, and vice versa, given that what we measure is large enough to be audible.

As others have pointed out, one of the first casualties of reliable listening tests was the hysteria over slew-rate-induced distortion.

None of these things are addressed in any test suite I've seen.


None of what? So far I see no actual description of something with hands and
feet.

Yes, we measure frequency response, IM and harmonic distortion, channel separation, and impulse response (in DACs); perhaps we use an oscilloscope to look at square waves to measure low- and high-frequency phase shift. But none of those really address things like the difference in imaging ability between two DACs, for instance,


Yet another audiophile myth that dies a quick death when you start doing adequately controlled listening tests.



#49 · Scott · Posted to rec.audio.high-end

On Dec 18, 4:09 am, KH wrote:
On 12/17/2012 8:46 PM, Scott wrote:

On Dec 17, 6:43 am, "Arny Krueger" wrote:
"Scott" wrote in message


...
On Dec 14, 8:17 pm, Barkingspyder wrote:


snip

But you are putting way too much weight on such a test if you think you walk away from a single null result "knowing" that the more expensive gear is not better sounding.


Ignores the fact that we are repeatedly told that hyper-expensive equipment
sounds "mind blowingly" better and that one has to be utterly tasteless to
not notice the difference immediately.


And here is a classic case in point. You are getting ready to wave the
science flag again in this post and here you are suggesting that a
proper analysis of data would include taking audiophile banter into
account.


In this instance, as Arny presented it, it would not be "banter" but would, rather, define the null hypothesis. I.e., instead of being "there are no audible differences", it becomes "there are no major, unmistakeable audible differences". In a "typical" audiophile scenario, these are the differences described. How many of these claims are "unmistakeable", "not at all subtle", etc.? In constructing the null hypothesis of any test, these qualifiers cannot be casually ignored.

This is, to me, the heart of the stereotypical subjectivist argument against DBT or ABX testing: the differences are claimed as obvious sighted, but then become obscured by any imposed test rigor. In testing any such claim, the magnitude of the difference (e.g. "obvious to anyone with ears") defines the precision and detectability requirements of the test design.


Well, thank goodness, in real science researchers know better than to move the goalposts due to trash talking between audiophiles. I would think that if objectivists were genuinely interested in applying science to the question of amplifier sound, they would not move the goalposts, nor would they use ABX DBTs the way they have when it comes to amplifier sound: that being, typically, breaking out ABX and failing ever to control for same-sound bias, or even to calibrate the sensitivity of the test. Without such calibration, a single null result tells us very little about what was and was not learned about the sound of the components under test.

But of course my point was the fact that no scientist worth his or her salt would ever make dogmatic claims of fact based on the results of any single ABX DBT null. And if one thinks that claims from subjectivists should alter that fact, then one simply doesn't understand how real science deals with and interprets real scientific data.
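Scott's "calibrate the sensitivity of the test" point is, in statistical terms, about power: the probability that a given run would detect a real but modest audible difference at all. A sketch (the listener who is right 70% of the time is a hypothetical assumption, chosen only to show why short runs produce uninformative nulls):

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

def power(trials: int, true_p: float = 0.7, alpha: float = 0.05) -> float:
    """Chance an ABX run detects a listener who is right true_p of the time."""
    # Smallest score that would count as significant under the guessing null:
    threshold = min(k for k in range(trials + 1)
                    if binom_tail(trials, k, 0.5) <= alpha)
    return binom_tail(trials, threshold, true_p)

print(power(10))  # ~0.15: a 10-trial run usually misses this listener
print(power(50))  # ~0.86: 50 trials catch the same listener most of the time
```

This is what a positive control establishes empirically: if the panel cannot reliably detect a difference known to be audible, a null result on the equipment under test carries little weight.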

Understanding the true significance of a single null result does not require consideration of what you or anyone else has been told by other audiophiles.


That would rest entirely upon how the null hypothesis is constructed,
and may indeed include such claims.


No, it does not. Real science builds its conclusions on an accumulation of research. Again, if one understands how science works, one should know the real standing of a single null result: it is most certainly not something one can reasonably close the books on and call final proof of no difference.

For that to affect the weight placed on any single test result would be quite unscientific thinking.


Again, simply not accurate with respect to the world of possible hypotheses. Any null result for a discrimination test evaluating "obvious" differences will be significant, if not dispositive, for that test and equipment, as long as the test is set up properly.


Sorry, but you are plainly wrong. No scientist would ever put that much stock in one test. It runs contrary to the very idea of falsifiability, peer review, and verification via repetition of previous tests. Very, very unscientific.

Also ignores the fact that all known objective bench testing and its interpretation, in conjunction with our best and most recent knowledge of psychoacoustics, says that no audible differences can reasonably be expected to be heard.


And here we have a gross misrepresentation of the facts.


But hey, if it makes you happy, that's great.


It makes me happy to know that the best available current science actually
works out in the real world and that technological progress is still taking
place.


Makes me happy too. Not sure what that has to do with my post though. I suppose indirectly we should both be happy that the best available current science is built on a rigorous execution of the scientific method and an understanding of the weight that should be given to any single result of any given piece of research. It makes me happy that real scientists know better than to ever make claims of fact based on a single null result.


Sorry, but you seem to be using a rather unique definition of "fact" as
"real scientists" make claims of fact for every such study.


Complete nonsense. And you say this after bringing up the null hypothesis. You might want to read up on the null hypothesis and what it proves and what it does not prove:
http://en.wikipedia.org/wiki/Null_hypothesis

The results *are* facts, and are true, and applicable, within the constraints and confidence interval of the test design. To believe otherwise would require a refutation of statistics. If you doubt this, then please explain exactly how many tests are required to result in "facts".


No, the results are not facts; the results are data. Often in this kind of research one will find conflicting data. That is why no one who understands these kinds of things would ever draw a conclusion of fact from a single test. To say it would be a hasty conclusion would be an understatement.

#50 · Arny Krueger · Posted to rec.audio.high-end

"Scott" wrote in message
...

Well, thank goodness, in real science researchers know better than to move the goalposts due to trash talking between audiophiles. I would think that if objectivists were genuinely interested in applying science to the question of amplifier sound, they would not move the goalposts, nor would they use ABX DBTs the way they have when it comes to amplifier sound.


The last above seems to show considerable bias. It seems to say that no "true scientist" has ever done an ABX test of an amplifier.

That being, typically, breaking out ABX and failing ever to control for same-sound bias


I've done some checking, and the phrase "sound bias" appears to be a contrivance of its author. It has no standard defined meaning that I know of or can find in the literature of audio.

or even calibrate the sensitivity of the test.


I've explained the many ways that the results of blind listening tests to date have been confirmed, calibrated, and double- and triple-checked on RAHE many times. Amnesia?

But of course my point was the fact that no scientist worth his or her
salt would ever make dogmatic claims of fact based on the results of
any single ABX DBT null.


At this point, hundreds if not thousands of ABX tests have been performed, so any claims, dogmatic or otherwise, would be based on the results of far more than just one test. Straw man argument.





#51 · nabob33@hotmail.com · Posted to rec.audio.high-end

On Tuesday, December 18, 2012 9:35:07 AM UTC-5, Dick Pierce wrote:

Second, there really isn't that much to measure. An audio signal,
like all electrical signals, has only two attributes: amplitude and
frequency.


Actually, to be more precise, the two fundamental attributes are amplitude and time,


Granted, but to us non-scientists, it is easier to think in terms of amplitude and frequency, because they correspond to concepts we are readily familiar with (i.e., loudness and pitch).

snip

But NONE of this is a mystery, at least not to those in the signal
processing, acoustical/psychophysical realm.

It may well be a mystery (and often times seems like it is) to those
in high-end audio. But, I'd assert, that's an education problem.


Miseducation is precisely the problem. And the audiophile rags have a lot to answer for on that score.

bob

#52 · Scott · Posted to rec.audio.high-end

On Dec 18, 11:48 am, "Arny Krueger" wrote:
"Scott" wrote in message

...

Well, thank goodness, in real science researchers know better than to move the goalposts due to trash talking between audiophiles. I would think that if objectivists were genuinely interested in applying science to the question of amplifier sound, they would not move the goalposts, nor would they use ABX DBTs the way they have when it comes to amplifier sound.


The last above seems to show considerable bias.


In your biased opinion :-)

It seems to say that no
"true scientist" has ever done an ABX test of an amplfiier.


That is an odd interpretation. It certainly is not what i was saying
at all. There are many scientists in this world any number of whom are
audiophiles. I would hardly make any claim that none of them have ever
done an ABX test of an amplifier. i have no idea what every scientist
is doing in their spare time for fun. OTOH I have yet to see a peer
reviewed paper published in any scientific journal of ABX tests done
on amplifiers. I would hope though, that any such scientific journal
would call out any such paper should it be shown that the participants
knew in advance what amplifiers A and B were and nothing was done to
control for a possible same sound bias and nothing was done to
demonstrate the test was actually sensitive to real and subtle audible
differences should the result be a null. So if there are such studies
that you know of please cite them. I would be very interested in
reading them.



That being typically breaking out ABX and failing
to ever control for same sound bias


I've done some checking and the phrase "sound bias" appears to be a
contrivance of its author. It has no standard defined meaning that I know of
or can find in the literature of audio.


My goodness. If you cut the phrase in the middle what do you expect?
OTOH you could talk to your friend JJ Johnston on the need, or lack
thereof, for positive and negative controls in DBTs. More specifically
you might ask him if he thinks "same sound bias" is not an issue in an
ABX test if the subject knows in advance what A and B are. Go ahead,
ask him. ;-) I don't think you are going to like the answer....



or even calibrate the sensitivity of the test.


I've explained the many ways the results of blind listening tests to
date have been confirmed, calibrated, double and triple checked on RAHE many
times. Amnesia?


Please cite how the Stereophile Tests we have been debating calibrated
the test for sensitivity to audible differences.



But of course my point was the fact that no scientist worth his or her
salt would ever make dogmatic claims of fact based on the results of
any single ABX DBT null.


At this point 100s if not 1,000s of ABX tests have been performed, so any
claims, dogmatic or otherwise, would be based on the results of far more
than just one test. Straw man argument.


Not interested in cherry picked anecdotal evidence from flawed tests.

  #53
Posted to rec.audio.high-end
nabob33@hotmail.com

On Tuesday, December 18, 2012 12:18:29 PM UTC-5, Scott wrote:

That being typically breaking out ABX and failing
to ever control for same sound bias


There is no such phenomenon as same sound bias. It has never been
demonstrated experimentally. If you have data that shows otherwise, please
share it with us.

or even calibrate the sensitivity
of the test. Without such calibration a single null result tells us
very little about what was and was not learned about the sound of the
components under test.


There is no need to "calibrate the sensitivity" of an ABX test of audio
components, any more than there is a need to calibrate the sensitivity of a
DB pharmaceutical trial. In both cases, we care only about subjects'
sensitivity to a given dose (or in the case of ABX, a given difference). We
aren't trying to determine the minimum dose/difference the subjects might
respond to.

Regardless, of course a single null result tells us very little. But my
original post did not present a single result. It presented a substantial
number of tests (not all null, btw) conducted over a long period of time by
widely disparate groups.

bob

  #54
Posted to rec.audio.high-end
Audio_Empire[_2_]

On Monday, December 17, 2012 6:43:52 AM UTC-8, Arny Krueger wrote:
"Scott" wrote in message

...

On Dec 14, 8:17 pm, Barkingspyder wrote:





The nice thing about testing for difference as ABX does is that if there
is no difference detected you know that the more expensive one is not any
better sounding. Unless it has features you feel you must have or you just
like the look better you can save some money. Personally, I like knowing
that a $2000.00 set of electronics is not going to be out performed by a
$20,000.00 set. Speakers of course (the part that you actually hear in a
sound system) are another story entirely.




heck if it makes you feel better about buying less expensive gear I guess
that's nice.




That comment seems to be descending a steeply downward angled nose. ;-)



But you are putting way too much weight on such a test if you think you
walk away from a single null result "knowing" that the more expensive gear
is not better sounding.




Ignores the fact that we are repeatedly told that hyper-expensive equipment
sounds "mind blowingly" better and that one has to be utterly tasteless to
not notice the difference immediately.


But your "hyper-expensive" gear is not so "mind blowingly" better than the less
expensive gear. It is subtly different, usually marginally cleaner, especially in the
top-end where the highs are less "grainy" and smoother than more run-of-the-
mill components. But many people cannot (or will not) hear the differences.
That's not their fault, really. Honest high-end equipment manufacturers use
the best quality components in their audio gear. They use the best capacitors,
the least noisy resistors, the finest potentiometers, the best grade of switches,
etc. This accounts for SOME of the high prices that these devices demand. And
it usually results in slightly better sound. But as I have stated before, while these
differences do exist, they are so small that if one bought any of them and inserted
them into their systems, after an hour of listening, they would find nothing to
complain about and happily accept the sound they are getting, forgetting any
of the differences that they may have heard in a DBT "shoot-out" between the
amps in question.



Also ignores the fact that all known objective bench testing and its
interpretation in conjunction with our best and most recent knowledge of
psychoacoustics says that no audible differences can reasonably be
expected to be heard.


Then somebody is measuring the wrong thing.

But hey, if it makes you happy that's great.




It makes me happy to know that the best available current science actually
works out in the real world and that technological progress is still taking
place.

It makes me happy that good sound can be available to the masses if they
throw off the chains of tradition and ignorance.

I am also happy to see recognition of the fact that simply throwing vast
piles of money at solving problems that have been solved for a long time
doesn't help solve them. If we could only convince our politicians of that!
;-)



But not everyone is on board with you there.




Exactly. Those who have invested heavily in anti-science probably did so

because they are in some state of being poorly informed or are in denial of

the relevant scientific facts. There can be very little rational that can be

said to change their minds because rational thought has nothing to do with

what they currently believe.


And those who have invested heavily in the notion that current science has
all the answers can pinch their pennies and enjoy lesser equipment, safe in
their delusion that it's all the same....
  #55
Posted to rec.audio.high-end
Audio_Empire[_2_]

On Monday, December 17, 2012 6:07:17 PM UTC-8, Dick Pierce wrote:
Audio_Empire wrote:

That's the puzzlement isn't it? Like I said, if the accepted suite of audio
measurements don't answer the questions, then obviously there is something
that we don't measure. It's the only plausible answer (and don't posture
that these differences are imaginary, the product of listening biases,
because they aren't).




But you have, in effect, stated elsewhere in this thread that
as far as you are concerned, the only "accepted suite of audio
measurements" for, say, power amplifiers is power, frequency
response, and distortion, yet, for decades, far more has not
only been available, it has been routinely used.


If you "got" that from any of my posts, then may I recommend remedial
reading comprehension, or perhaps that you read with less interpretive
imagination. Because to my knowledge, I've not implied or stated anything
of the kind. I did say that these are things that are most often quoted
in spec sheets, but I never said that it was enough just to measure these things.



Some of the measurements, for example, TIM, have been shown to
be irrelevant because they force conditions that are so utterly
unrealistic that they tell us nothing at all about the performance
of systems under conditions of listening to signal, oh, like music.
Others, like "damping factor," have been shown to be not only
irrelevant, but useless, except in the most pathological of
cases.


Yes, that is true, but often they are still quoted as specs. That was
all I was implying.



Some other measurements, like multi-tone intermodulation, may have
more relevance.

However, what we see hashed over and over again are manufacturers'
specifications masquerading as "measurements." Fine, we all agree they
are not the same. So why do we see them trotted out time and again,
erected as a strawman to be knocked down, and for what purpose?


Because most buyers are not technical and good specs are impressive,
perhaps? I don't pretend to know.

If you want to talk about measurements, fine. Do so.


I merely mentioned the suite of tests that is the most ubiquitous.
I don't particularly want to talk about them because generally
we don't know the circumstances under which many published
measurements are made. And without context, they can be misleading
and even meaningless.




But the "accepted suite" of audio measurements in the high-end
audio world is QUITE different than the "accepted suite" of
audio measurements in a much bigger, richer and certainly much
more informed world than the tiny clique of high-end audio
affords.



Yeah, but who cares? In the first place, consumers don't understand them,
and in the second place, most modern audio equipment measures so
superlatively that any differences heard wouldn't correspond to those
measurements anyway.


  #56
Posted to rec.audio.high-end
Audio_Empire[_2_]

On Tuesday, December 18, 2012 7:03:31 AM UTC-8, Arny Krueger wrote:
"Audio_Empire" wrote in message

...



There are differences in electronic equipment and I'm convinced that some
day there will be tests that will reveal them.


That the equipment is different is fact. The question at hand is not about
that fact. The question at hand is about the audible significance of those
differences.



The use of audio gear seems to be pretty straight-forward and simple. We
apply audio signals to audio gear, turn it into sound in listening rooms
using loudspeakers and headphones, and listen to it.


If it were that easy, there would be perfect systems which would create
perfect facsimiles of the actual recorded event. I've never heard anyone
say that such-and-such a system was indistinguishable from the real
thing. Nor have I ever heard anyone say that they mistook reproduced
music for live music. The most convincing I've ever heard was a pair
of the recent Wilson Alexandria XLF speakers driven by a pair of VTL
Siegfried II 800 Watt/channel power amps and a dCS Debussy CD
rig. It was good; I've never heard a pair of speakers load a room like
the XLFs. Impressive, but even with really good source material (like
my own recording of the Stanford University Jazz Orchestra, made with
a single, big-capsule stereo mike) it never fooled me into thinking that
it was anything but a very good recording, and the commercial stuff
they were demonstrating with in that hotel meeting room was even less
convincing. So I suggest that you revisit your statement that it's simple.



The symmetry between listening tests and listening to music for enjoyment
can be as complete as we have the patience to make it so.


That's true, but the word patience is the operative one here. Nobody
running these tests seems to take into account that the soundstage
and imaging of, say, a DAC would be better served if the source
material actually had some real soundstage engineered into it.
Frankly, due to the taste of most of the listeners involved in these tests,
good source material, material that would show things like
differences in soundstage presentation, generally isn't used. Also,
the people conducting such tests are so hung-up on instantaneous
A/B comparison that they never stick with one DUT or the other
long enough for the listening panel to focus in on any differences
that aren't instantly recognizable.

It is ironic to me that much of the so-called evidence supporting the
existence of mysterious equipment properties that elude sophisticated
testing is obtained by such crude means. It even eludes all known attempts
to duplicate the crude means while imposing a few basic, simple bias
controls by the least intrusive means found after extensive investigation
and experimentation.


Ironic is it? I'd use another word, I think.



If you are talking about technical tests then the solution to our problem
can be found in multivariate calculus. It is a mathematical fact that any
system with a finite dimensional state can be fully analyzed. An audio
channel has two variables, being time and intensity. It is very simple.
Mathematicians have analyzed these two variables for maybe 100 years
(analysis actually started no less recently than with Fourier).


Like I said earlier, if it's so simple, how come NO stereo system, regardless
of price or acoustical setting, can create a convincing facsimile of a real
performance playing in a real space? If you've ever been to New Orleans
and walked down Bourbon Street on a warm evening, you will have noticed,
as you walk down the sidewalk passing the open doors of one establishment
or another, that you can tell in an instant, without even seeing the
source, in which establishments live music is playing, and in which
establishments the music is canned. The world's finest stereo system, one
with state-of-the-art BIG speakers costing as much as a new Ferrari, simply
cannot convince anyone that the sound is real.

I've been in electronics long enough to know that
you will never uncover a piece of gear's flaws if your suite of
measurements keeps measuring the wrong thing.


That's a truism, but without more specifics it is just idle speculation.


So "truism" = speculation?


Unfortunately, I don't know (any more than anyone else)
what we would test to account for the differences in modern amps (very
small differences, probably not worth the effort) or DACs (much larger
differences).


What differences are we testing for - things that only show up in sighted
evaluations or evaluations that are semi-, demi-, or quasi-controlled?


Doesn't matter, but they do show up in carefully controlled tests as long as
the source material is of sufficient quality to allow these differences to be
heard, and as long as the testers aren't looking ONLY for differences that
reveal themselves in quick A/B comparisons.




Once we learned how to do reliable listening tests back in the 1970s there
have been no mysteries - what we hear we measure and vice versa, given that
we measure enough to be audible.

As others have pointed out, one of the first casualties of reliable
listening tests was the hysteria over slew-rate-induced distortion.



None of these things are addressed in any test suite I've seen.


None of what? So far I see no actual description of something with hands and
feet.


This is your selective editing at work, methinks.

Yes, we measure frequency response, IM and harmonic distortion, channel
separation, impulse response (in DACs); perhaps we use an oscilloscope to
look at square waves to measure low and high frequency phase shift, but
none of those really address things like the difference between the
imaging ability of two DACs, for instance.


Yet another audiophile myth that dies a quick death when you start doing
adequately controlled listening tests.


That's just it, Mr. Krueger, IT DOESN'T die either a quick death or a slow
one. Also, what it doesn't do is show up immediately on quick A/B tests. It
also requires that the recording used in the evaluation actually have some
imaging specificity. Most of the DBTs where I've been a listener use pop,
rock, and jazz recordings which are studio creations that, at the very
best, are all multimiked, multi-channel affairs and at worst either have no
acoustic instruments in them or have been Frapped! No imaging specificity
there!
  #57
Posted to rec.audio.high-end
nabob33@hotmail.com

On Tuesday, December 18, 2012 5:21:19 PM UTC-5, Audio_Empire wrote:

Doesn't matter, but they do show up in carefully controlled tests as long as
the source material is of sufficient quality to allow these differences to be
heard, and as long as the testers aren't looking ONLY for differences that
reveal themselves in quick A/B comparisons.


So where are these tests you keep talking about? I started this thread with a fairly healthy list of well-documented tests that show the opposite. You keep saying there are other tests that support your position, but you haven't presented even an iota of data to support this. Until you do . . .

bob

  #58
Posted to rec.audio.high-end
Scott[_6_]

On Dec 18, 12:18 pm, wrote:
On Tuesday, December 18, 2012 12:18:29 PM UTC-5, Scott wrote:
That being typically breaking out ABX and failing
to ever control for same sound bias


There is no such phenomenon as same sound bias.


That is plainly wrong. Biases come in all sorts of flavors including a
bias towards components sounding the same.

It has never been demonstrated experimentally. If you have data that shows otherwise, please share it with us.


Well, I am so glad you asked. I have some pretty good data of one
clear-cut example of same sound bias at work.
Let's take a trip down memory lane with Mr. Howard Ferstler and an
article he wrote for The Sensible Sound in which he did an ABX DBT
between two amplifiers and concluded that it demonstrated the two
sounded the same. Let's look a little closer at what really went down in

issue 88 of The $ensible Sound (Nov/Dec 2001, pp. 10-17).

Howard wrote in his article on page 14:

"According to the statistical analysis, and given the number of
trials I did, the likelihood of those scores being the result of
anything but chance (even the one where I scored more than 60% right)
exceeded 95%." "Even though a 68% correct score looks like there may
have been significant audible differences with the 17 out of 25
mind-numbing trials I did, that score does achieve a 95% confidence
level, indicating that the choices were still attributable to
chance."

John Atkinson pointed out to him the following facts.

“As has been pointed out on this newsgroup, not only by myself but also
by Arny Krueger, you were misrepresenting the results, presumably
because they were "blatantly at odds with [your] belief systems." Yes,
scoring 17 out of 25 in a blind test does almost reach the 95%
confidence level (94.6%, to be pedantic). But this means that there is
almost 19 chances in 20 that you _did_ hear a difference between the
amplifiers. You incorrectly wrote in a published article that your
scoring 17 out of 25 was more than 95% due to chance. However it's
actually almost 95% not_ due to chance. In other words, your own tests
suggested you heard a difference, but as you already "knew" there
wasn't an audible difference, you drew the wrong conclusion from your
own data.
Curiously, The Sensible Sound has yet to publish a retraction. :-)”
John Atkinson
Editor, Stereophile
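
[Atkinson's 94.6% figure is easy to verify with an exact binomial tail. A minimal sketch, not from the original thread; the function name is mine:]

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: the chance of scoring at least
    `correct` out of `trials` purely by guessing (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Ferstler's published score: 17 of 25.
p = abx_p_value(17, 25)
print(round(p, 4))        # 0.0539 -- so confidence = 1 - p, about 94.6%
```

[A guesser beats 17/25 only about one time in 19, which is why reading that score as "attributable to chance" inverts the statistic.]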

So we have here a classic example of same sound bias affecting the
analysis of the data of an ABX DBT between amps. But wait, it gets
better. Check out how Howard tries to reconcile his positive result
with his same sound bias.

Howard Ferstler:

" The data you are referring to was but a small part of the series.
It was a fluke, because during the last part of that series of trials
I was literally guessing. I just kept pushing the button and making
wild stabs at what I thought I heard. After a while, I did not bother
to listen at all. I just kept pressing the same choice over and
over."

IOW he was deliberately falsifying data in order to get a null result.
I’d say that is proof positive of a same sound bias on the part of Mr.
Ferstler, wouldn’t you? And this ABX DBT was published in The Sensible
Sound despite the fact that the analysis was corrupted by a clear same
sound bias, and so was the data, deliberately!
Ironically, due to an apparent malfunction in Tom Nousaine’s ABX box,
the attempt at spiking the results to get a null serendipitously
wrought a false positive. So on top of that we have a malfunctioning
ABX box that Tom Nousaine has been using for all these ABX DBTs.

Didn’t you at some point cite this very test and other tests conducted
with Tom Nousaine’s ABX box as "scientific evidence?"

Ouch.


or even calibrate the sensitivity
of the test. Without such calibration a single null result tells us
very little about what was and was not learned about the sound of the
components under test.


There is no need to "calibrate the sensitivity" of an ABX test of audio components, anymore than there is a need to calibrate the sensitivity of a DB pharmaceutical trial.


My goodness gracious, talk about getting it all wrong. First, ABX DBTs
involve playback equipment. Pharmaceutical trials do not, so there is
nothing to "calibrate" in pharmaceutical trials. BUT they do use
control groups! That is in effect their calibration. Without the
control group the results mean nothing because there is no
"calibrated" base to compare them to. So in effect they most
definitely are calibrated, or they are tossed out as very very bad
science and just plain junk. That is bias controlled testing 101.

In both cases, we care only about subjects' sensitivity to a given dose (or in the case of ABX, a given difference). We aren't trying to determine the minimum dose/difference the subjects
might respond to.


Wrong! In the pharmaceutical tests we don't care a bit about a
subject's sensitivity to a given dose. We care about the subject's
sensitivity as compared to the *control group*. That is the
calibration!
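
[The calibration point can be made quantitative with a power calculation: how often would a listener who genuinely hears a difference on some fraction of trials actually reach significance? A hedged sketch, not from the thread; function names are mine:]

```python
from math import comb

def binom_tail(trials, k, p):
    """P(X >= k) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, i) * p ** i * (1 - p) ** (trials - i)
               for i in range(k, trials + 1))

def abx_power(trials, p_detect, alpha=0.05):
    """Chance a listener who truly hears the difference on a fraction
    `p_detect` of trials scores high enough to reach significance."""
    # Smallest score a pure guesser (p = 0.5) reaches with prob <= alpha:
    k_crit = next(k for k in range(trials + 1)
                  if binom_tail(trials, k, 0.5) <= alpha)
    return binom_tail(trials, k_crit, p_detect)

# With 25 trials a listener must get 18 or more right; someone who hears
# the difference on 70% of trials passes only about half the time, so a
# null from such a run is weak evidence of "no difference".
print(round(abx_power(25, 0.7), 2))
```

[Without some such sensitivity check, or positive controls, a null result and an insensitive test are indistinguishable.]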


Regardless, of course a single null result tells us very little.


Gosh, that is what I have been saying. So you agree. Great.


But my original post did not present a single result. It presented a substantial number of tests (not all null, btw) conducted over a long period of time by widely disparate groups.


And my comments about how it is very unscientific to put so much
weight in one null result were not a response to your original post.

However I do have to ask, did you include any of the tests by Howard
Ferstler? That would be most unfortunate. Did you include tests
conducted with Tom Nousaine's defective ABX box? That would also be
unfortunate. Funny what we learn when we dig a little. Such is the
point of peer review. To say that some of the evidence presented in these
audio magazines is anecdotal is to be overly generous. That should be
obvious after what The Sensible Sound allowed to pass and be reported
as an ABX DBT of amplifiers.

  #59
Posted to rec.audio.high-end
KH

On 12/18/2012 10:18 AM, Scott wrote:
On Dec 18, 4:09 am, KH wrote:
On 12/17/2012 8:46 PM, Scott wrote:


On Dec 17, 6:43 am, "Arny Krueger" wrote:
"Scott" wrote in message


...
On Dec 14, 8:17 pm, Barkingspyder wrote:


snip

But you are putting way too much weight on such a test if you think you
walk away from a single null result "knowing"
that the more expensive gear is not better sounding.


Ignores the fact that we are repeatedly told that hyper-expensive equipment
sounds "mind blowingly" better and that one has to be utterly tasteless to
not notice the difference immediately.


And here is a classic case in point. You are getting ready to wave the
science flag again in this post and here you are suggesting that a
proper analysis of data would include taking audiophile banter into
account.


In this instance, as Arny presented it, it would not be "banter", but
would, rather, define the null hypothesis. I.e., instead of being
"there are no audible differences", it becomes "there are no major,
unmistakable audible differences". In a "typical" audiophile scenario,
these are the differences described. How many of these claims are
"unmistakable", "not at all subtle", etc.? In constructing the null
hypothesis of any test, these qualifiers cannot be casually ignored.

This is, to me, the heart of the stereotypical subjectivist argument
against DBT or ABX testing - the differences are claimed as obvious
sighted, but then become obscured by any imposed test rigor. In testing
any such claim, the magnitude of the difference (e.g. "obvious to anyone
with ears") defines the precision and detectability requirements of the
test design.


Well thank goodness in real science researchers know better than to
move the goal posts due to trash talking between audiophiles.


Well, some of us *are* engaged in *real* science on a daily basis, and
do understand the precepts.

I would
think that if objectivists were genuinely interested in applying
science to the question of amplifier sound they would not move the
goal posts nor would they use ABX DBTs the way they have when it comes
to amplifier sound.


The thread has nothing to do with "amplifier" sound.

That being typically breaking out ABX and failing
to ever control for same sound bias or even calibrate the sensitivity
of the test. Without such calibration a single null result tells us
very little about what was and was not learned about the sound of the
components under test.


Careful reading would show I clearly stipulated such requirements need
to be defined and accounted for. Arguing in favor of my stated position
isn't much of a refutation.


But of course my point was the fact that no scientist worth his or her
salt would ever make dogmatic claims of fact based on the results of
any single ABX DBT null. And if one thinks that claims from
subjectivists should alter that fact then they simply don't understand
how real science deals with and interprets real scientific data.


The "dogmatic" claims, as you describe them, were based on physics and
engineering principles, and the fact that listening tests, under
controlled conditions, have not shown results that dispute those
principles. There was no claim, as I read it, that any individual test
was applicable to all conditions. Quite the opposite in fact - where
are the tests that contradict the physics and engineering principles?

Understanding the true significance of a single null result
does not require consideration of what you or anyone else has been told by
other audiophiles.


That would rest entirely upon how the null hypothesis is constructed,
and may indeed include such claims.


No it does not. Real science builds its conclusions on an
accumulation of research.


No, every test has a conclusion, and is dispositive, if executed
accurately, within the limitations of the specific test.

Again if one understands how science works
they should know the real standing of one singular null result. That
being it is most certainly not something one can reasonably close the
books on and say that it is final proof of no difference.


The "books" are clearly closed on that test group, under those test
conditions. To think otherwise is to deny the relevance of all tests
under all conditions.


For that to affect the weight placed on any single
test result would be quite unscientific thinking.


Again, simply not accurate with respect to the world of possible
hypotheses. Any null result for a discrimination test evaluating
"obvious" differences will be significant, if not dispositive, for that
test and equipment, as long as the test is set up properly.


Sorry but you are plainly wrong. No scientist would ever put that much
stock in one test. It runs contrary to the very ideas of
falsifiability, peer review, and verification via repetition of previous
tests. Very, very unscientific.


Nonsense. Do one tox study and argue that a 90% severe adverse effect
rate doesn't mean anything. See how far that gets you. And, in any event,
that has zero to do with falsifiability. The results of any study stand
on their own unless and until they are demonstrated to be suspect, or
wrong. If the test is not designed to be falsifiable, it is a defective
design irrespective of how the data are analyzed or used. Perhaps you
need to brush up on what falsifiability means in test design.

snip

Sorry, but you seem to be using a rather unique definition of "fact" as
"real scientists" make claims of fact for every such study.


Complete nonsense. And you say this after bringing up the null
hypothesis. You might want to read up on the null hypothesis and what
it proves and what it does not prove.
http://en.wikipedia.org/wiki/Null_hypothesis


I suggest you follow your own recommendation.


The results
*are* facts, and are true, and applicable, within the constraints and
confidence interval of the test design. To believe otherwise would
require a refutation of statistics. If you doubt this, then please
explain exactly how many tests are required to result in "facts".


No the results are not facts the results are data.


Data *are* objective facts. What do you think they are if not facts?

Often in this kind
of research one will find conflicting data. That is why no one who
understands these kinds of things would ever draw a conclusion of fact
from a single test. To say it would be a hasty conclusion would be an
understatement.


Clearly you need to brush up on what constitutes "data", "facts", and
"conclusions". They are not interchangeable nor fungible. And you are
conflating "facts" with "conclusions". The only relevant conclusion I
saw in the subject post had to do with lack of data contravening known
physical and engineering principles, not citing any single test as
globally applicable.

Keith


  #60
Posted to rec.audio.high-end
Arny Krueger[_5_]

"Audio_Empire" wrote in message
...

But your "hyper-expensive" gear is not so "mind blowingly" better than the
less expensive gear.


Misleading misappropriation of the word "your" noted.

It is subtly different, usually marginally cleaner, especially in the
top-end where the highs are less "grainy" and smoother than more
run-of-the-mill components.


No reliable evidence of this seems to have been provided.

But many people cannot (or will not) hear the differences.
That's not their fault, really. Honest high-end equipment manufacturers use
the best quality components in their audio gear. They use the best
capacitors, the least noisy resistors, the finest potentiometers, the best
grade of switches, etc.


There is no reliable evidence that any of this necessarily has any audible
benefits.

This accounts for SOME of the high prices that these devices demand.


Costs with no benefits = waste.

And it usually results in slightly better sound.


No reliable evidence of this seems to have been provided.

Repeating a false claim does not make it true.

But as I have stated before, while these differences do exist,


Again no reliable evidence of this seems to have been provided and of course
repeating a false claim does not make it true.

they are so small


Again no reliable evidence of this seems to have been provided and of course
it is still true repeating a false claim does not make it true.





  #61   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default A Brief History of CD DBTs

On Dec 19, 3:39 am, KH wrote:
On 12/18/2012 10:18 AM, Scott wrote:









On Dec 18, 4:09 am, KH wrote:
On 12/17/2012 8:46 PM, Scott wrote:


On Dec 17, 6:43 am, "Arny Krueger" wrote:
"Scott" wrote in message


...
On Dec 14, 8:17 pm, Barkingspyder wrote:


snip


But you are putting way too much weight on such a test if you think you
walk away from a single null result "knowing"
that the more expensive gear is not better sounding.


Ignores the fact that we are repeatedly told that hyper-expensive equipment
sounds "mind blowingly" better and that one has to be utterly tasteless to
not notice the difference immediately.


And here is a classic case in point. You are getting ready to wave the
science flag again in this post and here you are suggesting that a
proper analysis of data would include taking audiophile banter into
account.


In this instance, as Arny presented it, it would not be "banter", but
would, rather, define the null hypothesis. I.e., instead of being
"there are no audible differences", it becomes "there are no major,
unmistakeable audible differences". In a "typical" audiophile scenario,
these are the differences described. How many of these claims are
"unmistakeable", "not at all subtle", etc.? In constructing the null
hypothesis of any test these qualifiers cannot be casually ignored.


This is, to me, the heart of the stereotypical subjectivist argument
against DBT or ABX testing - the differences are claimed as obvious
sighted, but then become obscured by any imposed test rigor. In testing
any such claim, the magnitude of the difference (e.g. "obvious to anyone
with ears") defines the precision and detectability requirements of the
test design.


Well thank goodness in real science researchers know better than to
move the goal posts due to trash talking between audiophiles.


Well, some of us *are* engaged in *real* science on a daily basis, and
do understand the precepts.


And some of you clearly are not and clearly don't.


I would
think that if objectivists were genuinely interested in applying
science to the question of amplifier sound they would not move the
goal posts nor would they use ABX DBTs the way they have when it comes
to amplifier sound.


The thread has nothing to do with "amplifier" sound.


Then take it up with the moderators. The subject has been brought up
so I addressed it.


That being typically breaking out ABX and failing
to ever control for same sound bias or even calibrate the sensitivity
of the test. Without such calibration a single null result tells us
very little about what was and was not learned about the sound of the
components under test.


Careful reading would show I clearly stipulated such requirements need
to be defined and accounted for. Arguing in favor of my stated position
isn't much of a refutation.


Careful reading *of the entire thread* would show that 1. Other people
besides you are involved. 2. Others have stipulated such requirements
are either unnecessary or don't exist at all. Just read the quoted
text in this post. It's there and because it's there it's relevant.




But of course my point was the fact that no scientist worth his or her
salt would ever make dogmatic claims of fact based on the results of
any single ABX DBT null. And if one thinks that claims from
subjectivists should alter that fact then they simply don't understand
how real science deals with and interprets real scientific data.


The "dogmatic" claims, as you describe them, were based on physics and
engineering principles,



Really? Once again we have a bogus waving of the science flag. Do tell
us what "physics" stands behind the claim? And let me remind you of just
what that claim was to begin with. In this thread it was claimed: On
Dec 14, 8:17 pm, Barkingspyder wrote: "The
nice thing about testing for difference as ABX does is that if there
is no difference detected you know that the more expensive one is not
any better sounding." So please show us how this claim was based on
physics and engineering principles. In what part of physics is it
stated that one can draw hard conclusions from one null result done at
home? What engineering principle supports this claim?


and the fact that listening tests, under
controlled conditions, have not shown results that dispute those
principles.


Please cite the principles you are referring to and the actual
listening tests. Hopefully for your sake you are not going to cite the
listening tests published in The Sensible Sound. ;-)

There was no claim, as I read it, that any individual test
was applicable to all conditions.


You might want to read this again then. On Dec 14, 8:17 pm,
Barkingspyder wrote: "The nice thing about
testing for difference as ABX does is that if there is no difference
detected you know that the more expensive one is not any better
sounding."

Quite the opposite in fact - where
are the tests that contradict the physics and engineering principles?


There you go waving the science flag again with nothing of substance
behind it. Please cite the physics and engineering principles you
believe support the claim that "The nice thing about testing for
difference as ABX does is that if there is no difference detected you
know that the more expensive one is not any better sounding." After
all, this is the specific claim I was challenging and others
apparently, including yourself, are defending.


Understanding the true significance of a single null result
does not require consideration of what you or anyone else has been told by
other audiophiles.


That would rest entirely upon how the null hypothesis is constructed,
and may indeed include such claims.


No it does not. Real science builds its conclusions on an
accumulation of research.


No, every test has a conclusion, and is dispositive, if executed
accurately, within the limitations of the specific test.


Within the limitations of the specific test. And within the
limitations of a home-brewed ABX test one cannot reasonably conclude
from a single null result that "if there is no difference detected you
know that the more expensive one is not any better sounding." That is
an erroneous and very unscientific conclusion.


Again if one understands how science works
they should know the real standing of one singular null result. That
being it is most certainly not something one can reasonably close the
books on and say that it is final proof of no difference.


The "books" are clearly closed on that test group, under those test
conditions. To think otherwise is to deny the relevance of all tests
under all conditions,


"that test group" being what? All tests being what? All conditions
being what? Your claim is way overly vague to even address.



For that to affect the weight placed on any single
test result would be quite unscientific thinking.


Again, simply not accurate with respect to the world of possible
hypotheses. Any null result for a discrimination test evaluating
"obvious" differences will be significant, if not dispositive, for that
test and equipment, as long as the test is set up properly.


Sorry but you are plainly wrong. No scientist would ever put that much
stock in one test. It runs contrary to the very idea of
falsifiability, peer review or the idea of verification via repetition
of previous tests. Very, very unscientific.


Nonsense.


Nonsense to your claim of nonsense.

Do one tox study and argue that 90% severe adverse effects
doesn't mean anything.


Hold on here. You are putting words in my mouth. Where did I say the
test results of a single null "doesn't mean anything." Please quote
me. This is a typical straw man argument.

See how far that gets you.


It wouldn't get me very far, but I know better than to do that. But
that is not what I am doing here.

And, in any event,
that has zero to do with falsifiability. The results of any study stand
on their own unless and until they are demonstrated to be suspect, or
wrong. If the test is not designed to be falsifiable, it is a defective
design irrespective of how the data are analyzed or used. Perhaps you
need to brush up on what falsifiability means in test design.


Perhaps you need to be reminded again of the original claim I was
disputing.

On Dec 14, 8:17 pm, Barkingspyder wrote:
"The nice thing about testing for difference as ABX does is that if
there is no difference detected you know that the more expensive one
is not any better sounding."



snip



Sorry, but you seem to be using a rather unique definition of "fact" as
"real scientists" make claims of fact for every such study.


Complete nonsense. And you say this after bringing up the null
hypothesis. You might want to read up on the null hypothesis and what
it proves and what it does not prove.
http://en.wikipedia.org/wiki/Null_hypothesis


I suggest you follow your own recommendation.


Oh I did. Here is what it says.
"The null hypothesis can never be proven. Data, such as the results of
an observation or experiment, can only reject or fail to reject a null
hypothesis"
Now what does that say about this claim? "The nice thing about
testing for difference as ABX does is that if there is no difference
detected you know that the more expensive one is not any better
sounding." Did you catch the word "KNOW" in there?




The results
*are* facts, and are true, and applicable, within the constraints and
confidence interval of the test design. To believe otherwise would
require a refutation of statistics. If you doubt this, then please
explain exactly how many tests are required to result in "facts".


No the results are not facts the results are data.


Data *are* objective facts. What do you think they are if not facts?


In the case of ABX DBTs they are merely results. The fact that any ABX
test can, for any number of reasons, yield incorrect results makes it
pretty hard to call the results "facts". If they are facts, then when
one ends up with conflicting data from different tests you have
conflicting "facts." Do explain how that works.



Often in this kind
of research one will find conflicting data. That is why no one who
understands these kinds of things would ever draw a conclusion of fact
from a single test. To say it would be a hasty conclusion would be an
understatement.


Clearly you need to brush up on what constitutes "data", "facts", and
"conclusions". They are not interchangeable nor fungible.


And yet you seem to be interchanging them. "Data *are* objective
facts." How ironic is that?

And you are
conflating "facts" with "conclusions".


I am? Here is the conclusion I am challenging "The nice thing about
testing for difference as ABX does is that if there is no difference
detected you know that the more expensive one is not any better
sounding." His conclusion is a claim of fact. So who exactly is
conflating facts with conclusions?

The only relevant conclusion I
saw in the subject post had to do with lack of data contravening known
physical and engineering principles, not citing any single test as
globally applicable.


Cherry picking is also very unscientific. If that is the only
conclusion you saw in this thread then you missed the very conclusion
I have challenged in this thread. Just so you don't miss it again:
"The nice thing about testing for difference as ABX does is that if
there is no difference detected you know that the more expensive one
is not any better sounding."


  #62   Report Post  
Posted to rec.audio.high-end
nabob33@hotmail.com nabob33@hotmail.com is offline
external usenet poster
 
Posts: 54
Default A Brief History of CD DBTs

On Wednesday, December 19, 2012 6:39:10 AM UTC-5, Scott wrote:

Well, I am so glad you asked.


Not sure why. Let's take a look at the two key elements of the "data" you present.

You quote Howard Ferstler saying, "Even though a 68% correct score looks like there may have been significant audible differences with the 17 out of 25 mindnumbing trials I did, that score does achieve a 95% confidence level, indicating that the choices were still attributable to chance."

You quote John Atkinson saying, "In other words, your own tests suggested you heard a difference..."

Howard is correctly interpreting the statistics here. John is not. A confidence interval is a hard target, not a rough idea you only have to get close to.

snip

Howard Ferstler:

" The data you are referring to was but a small part of the series.
It was a fluke, because during the last part of that series of trials
I was literally guessing. I just kept pushing the button and making
wild stabs at what I thought I heard. After a while, I did not bother
to listen at all. I just kept pressing the same choice over and
over."

IOW he was deliberately falsifying data in order to get a null result.
I'd say that is proof positive of a same sound bias on the part of Mr.
Ferstler wouldn't you?


No, that's just what happens when you're doing a DBT and you really can't tell the difference. You have to guess. Howard's just being honest here. The only alternative is to abandon the test, but the outcome would be the same in both cases: No showing of audible difference.

And this ABX DBT was published in The Sensible
Sound despite the fact that the analysis was corrupted by a clear same
sound bias but so was the data, deliberately!
Ironically, due to an apparent malfunction in Tom Nousaine's ABX box
the attempt at spiking the results to get a null serendipitously
produced a false positive. So on top of that we have a malfunctioning
ABX box that Tom Nousaine has been using for all these ABX DBTs.


As explained above, there was no malfunction here. The only flaw is in Atkinson's interpretation of the results.

snip

My goodness gracious talk about getting it all wrong. First, ABX DBTs
involve playback equipment. Pharmaceutical trials do not, so there is
nothing to "calibrate" in pharmaceutical trials. BUT they do use
control groups! That is in effect their calibration. Without the
control group the results mean nothing because there is no
"calibrated" base to compare them to. So in effect they most
definitely are calibrated or they are tossed out as very very bad
science and just plain junk. That is bias controlled testing 101.


That's not at all what calibration means, but just to humor you, let's pretend it is. In a DB drug trial, the intervention group needs to get a statistically better result than the control group. In an ABX test, the subjects need to get a statistically better result than chance. If the former is "calibrated," then the latter is, too.

bob
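As an aside, the "hard target" point is easy to check numerically. The sketch below (Python, not part of the original post) computes the exact one-sided binomial probability of scoring at least 17 out of 25 ABX trials by pure guessing:

```python
from math import comb

def abx_p_value(correct, trials):
    """One-sided binomial p-value: the probability of getting at least
    `correct` answers right out of `trials` by guessing (p = 0.5 each)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

p = abx_p_value(17, 25)
print(f"P(>= 17/25 by chance) = {p:.4f}")  # about 0.054, just short of the 0.05 cutoff
```

With p roughly 0.054, a 17/25 score narrowly misses the conventional 95% criterion, which is exactly why the cutoff has to be treated as a hard target rather than a rough neighborhood.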

  #63   Report Post  
Posted to rec.audio.high-end
Audio_Empire[_2_] Audio_Empire[_2_] is offline
external usenet poster
 
Posts: 235
Default A Brief History of CD DBTs

On Monday, December 17, 2012 6:49:26 AM UTC-8, Arny Krueger wrote:
"Scott" wrote in message

...

On Dec 14, 8:21 pm, Audio_Empire wrote:



The person who was questioning the value of level matching did not seem
to be limiting his opinion to CDPs and amps.




Seems like the backwards side of the argument. Doing comparisons of music
players, DACs and amps without proper level matching seems to be the prelude
to a massive waste of time. If the levels are not matched well enough then
there will be audible differences, but we have no way of knowing that the
causes are not our poor testing practices as opposed to any relevant
property of the equipment being tested.


Also, the louder component will seem to the listening panel to be
"better" than the softer one. Just a dB or so difference is enough to
bias the panel toward the louder one.

You still have the same
problems in level matching that I stated above when dealing with
loudspeakers. In fact you have even more problems with radiation
pattern differences and room interfaces that make it even more
impossible to do a true level match.


In a speaker DBT, the one with more bass (as well as louder) will also
bias the listeners toward it.

[quoted text deleted -- deb]
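The level-matching tolerance discussed above translates directly into a voltage ratio. A short sketch (Python; the 0.1 dB matching figure is an illustrative assumption, not a number from the thread):

```python
import math

def db_difference(v1, v2):
    """Level difference in dB between two output voltages."""
    return 20 * math.log10(v1 / v2)

def max_voltage_ratio(db_tolerance):
    """Largest voltage ratio that stays within a given dB tolerance."""
    return 10 ** (db_tolerance / 20)

# Matching to within 0.1 dB means the two outputs must agree to about 1.2% in voltage.
print(f"{max_voltage_ratio(0.1):.4f}")  # ~1.0116
# A full 1 dB mismatch, enough to bias a panel toward the louder unit:
print(f"{max_voltage_ratio(1.0):.4f}")  # ~1.1220
```

This is why level matching for a comparison is usually done with a voltmeter at the output rather than by ear.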

  #64   Report Post  
Posted to rec.audio.high-end
Audio_Empire[_2_] Audio_Empire[_2_] is offline
external usenet poster
 
Posts: 235
Default A Brief History of CD DBTs

On Wednesday, December 19, 2012 7:38:55 AM UTC-8, Arny Krueger wrote:
"Audio_Empire" wrote in message

...



But your "hyper-expensive" gear is not so "mind blowingly" better than the
less expensive gear.

Misleading misappropriation of the word "your" noted.

It is subtly different, usually marginally cleaner, especially in the
top-end where the highs are less "grainy" and smoother than more
run-of-the-mill components.

No reliable evidence of this seems to have been provided.

But many people cannot (or will not) hear the differences.
That's not their fault, really. Honest high-end equipment manufacturers
use the best quality components in their audio gear. They use the best
capacitors, the least noisy resistors, the finest potentiometers, the
best grade of switches, etc.

There is no reliable evidence that any of this necessarily has any audible
benefits.

This accounts for SOME of the high prices that these devices demand.

Costs with no benefits = waste.

And it usually results in slightly better sound.

No reliable evidence of this seems to have been provided.

Repeating a false claim does not make it true.

But as I have stated before, while these differences do exist,

Again no reliable evidence of this seems to have been provided and of course
repeating a false claim does not make it true.

they are so small

Again no reliable evidence of this seems to have been provided and of course
it is still true repeating a false claim does not make it true.


Ah, but there is.... just not the kind that dyed-in-the-wool objectivists would be
willing to accept. That's why they are called objectivists. ;^)
  #65   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default A Brief History of CD DBTs

On Dec 19, 9:41 am, wrote:
On Wednesday, December 19, 2012 6:39:10 AM UTC-5, Scott wrote:
Well, I am so glad you asked.


Not sure why. Let's take a look at the two key elements of the "data" you present.

You quote Howard Ferstler saying, "Even though a 68% correct score looks like there may have been significant audible differences with the 17 out of 25 mindnumbing trials I did, that score does achieve a 95% confidence level, indicating that the the choices were still attributable to chance."

You quote John Atkinson saying, "In other words, your own tests suggested you heard a difference..."

Howard is correctly interpreting the statistics here. John is not. A confidence interval is a hard target, not a rough idea you only have to get close to.


Um no, Howard interpreted the data backwards. He took the 95% confidence
level to mean that there was a 95% likelihood that his results were due
to chance. The opposite is true. Atkinson was right. Ferstler was
wrong.


snip

Howard Ferstler:


" The data you are referring to was but a small part of the series.
It was a fluke, because during the last part of that series of trials
I was literally guessing. I just kept pushing the button and making
wild stabs at what I thought I heard. After a while, I did not bother
to listen at all. I just kept pressing the same choice over and
over."


IOW he was deliberately falsifying data in order to get a null result.
I’d say that is proof positive of a same sound bias on the part of Mr.
Ferstler wouldn’t you?


No, that's just what happens when you're doing a DBT and you really can't tell the difference.


Nonsense. That is what happens when one tries to spike the data.
Sorry, there is no excuse on earth for someone to do what he did. He
says "I did not bother to listen at all. I just kept pressing the same
choice over and over." That is deliberate corruption of the data.
Done deal. If you can't see that for what it is we've got nothing more
to talk about, really. It could not be more blatant.

You have to guess.


He wasn't even guessing. He stopped listening. That is not doing an
ABX DBT properly. That is deliberately spiking data to get the desired
null.

Howard's just being honest here.


Whoa, hold on here. He is being honest because he couldn't accept his own
result. Truth is his original article was plainly dishonest. If he
were being honest there he would have disclosed the fact that what he
was experiencing was exactly what he expected to experience
(expectation bias incarnate) and that he stopped listening and just
hit the same button. But he knew very well that this would make his
test worthless. But he'd rather admit his test was worthless than live
with the positive result. He just didn't understand the mistake in his
analysis or what the data was really saying so he went forward and
presented tests with deliberately spiked data as legitimate evidence
of amps sounding the same. Do you really think this is good science
much less honest journalism? If so, let me fill you in. Any scientist
caught spiking data to gain a desired result is disgraced within the
scientific community.


The only alternative is to abandon the test, but the outcome would be the same in both cases: No showing of audible difference.


How convenient. Circular logic incarnate.

And this ABX DBT was published in The Sensible
Sound despite the fact that the analysis was corrupted by a clear same
sound bias but so was the data, deliberately!
Ironically, due to an apparent malfunction in Tom Nousaine’s ABX box
the attempt at spiking the results to get a null serendipitously
produced a false positive. So on top of that we have a malfunctioning
ABX box that Tom Nousaine has been using for all these ABX DBTs.


As explained above, there was no malfunction here. The only flaw is in Atkinson's interpretation of the results.


Seriously? You think an ABX machine that is giving a positive result
when you hit the same selection over and over again is not
malfunctioning? And again, Atkinson, a former science teacher, gets the
analysis dead on. If you don't think so you are dead wrong, end of
story.



snip

My goodness gracious talk about getting it all wrong. First, ABX DBTs
involve playback equipment. Pharmaceutical trials do not, so there is
nothing to "calibrate" in pharmaceutical trials. BUT they do use
control groups! That is in effect their calibration. Without the
control group the results mean nothing because there is no
"calibrated" base to compare them to. So in effect they most
definitely are calibrated or they are tossed out as very very bad
science and just plain junk. That is bias controlled testing 101.


That's not at all what calibration means, but just to humor you, let's pretend it is. In a DB drug trial, the intervention group needs to get a statistically better result than the control group. In an ABX test, the subjects need to get a statistically better result than chance. If the former is "calibrated," then the latter is, too.



Boy you are just getting this so wrong. Let me put this in the most
basic terms. Any such test needs negative and positive controls. What
are the negative controls in the ABX tests in either the Stereo Review
tests or Howard Ferstler's ridiculous test? Here is another question.
If two components sound different but the testee *chooses* to
"not bother to listen at all. I just kept pressing the same choice
over and over." are the results valid? Now let's see you navigate
these questions without using circular reasoning.
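The point about calibrating a test's sensitivity can be made concrete with a power calculation. A sketch (Python; the 16-trial count and the 70% per-trial discrimination rate are illustrative assumptions, not numbers from the thread):

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def significance_threshold(n, alpha=0.05):
    """Smallest score out of n trials that rejects 'guessing' at level alpha."""
    for k in range(n + 1):
        if binom_tail(k, n, 0.5) < alpha:
            return k
    return n + 1

n = 16
k = significance_threshold(n)   # 12 correct needed out of 16
power = binom_tail(k, n, 0.7)   # chance a genuine 70% discriminator reaches 12
print(k, round(power, 3))       # threshold 12; power is about 0.45
```

Under these assumptions, a listener who genuinely gets 70% of trials right still fails to reach significance more often than not in a 16-trial run, which illustrates why a single null result, without a sensitivity check, says little by itself.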



  #67   Report Post  
Posted to rec.audio.high-end
KH KH is offline
external usenet poster
 
Posts: 137
Default A Brief History of CD DBTs

On 12/19/2012 10:17 AM, Scott wrote:
On Dec 19, 3:39 am, KH wrote:
On 12/18/2012 10:18 AM, Scott wrote:


Well, some of us *are* engaged in *real* science on a daily basis, and
do understand the precepts.


And some of you clearly are not and clearly don't.


Yes, I am. Are you?

I would
think that if objectivists were genuinely interested in applying
science to the question of amplifier sound they would not move the
goal posts nor would they use ABX DBTs the way they have when it comes
to amplifier sound.


The thread has nothing to do with "amplifier" sound.


Then take it up with the moderators. The subject has been brought up
so I addressed it.


Take it up with the thread TITLE.


That being typically breaking out ABX and failing
to ever control for same sound bias or even calibrate the sensitivity
of the test. Without such calibration a single null result tells us
very little about what was and was not learned about the sound of the
components under test.


Careful reading would show I clearly stipulated such requirements need
to be defined and accounted for. Arguing in favor of my stated position
isn't much of a refutation.


Careful reading *of the entire thread* would show that 1. Other people
besides you are involved. 2. Others have stipulated such requirements
are either unnecessary or don't exist at all. Just read the quoted
text in this post. It's there and because it's there it's relevant.


I didn't respond to the "entire" thread, but to a post. If you think
every post has to be responsive to the original post, you will likely be
disappointed.

But of course my point was the fact that no scientist worth his or her
salt would ever make dogmatic claims of fact based on the results of
any single ABX DBT null. And if one thinks that claims from
subjectivists should alter that fact then they simply don't understand
how real science deals with and interprets real scientific data.


The "dogmatic" claims, as you describe them, were based on physics and
engineering principles,



Really? Once again we have a bogus waving of the science flag. Do tell
us what "physics" stands behind the claim? And let me remind you of just
what that claim was to begin with. In this thread it was claimed: On
Dec 14, 8:17 pm, Barkingspyder wrote: "The
nice thing about testing for difference as ABX does is that if there
is no difference detected you know that the more expensive one is not
any better sounding." So please show us how this claim was based on
physics and engineering principles. In what part of physics is it
stated that one can draw hard conclusions from one null result done at
home? What engineering principle supports this claim?


and the fact that listening tests, under
controlled conditions, have not shown results that dispute those
principles.


Please cite the principles you are referring to and the actual
listening tests. Hopefully for your sake you are not going to cite the
listening tests published in The Sensible Sound. ;-)

There was no claim, as I read it, that any individual test
was applicable to all conditions.


You might want to read this again then. On Dec 14, 8:17 pm,
Barkingspyder wrote: "The nice thing about
testing for difference as ABX does is that if there is no difference
detected you know that the more expensive one is not any better
sounding."


Once again, I was responding to a specific post and your response. That
would seem pretty obvious.


Quite the opposite in fact - where
are the tests that contradict the the physics and engineering principles?


There you go waving the science flag again with nothing of substance
behind it. Please cite the physics and engineering principles you
believe support the claim that "The nice thing about testing for
difference as ABX does is that if there is no difference detected you
know that the more expensive one is not any better sounding." After
all, this is the specific claim I was challenging and others
apparently, including yourself, are defending.


Again, you need to stay focused on the posts I was responding to if you
want to make sense of the discussion.


Understanding the true significance of a single null result
does not require consideration of what you or anyone else has been told by
other audiophiles.


That would rest entirely upon how the null hypothesis is constructed,
and may indeed include such claims.


No it does not. Real science builds its conclusions on an
accumulation of research.


No, every test has a conclusion, and is dispositive, if executed
accurately, within the limitations of the specific test.


Within the limitations of the specific test. And within the
limitations of a home-brewed ABX test one cannot reasonably conclude
from a single null result that "if there is no difference detected you
know that the more expensive one is not any better sounding." That is
an erroneous and very unscientific conclusion.


Again if one understands how science works
they should know the real standing of one singular null result. That
being it is most certainly not something one can reasonably close the
books on and say that it is final proof of no difference.


The "books" are clearly closed on that test group, under those test
conditions. To think otherwise is to deny the relevance of all tests
under all conditions,


"that test group" being what? All tests being what? All conditions
being what? Your claim is way overly vague to even address.



For that to affect the weight placed on any single
test result would be quite unscientific thinking.


Again, simply not accurate with respect to the world of possible
hypotheses. Any null result for a discrimination test evaluating
"obvious" differences will be significant, if not dispositive, for that
test and equipment, as long as the test is set up properly.


Sorry but you are plainly wrong. No scientist would ever put that much
stock in one test. It runs contrary to the very idea of
falsifiability, peer review or the idea of verification via repetition
of previous tests. Very, very unscientific.


Nonsense.


Nonsense to your claim of nonsense.

Do one tox study and argue that 90% severe adverse effects
doesn't mean anything.


Hold on here. You are putting words in my mouth. Where did I say the
test results of a single null "doesn't mean anything." Please quote
me. This is a typical straw man argument.


Gee, I thought "No scientist would ever put that much
stock in one test." was pretty clear. And please don't toss in the
dodge of "Barkingspyder said XXX"; I clearly stated that "Any null
result for a discrimination test evaluating "obvious" differences will
be significant, if not dispositive, for that test and equipment, as long
as the test is set up properly.", nothing more or less. Please explain
how that is nonsense in the context presented.



See how far that gets you.


It wouldn't get me very far, but I know better than to do that. But
that is not what I am doing here.

And, in any event,
that has zero to do with falsifiability. The results of any study stand
on their own unless and until they are demonstrated to be suspect, or
wrong. If the test is not designed to be falsifiable, it is a defective
design irrespective of how the data are analyzed or used. Perhaps you
need to brush up on what falsifiability means in test design.


Perhaps you need to be reminded again of the original claim I was
disputing.


You were not, in the post I responded to, referring to the original post
(or were doing so in a manner sufficiently cryptic to defy
identification); you were responding to Arny's post.


On Dec 14, 8:17 pm, Barkingspyder wrote:
"The nice thing about testing for difference as ABX does is that if
there is no difference detected you know that the more expensive one
is not any better sounding."



snip



Sorry, but you seem to be using a rather unique definition of "fact" as
"real scientists" make claims of fact for every such study.


Complete nonsense. And you say this after bringing up the null
hypothesis. You might want to read up on the null hypothesis and what
it proves and what it does not prove.
http://en.wikipedia.org/wiki/Null_hypothesis


I suggest you follow your own recommendation.


Oh I did. Here is what it says.
"The null hypothesis can never be proven. Data, such as the results of
an observation or experiment, can only reject or fail to reject a null
hypothesis"
Now what does that say about this claim? "The nice thing about
testing for difference as ABX does is that if there is no difference
detected you know that the more expensive one is not any better
sounding." Did you catch the word "KNOW" in there?


OK, so when the null hypothesis is "there are no significant differences
in sound between X and Y" it can be rejected when not true, right? The
qualifier - significant, un-subtle, unmistakable, etc. - defines the
sensitivity and precision required for the test (i.e. dismisses all of
the usual dodging and weaving about "forced choice stress", etc.). So,
given that the required sensitivity is trivial to achieve, we have the
following:

1. Physics and engineering principles, along with many years of
audiology and psychoacoustic experimentation provide an objective
threshold level below which differences are not detectable.
2. DBT tests of clearly sufficient sensitivity - given the claims - to
detect such differences if they exist, have all been negative.
3. There are no DBT data to contravene the expected results based on
engineering principles.
4. The null hypothesis is thus *accepted*, not disproven. This is the
basic mistake that most neophytes make. The null hypothesis is NOT that
two items/populations are different, it's that they are NOT different.
Thus one never needs to reject the null hypothesis to confirm
difference, just the opposite.

When one accepts the null hypothesis, one accepts that there is no
difference between the subjects/items/populations. So when you ask "Now
what does that say about this claim?", what it says is that one cannot
reject the null hypothesis that *they are the same*.

So no, accepting the null hypothesis doesn't "prove" there is no
difference. What it shows is that when clearly sensitive enough methods
are employed for evaluation, no differences are found. In this context,
where there are clear objective reasons why there *should* be no
differences, there is no additional burden on the proponents of the null
hypothesis.

snip
And you are
conflating "facts" with "conclusions".


I am? Here is the conclusion I am challenging "The nice thing about
testing for difference as ABX does is that if there is no difference
detected you know that the more expensive one is not any better
sounding." His conclusion is a claim of fact. So who exactly is
conflating facts with conclusions?

The only relevant conclusion I
saw in the subject post had to do with lack of data contravening known
physical and engineering principles, not citing any single test as
globally applicable.


Cherry-picking is also very unscientific. If that is the only
conclusion you saw in this thread


I wasn't responding to the entire thread - merely to you, and the post
you replied to. That should have been quite clear.


then you missed the very conclusion
I have challenged in this thread. Just so you don't miss it again.
"The nice thing about testing for difference as ABX does is that if
there is no difference detected you know that the more expensive one
is not any better sounding."


And this is, indeed, accurate in the situation where you have physical
or engineering based information that is corroborated by the ABX data.
ONLY in the presence of contravening data would this conclusion be
suspect. Where are those data?

Keith
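The sensitivity question the two posters are circling can be put in numbers. A minimal sketch (Python; the 25-trial, 17-correct criterion matches the test discussed later in this thread, while the listener who is genuinely right 60% of the time is an assumed example for illustration, not a figure from any cited test):

```python
from math import comb

def binom_tail(n, k_min, p):
    """P(X >= k_min) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

n, criterion = 25, 17  # 17/25 is the usual cutoff for a 5%-level test over 25 trials

# False-positive rate: how often a pure guesser (p = 0.5) passes the criterion.
alpha = binom_tail(n, criterion, 0.5)

# Power: how often a listener who is genuinely right 60% of the time passes.
power = binom_tail(n, criterion, 0.6)

print(f"alpha = {alpha:.4f}, power at p=0.6 = {power:.4f}")
```

With this criterion a pure guesser passes about 5% of the time, but a listener who is truly right on 60% of trials is detected well under a third of the time — which is why a single null result from such a test supports, rather than proves, the null hypothesis.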


  #68   Report Post  
Posted to rec.audio.high-end
[email protected] nabob33@hotmail.com is offline
external usenet poster
 
Posts: 54
Default A Brief History of CD DBTs

On Wednesday, December 19, 2012 10:25:00 PM UTC-5, Scott wrote:
On Dec 19, 9:41 am, wrote:

You quote Howard Ferstler saying, "Even though a 68% correct score looks like there may have been significant audible differences with the 17 out of 25 mindnumbing trials I did, that score does achieve a 95% confidence level, indicating that the the choices were still attributable to chance."

You quote John Atkinson saying, "In other words, your own tests suggested you heard a difference..."

Howard is correctly interpreting the statistics here. John is not. A confidence interval is a hard target, not a rough idea you only have to get close to.

Um no, Howard interpreted the data backwards. He took 95% confidence
level to mean that it was a 95% likelihood that his results were due
to chance. The opposite is true. Atkinson was right. Ferstler was
wrong.


There is no point in carrying on a discussion about statistics with someone who does not
understand the most basic principles of statistics.

snip

Seriously? You think an ABX machine that is giving a positive result
when you hit the same selection over and over again is not
malfunctioning?


He did not get a positive result. If you refuse to accept that, there is nothing more to say.

bob

  #69   Posted to rec.audio.high-end
Scott[_6_]
Default A Brief History of CD DBTs

On Dec 20, 7:53 am, wrote:
On Wednesday, December 19, 2012 10:25:00 PM UTC-5, Scott wrote:
On Dec 19, 9:41 am, wrote:
You quote Howard Ferstler saying, "Even though a 68% correct score looks like there may have been significant audible differences with the 17 out of 25 mindnumbing trials I did, that score does achieve a 95% confidence level, indicating that the the choices were still attributable to chance."

You quote John Atkinson saying, "In other words, your own tests suggested you heard a difference..."

Howard is correctly interpreting the statistics here. John is not. A confidence interval is a hard target, not a rough idea you only have to get close to.
Um no, Howard interpreted the data backwards. He took 95% confidence
level to mean that it was a 95% likelihood that his results were due
to chance. The opposite is true. Atkinson was right. Ferstler was
wrong.


There is no point in carrying on a discussion about statistics with someone who does not
understand the most basic principles of statistics.

snip

Seriously? You think an ABX machine that is giving a positive result
when you hit the same selection over and over again is not
malfunctioning?


He did not get a positive result. If you refuse to accept that, there is nothing more to say.

bob


This is a really old and tired debate. But I just want to clarify your
position on one thing before *I* close the books on this one. So it is
your position that Howard Ferstler is right when he says that his
results show a *95% confidence level that the results were due to
chance* and John Atkinson is wrong when he says the results show the
opposite, that they show a *95%, or more precisely a 94.6% confidence
level that the results were not due to chance?* Because *that is what
they actually claimed.* Just for the record are you really saying
Howard got that right and John got that wrong?

  #70   Posted to rec.audio.high-end
nabob33@hotmail.com
Default A Brief History of CD DBTs

On Thursday, December 20, 2012 12:43:23 PM UTC-5, Scott wrote:

This is a really old and tired debate. But I just want to clarify your
position on one thing before *I* close the books on this one. So it is
your position that Howard Ferstler is right when he says that his
results show a *95% confidence level that the results were due to
chance* and John Atkinson is wrong when he says the results show the
opposite, that they show a *95%, or more precisely a 94.6% confidence
level that the results were not due to chance?* Because *that is what
they actually claimed.* Just for the record are you really saying
Howard got that right and John got that wrong?


Neither is being precisely correct, but Howard at least got the conclusion
right: His result did not achieve a 95% confidence level, and therefore he
cannot reject the null hypothesis. John is, as they say, lying with
statistics by trying to reset the confidence level after the fact. Had John
said that there was a 94.6% probability that Howard's result was not due to
chance, he would have been correct. To use the term "confidence level" in
this context, and to further state that this "suggested" that Howard heard
a difference, is an abuse of statistics. Your repeated claim that Howard
got a positive result is similarly mistaken.

bob
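For reference, both the 95% shortfall and the 94.6% figure the posters cite fall straight out of the binomial distribution. A quick check (Python, exact integer arithmetic, no statistics library required):

```python
from math import comb

n, correct = 25, 17

# One-sided p-value: the chance that a pure guesser (p = 0.5)
# scores 17 or more correct out of 25 trials.
p_value = sum(comb(n, k) for k in range(correct, n + 1)) / 2**n
confidence = 1 - p_value

print(f"p = {p_value:.4f}, confidence = {confidence:.1%}")
```

So 17 of 25 is just short of the conventional 95% criterion: one may not reject the null hypothesis at that level, even though such a score would arise from pure guessing only about 5.4% of the time.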



  #71   Posted to rec.audio.high-end
Scott[_6_]
Default A Brief History of CD DBTs

On Dec 20, 9:43 am, ScottW wrote:
On Dec 19, 7:25 pm, Scott wrote:
On Dec 19, 9:41 am, wrote:

On Wednesday, December 19, 2012 6:39:10 AM UTC-5, Scott wrote:
Well, I am so glad you asked.


Not sure why. Let's take a look at the two key elements of the "data" you present.


You quote Howard Ferstler saying, "Even though a 68% correct score looks like there may have been significant audible differences with the 17 out of 25 mindnumbing trials I did, that score does achieve a 95% confidence level, indicating that the the choices were still attributable to chance."


You quote John Atkinson saying, "In other words, your own tests suggested you heard a difference..."


Howard is correctly interpreting the statistics here. John is not. A confidence interval is a hard target, not a rough idea you only have to get close to.


Um no, Howard interpreted the data backwards. He took 95% confidence
level to mean that it was a 95% likelihood that his results were due
to chance. The opposite is true. Atkinson was right. Ferstler was
wrong.


The quote implies Howard took more than one test (each "test"
consisting of 25 trials). If he took 20 tests then it is quite
likely that 1 of the 20 will indicate a false positive result when
using 95% confidence as the conclusion.
It wouldn't be the first time Atkinson "cherry-picked" some data.
As far as just "punching a button"...I would agree the test requires
an honest effort to discern a difference even if consciously...the
subject does not believe one exists.
That might disqualify some people as a useful subject for such a test.


It does imply that. But it wasn't what happened. What was reported in
this particular case was actually one test with 25 trials.


A relatively simple way to control for this is to use multiple test
amps and to include in one test a control pair of amps that the
subject agrees do sound different. The subject should know this
pairing will occur but not know when. Even if the subject decides in
a test that they cannot hear a difference in the amps and start
guessing does not invalidate the results. Failing to correctly
identify difference in the control pair would.

Scott, I completely agree that that is one very easy and reasonable
way to show an ABX test is in some way sensitive to differences. But
none of these ABX DBTs that are being touted as scientific proof on
the subject did that or anything like that even though all these tests
involved people with very strong opinions that all amps sound the
same. Everyone knew that it was ABX tests of amps. In the one Stereo
Review article about their big amplifier challenge one of the amps was
an old Futterman OTL. Even this amp fell into the sounds the same
category in the analysis of that data. If an underpowered antique OTL
isn't being heard as different that should tell you something about
that set of tests. Since then the objectivist camp has moved the goal
posts. Instead of claiming all amps sound the same they claim that all
amps sound the same or are not working properly to accommodate the
obvious fact that many tube amps most definitely sound different. So
what does the failure of the testees to hear an underpowered antique
Futterman OTL tell you about the sensitivity to differences of those
tests? And yet this old article is still being dragged out as
scientific proof.

And please note that Stereo Review did review any number of tube amps
and preamps and claimed in every single case that they all sounded the
same as every other amp and preamp.
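The control-pair protocol described above is straightforward to operationalize. A sketch (Python; the amplifier names, the seeding, and the 25-trials-per-session figure are placeholders for illustration, not details from any published test):

```python
import random

def build_sessions(test_pairs, control_pair, trials_per_pair=25, seed=0):
    """Return a shuffled list of sessions: each entry is (pair, X-assignments).

    The control pair - two amps the subject agrees sound different - is mixed
    in among the test pairs. The subject knows a control session exists but
    not which one it is; failing the control invalidates the run.
    """
    rng = random.Random(seed)
    sessions = []
    for pair in test_pairs + [control_pair]:
        # For each trial, X is randomly assigned to A or B.
        xs = [rng.choice("AB") for _ in range(trials_per_pair)]
        sessions.append((pair, xs))
    rng.shuffle(sessions)  # hide where the control pair falls in the order
    return sessions

sessions = build_sessions(
    test_pairs=[("amp1", "amp2"), ("amp3", "amp4")],
    control_pair=("tube_otl", "solid_state"),
)
for pair, xs in sessions:
    print(pair, "".join(xs))
```

The design choice here is exactly the one argued for above: a null result on the test pairs is only meaningful if the same subject, in the same sitting, correctly discriminates the hidden control pair.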

  #72   Posted to rec.audio.high-end
nabob33@hotmail.com
Default A Brief History of CD DBTs

On Thursday, December 20, 2012 12:43:32 PM UTC-5, ScottW wrote:

It wouldn't be the first time Atkinson "cherry-picked" some data.


No, and as for criticizing the statistical reporting in other magazines, he
should perhaps refrain from tossing stones out of his glass house. I recall
one breathless report of a DBT proving an audible difference because the
subjects "heard a difference fully half the time."

bob

  #73   Posted to rec.audio.high-end
Arny Krueger[_5_]
Default A Brief History of CD DBTs

"Scott" wrote in message
...

But none of these ABX DBTs that are being touted as scientific proof on
the subject...


Since nobody who really understands science and statistics is claiming that
ABX DBTs are scientific proof of anything, you would appear to be arguing
with yourself.

It is fundamental to science that all of its findings are provisional until
better findings are obtained. Therefore the very concept of some kind of
final "scientific proof" is itself nonsense.

In the one Stereo Review article about their big amplifier challenge one of
the amps was an old Futterman OTL. Strike 1


The amp in question was not an old Futterman OTL but rather it was a modern
amplifier (in new product production at or near the time of the tests) that
happened to pattern itself somewhat after the original Futterman OTL tubed
amp. There were many differences. If memory serves it contained solid state
devices, perhaps some in the signal path.

If an underpowered antique OTL isn't being heard as different that should
tell you something about that set of tests.


The amp in question was not an antique and was never operated beyond its
realm of linear operation so it was not underpowered. Strike 2

Your lack of technical understanding of OTL tubed amplifiers seems to
include a lack of appreciation for the fact that an OTL amplifier removes
any output transformer from the signal path, thus removing a large source of
inherent nonlinear distortion and bandwidth limits.

If there was any kind of a tubed amplifier that would be most likely to
sound like an equally transformerless SS amplifier, it might be one
patterned on the old Futterman design. Strike 3.



  #74   Posted to rec.audio.high-end
nabob33@hotmail.com
Default A Brief History of CD DBTs

On Thursday, December 20, 2012 4:16:46 PM UTC-5, Arny Krueger wrote:

It is fundamental to science that all of its findings are provisional until
better findings are obtained.


It's worth noting that my two DBT posts have drawn nearly 100 responses at this point, and yet not a single shred of "better findings" has been presented, despite assurances by at least two posters that such findings exist.

It's almost enough to make you doubt that any counter-evidence exists.

bob

  #75   Posted to rec.audio.high-end
Audio_Empire[_2_]
Default A Brief History of CD DBTs

On Thursday, December 20, 2012 9:43:32 AM UTC-8, ScottW wrote:
On Dec 19, 7:25 pm, Scott wrote:
On Dec 19, 9:41 am, wrote:
On Wednesday, December 19, 2012 6:39:10 AM UTC-5, Scott wrote:
Well, I am so glad you asked.


Not sure why. Let's take a look at the two key elements of the "data" you present.


You quote Howard Ferstler saying, "Even though a 68% correct score looks like there may have been significant audible differences with the 17 out of 25 mindnumbing trials I did, that score does achieve a 95% confidence level, indicating that the the choices were still attributable to chance."


You quote John Atkinson saying, "In other words, your own tests suggested you heard a difference..."


Howard is correctly interpreting the statistics here. John is not. A confidence interval is a hard target, not a rough idea you only have to get close to.


Um no, Howard interpreted the data backwards. He took 95% confidence
level to mean that it was a 95% likelihood that his results were due
to chance. The opposite is true. Atkinson was right. Ferstler was
wrong.


The quote implies Howard took more than one test (each "test"
consisting of 25 trials). If he took 20 tests then it is quite
likely that 1 of the 20 will indicate a false positive result when
using 95% confidence as the conclusion.
It wouldn't be the first time Atkinson "cherry-picked" some data.
As far as just "punching a button"...I would agree the test requires
an honest effort to discern a difference even if consciously...the
subject does not believe one exists.
That might disqualify some people as a useful subject for such a test.


A relatively simple way to control for this is to use multiple test
amps and to include in one test a control pair of amps that the
subject agrees do sound different. The subject should know this
pairing will occur but not know when. Even if the subject decides in
a test that they cannot hear a difference in the amps and start
guessing does not invalidate the results. Failing to correctly
identify difference in the control pair would.


ScottW


I still believe that a DBT/ABX of something as
subtly different as amplifiers should be done with each amp being
auditioned for as much as a half hour before switching to the other
amp. Use the same cuts from test CDs for each session, played in
the same order. Of course careful level matching and strict
double-blindness must still be maintained. I suspect that such a test might
uncover differences that short term, instantaneous switching doesn't
reveal.


  #76   Posted to rec.audio.high-end
nabob33@hotmail.com
Default A Brief History of CD DBTs

On Thursday, December 20, 2012 6:37:24 PM UTC-5, Audio_Empire wrote:

I still believe that a DBT/ABX of something as
subtly different as amplifiers should be done with each amp being
auditioned for as much as a half hour before switching to the other
amp. Use the same cuts from test CDs for each session, played in
the same order. Of course careful level matching and strict
double-blindness must still be maintained. I suspect that such a test might
uncover differences that short term, instantaneous switching doesn't
reveal.


As long as I've been reading RAHE (which is going on 15 years, I think), I've seen this belief expressed. One of these believers ought to try it sometime. Perhaps they will teach the world of psychoacoustics something. (I am not holding my breath.)

bob

  #77   Posted to rec.audio.high-end
Audio_Empire[_2_]
Default A Brief History of CD DBTs

On Thursday, December 20, 2012 1:16:46 PM UTC-8, Arny Krueger wrote:
"Scott" wrote in message
...

But none of these ABX DBTs that are being touted as scientific proof on
the subject...


Since nobody who really understands science and statistics is claiming that
ABX DBTs are scientific proof of anything, you would appear to be arguing
with yourself.

It is fundamental to science that all of its findings are provisional until
better findings are obtained. Therefore the very concept of some kind of
final "scientific proof" is itself nonsense.

In the one Stereo Review article about their big amplifier challenge one of
the amps was an old Futterman OTL. Strike 1


The amp in question was not an old Futterman OTL but rather it was a modern
amplifier (in new product production at or near the time of the tests) that
happened to pattern itself somewhat after the original Futterman OTL tubed
amp. There were many differences. If memory serves it contained solid state
devices, perhaps some in the signal path.


Tube amps are the exception. Many are designed to have the "tube sound" and
a DBT with a good solid-state amp will show definite differences that are by no
means subtle (in that they stick out like a sore thumb). OTL amps are even more
so. Unless it uses a pair of transistors as the output stage, an OTL has a relatively
high output impedance even if you parallel 8 pairs of output tubes! They just
can't be as neutral as a good S-S amp. The only time I've ever heard an OTL amp
sound great was when it was designed to be coupled to an electrostatic speaker.
Talk about a marriage made in heaven (or some-such place): the high output
impedance of the OTL and the high input impedance of the ESL, if designed to
be used together, eliminate two transformers.


If an underpowered antique OTL isn't being heard as different that should


tell you something about that set of tests.


Unless, as I said above, the output stage is solid-state.

The amp in question was not an antique and was never operated beyond its

realm of linear operation so it was not underpowered. Strike 2


Again, it depends upon the OTL amp's output impedance. Futterman
did make several hybrid amps with tubed input and solid state
output. He called them Moscode amps. One was 150 Watts/channel and the
other was 300 Watts/Channel. They were called the Moscode 300 and the
Moscode 600 respectively. Was it one of those?



Your lack of technical understanding of OTL tubed amplifiers seems to

include a lack of appreciation for the fact that an OTL amplifier removes

any output transformer from the signal path, thus removing a large source of

inherent nonlinear distortion and bandwidth limits.


....While introducing a fairly high output impedance unless you parallel a
dozen output tubes, and still you won't get the really low impedance
looking back from the speaker which is common with almost any
solid-state amp. Sometimes that's a good trade-off and sometimes it isn't.


If there was any kind of a tubed amplifier that would be most likely to

sound like an equally transformerless SS amplifier, it might be one

patterned on the old Futterman design. Strike 3.


No, I don't think so. Most OTL amps don't have the speaker damping characteristics
that SS amps have. IIRC, the Futterman design was an exception and had an output
impedance of something like 0.5 Ohms (don't take that to the bank, I may be
misremembering here). It's been a long time since I've auditioned a pair of them. The
only thing that I thought the OTL didn't do as well as a SS amp (or tube amp with output
transformers) is bass. The Futterman OTL amp gave really "wooly" bass.
  #78   Posted to rec.audio.high-end
Audio_Empire[_2_]
Default A Brief History of CD DBTs

On Thursday, December 20, 2012 6:46:30 PM UTC-8, wrote:
On Thursday, December 20, 2012 6:37:24 PM UTC-5, Audio_Empire wrote:

I still believe that a DBT/ABX of something as
subtly different as amplifiers should be done with each amp being
auditioned for as much as a half hour before switching to the other
amp. Use the same cuts from test CDs for each session, played in
the same order. Of course careful level matching and strict
double-blindness must still be maintained. I suspect that such a test might
uncover differences that short term, instantaneous switching doesn't
reveal.



As long as I've been reading RAHE (which is going on 15 years, I think), I've seen this belief expressed. One of these believers ought to try it sometime. Perhaps they will teach the world of psychoacoustics something. (I am not holding my breath.)



bob


I have tried it. And if there are any differences, one has a much better
chance of uncovering them if one really listens to the devices being
auditioned. You can't do that when two devices are being swapped
out for each other every few seconds (or even every couple of minutes).
As long as the auditions are truly double-blind, and the levels are carefully
matched to less than a dB, and the same varied demonstration material is
used in each instance, they are still true DBTs.
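The level-matching requirement is straightforward to quantify from voltmeter readings at the speaker terminals. A small sketch (Python; the voltage values are made-up example readings, not measurements from any test in this thread):

```python
from math import log10

def level_diff_db(v1, v2):
    """Level difference between two measured output voltages, in dB."""
    return 20 * log10(v1 / v2)

# Matching "to less than a dB" tolerates roughly a 12% voltage mismatch;
# the tighter 0.1 dB often recommended for DBTs requires about 1.2%.
print(level_diff_db(2.83, 2.52))   # roughly 1 dB apart
print(level_diff_db(2.83, 2.80))   # under 0.1 dB apart
```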
  #79   Posted to rec.audio.high-end
Arny Krueger[_5_]
Default A Brief History of CD DBTs

"Audio_Empire" wrote in message
...
On Thursday, December 20, 2012 1:16:46 PM UTC-8, Arny Krueger wrote:


"Scott" wrote in message


The amp in question was not an old Futterman OTL but rather it was a modern
amplifier (in new product production at or near the time of the tests) that
happened to pattern itself somewhat after the original Futterman OTL tubed
amp. There were many differences. If memory serves it contained solid state
devices, perhaps some in the signal path.


Tube amps are the exception.


Contrary to popular belief, all tubed amps aren't the same. ;-)

Many are designed to have the "tube sound" and a DBT with a good
solid-state amp will show definite differences that are by no means subtle
(in that they stick out like a sore thumb). OTL amps are even more so.
Unless it uses a pair of transistors as the output stage, an OTL has a
relatively high output impedance even if you parallel 8 pairs of output
tubes! They just can't be as neutral as a good S-S amp. The only time I've
ever heard an OTL amp sound great was when it was designed to be coupled to
an electrostatic speaker. Talk about a marriage made in heaven (or
some-such place): the high output impedance of the OTL and the high input
impedance of the ESL, if designed to be used together, eliminate two
transformers.


We now have proof of complete ignorance of the actual test conditions and
even the UUTs that were used in a well-known ABX test that has been libeled in
this thread.

I see no appropriate reaction to that regrettable fact.

That kind of unrepentant ignorance raises serious doubts about any other
attempts at superior expertise or even basic credibility from the same
source.

The above is just baseless speculation presented as fact.

There is no reason for me to waste my time rebutting what appears to me to
be fantasy. I've got my facts straight. The people posting here from the
scientific viewpoint appear to have their facts straight.



  #80   Posted to rec.audio.high-end
Arny Krueger[_5_]
Default A Brief History of CD DBTs

"Audio_Empire" wrote in message
...

I still believe that the only way to do a DBT/ABX of something as

subtly different as amplifiers should be done with each amp being
auditioned for as much as a half hour before switching to the other
amp.

Been there, done that.


Use the same cuts from test CDs for each session, played in

the same order.

Been there, done that.

Of course careful level matching and strict double-

blindness must still be maintained.

Been there, done that.


I suspect that such a test might

uncover differences that short term, instantaneous switching doesn't
reveal.

Didn't happen. So much for this round of hoops and sticks. ;-)

This appears to be a terribly unbalanced discussion. On the one side we seem
to have little but denial and speculation. On the other side we have over 35
years of hands-on experience with highly sophisticated real world testing of
dozens of amplifiers and equal numbers of DACs, signal processors and
players, some of it documented in the largest circulation consumer and
professional audio publications around.

