Reply
 
Thread Tools Display Modes
  #81   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"Scott" wrote in message


they were not current for sure. These tests were done in
service of the production of his 1994 remasters of the
Mercury classical catalog. This particular anomly was
discovered when he was testing the physical product from
various manufacturers. He discovered that with certain
CDPs the CDs from some manufaturers were quite less than
transparent compared to the masters. This observation was
later confirmed in a number of other tests conducted by
other parties.


There is an extant AES paper that was presented by Dennis Drake in 1992
presented in its entirety at this URL:

http://www.themusiclab.net/aespaper.pdf

It says:

"As Mrs. Cozart Fine and I began our evaluation sessions in April 1989, it
became
very clear to us that the A/D conversion process was a very critical step in
our production
work. As the producer once described it, the sounds from different
converters were
all different "bowls of soup". We began auditioning every A/D converter that
we could
obtain. Our test methodology was simple: while playing an original master as
source, we
would switch between the direct output of our console and the output of the
digital
chain. The digital chain consisted of the converter under test feeding a
Sony 1630 PCM
Processor. The final link in the chain was the Apogee filter modified D/A
section of the
Sony 1630. At times, we would substitute different D/A converters for
listening evaluations,
but we always returned to the Sony converters or the D/A's of our Panasonic
3500
DAT machine for reference purposes.

"Our monitoring set-up consisted of a Cello Audio Suite feeding balanced
lines to
Cello Performance Amplifiers, which in turn were driving B & W 808 Monitor
Loudspeakers. As we compared the various digital converters to the playback
of the
actual analog source, we found that the soundstage of the orchestra was
always
reduced in width when listening to the digital chain. We also found that
many A/D converters
exhibited a strident string sound, unnatural sounding midrange, and a loss
of air
or ambience around the instruments"

This above formal presentation of the relevant so-called Denni Drake tests
includes many details that are different from what we have seen presented on
RAHE. For one thing, the evaluation was not of CD players, and for another,
there is no evidence of level matching, time synching, or bias controls.


  #82   Report Post  
Posted to rec.audio.high-end
[email protected] khughes@nospam.net is offline
external usenet poster
 
Posts: 38
Default You Tell 'Em, Arnie!

Harry Lavo wrote:
wrote in message ...
Harry Lavo wrote:
wrote in message

snip

snip
I pointed out to you several times that in fact tests of statistical
"difference" are routinely applied to the scalars in using such tests.

For "preference" yes. Showing that the results of group A are or are not
statistically different is not at all the same as a *difference* (i.e. a/b
with other variables - such as participants - held constant) test with
sufficient replicates to evaluate the response statistically.


How can you say that? I was proposing two monadic samples of 300 people
each. Much more statistical;u reliable than seventeen sample individual
tests.


IF the focus of the test is the same. Musical enjoyment is NOT the same
focus as evaluating a (usually) just detectable difference.

In fact the JAES has published peer-reviewed articles that show that
at seventeen samples, even the statistical sampling guidelines commonly used
are in error...slightly in favor of a "null" result.


Again, I'm not questioning population size. But you can sample a
thousand people and the resulting statistics are worthless if the test
is insensitive to the parameter of interest.

Thus the base tests for validation were large samples with proven
statistical differences in total and across attributes, across a chosen
and agreed upon population of audiophiles. All the ABX test had to do
was to show a similar set of overall differences among a fairly large
sameple of people, using 17 point samples, to validate. That seemed too
threatening, I guess.

Threatening, no. Uninformative, certainly. You want to use *as a
reference* a test that a) has not been validated for difference
distinction and b) that focuses on another parameter altogether, i.e.
preference, and c) presents numerous confounding variables ( "...across
attributes..." as you stated above).


This is a test technique used widely and statistical significance is just
that....significant. Your charges are pure fantasy.


OK, where was your monadic or proto-monadic test previously used for
discrimination of low level or just noticeable audible differences?
Show where that's been validated or is routinely used. Ever?

You can't have
statistically significant difference in preference without there being a
difference.


Very true. And *what* is that difference? In your test you cannot
unambiguously attribute the preference to the physical difference in the
A/B systems. Why? Because you're using an indicator
(enjoyment/appreciation/satisfaction) that is not directly linked to the
parameter you're trying to measure. An indicator, as I pointed out
previously, that is clearly influenced by many factors outside of the
physical systems being evaluated.


snip

Looke Keith. People are listening to MUSIC. THEY are asked to rate the
experience. THEY are asked to RATE the reproduced music on a "liking" scale
(that's how they express their "preference".) They are also asked to rate
specific musical attributes (the sound of the violins, for example, or the
sound of the tympani, or the sound of the guitars, etc.)


OK, and...

When you have two large samples, exposed to the same music, and with only
one variable changed (lets say the CD player use) if you get statistically
significant differences in the ratings you KNOW that it is the variable
creating it. The ratings are very similar to those use in ABC/hr, and is
one of the reason it is prefered to ABC...it measures quality differences,
not just differences.


Well, no statistician will ever say you "KNOW" something based on
statistical difference. You know, the old correlation/causation thing?
But Harry, you cannot change only ONE thing in your test. You changed
the CD player, AND the sample population. You're assuming homogeneity
in the sample population relative to a rather esoteric parameter (i.e.
musical enjoyment) that may or may not be correct.


There IS *no direct preference* expressed in monadic testing....


Yes, I got that Harry.

the only way
a preference shows up is because of statistical sampling of two large bases
or respondents.


Yes, I got that too Harry.

That is WHY I proposed this very expensive and cumbersome
test as the "touchstone" demonstrating that a real difference in preference
exists. Isn't that after all what we as audiophiles hope to achieve in our
own testing? And as an added benefit, it is able to give an indication of
in what area of musical reproduction that rating preference exists.

Just like two years ago, you just keep raising the same old cannards and in
the process show you really don't stop to understand the technique I am
expousing. If you doubt it, consult a real experiemental psychologist or
statistician.


Gratuitous ad hominem comment noted. Thanks Harry, that's really
showing how *you* "were not...met by constructive dialog but rather
repeated attempts to ridicule and disparage...". Just like two years
ago, you are making the same basic argument - if I don't agree with you,
I'm just ignorant.


snip

*Your* conclusion: ABX has been shown to be inaccurate, and is thus
discarded. No other conclusion is possible if the monadic preference test
is accepted as a suitable standard.


First, 100 is too small a sample.


It's an "example", not a protocol.

I posited 300, which is generally
accepted as large enough to show small rating differences if they exist due
to the variable under test.
Second, the test in both tests is for positive difference. There is no
sense in a statistical "no difference". The applied statistics are
different, but the concept is the same.


"Positive" difference?


The worst that can happen is that there is "no difference" (signicant at the
95% level) in overall rating, but there are differences in attribute ratings
(again at the 95%+ level). However these are still valuable, and show that
their *are* perceived differences in sound atributes even if their is no
difference in overall ratings. So in this case, you conclude that their are
audible differences.

If both the overall rating and the individual attribute ratings show no
difference at the 95% level, then you can conclude that in all liklihood
their is no difference due to the variable under test.


The variable under test has already been shown to be detectable in ABX -
that was stipulated in the test case. What that result shows is that
your preference test was insensitive to the difference that ABX testing
identified.


The point of this is to find a variable that does show up as a difference in
monadic preference amongst a large group of audiophiles (or perhaps
sub-seqment of same) in a test that is equally double-blind and which is
evaluative and ratings based, relaxed, music-focused,


You can easily do that by having Population A listen to music through
$20 computer speakers, and Population B listen through Watt/Puppy
speakers. No difference in kind from your example of a CD player
change. Now, we'll identify many differences in the preference test, and
we'll be able (though with some logistical difficulties) to do an ABX
test on A and B, and X will indeed be easily identifiable. Your exact
criterion of "a variable that does show up as a difference in
monadic preference amongst a large group of audiophiles" was met, and
ABX confirmed it. Ok, so now ABX is validated right? If not, then why?

See the problem Harry? You propose a very cumbersome test to try and
*find* some artifact that preference testing can identify that ABX can't
confirm. That's not validation, that's a fishing expedition. Since you
do not have any boundary limits for either test, you cannot test at or
near those boundaries, nor can you demonstrate that one test has
sufficient precision to challenge or validate the other. The speaker
example above is clearly a gross situation, but the concept is the same
since the boundaries are undefined. You can find any number of such
variables that your test confirms, and ABX then confirms, but you can
still postulate that there are "other" nuances or more subtle
differences that ABX would not be able to find. That will *always* be
the case until the boundaries are defined, meaning that you need never
accept any result as definitive.

and which doesn't
require a forced choice.


Another canard that is not proven, Harry. No matter what the scalar is,
or how it's phrased, placing a rating on "how the violins sound" for
example is a forced, evaluative choice. I've yet to seen any evidence to
suggest that "A or B" is a more cognitively disruptive choice than "how
do the violins sound?" which requires internal comparisons and choices
between the current sound and your own internal 'reference' for violins.
And clearly, if your scalars for "sound attributes" are of sufficient
detail and specificity to allow you to conclude A and B are different
(as you claim above), when the overall scoring is not significantly
different, then evaluation and comparison choices are unavoidable.

Then to subject the same variable to the standard abx tests for difference
and find out how well that test fares in spotting those same
differences


ABX can't spot "these same differences" because it only looks for
difference.

....how many people succeed, how many fail, how obvious does the
difference seem to show up. Only if ABX failed to detect the differences in
any appreciable way would ABX be judged a failure.


Difference, singular. A and B are distinguishable or they are not. It's
a binary result. I don't know how "...in any appreciable way..." could
even apply to ABX.

That's why I call it a
validation test. If ABX is doing its job and is as good at evaluating
musical reproduction differences as it is in spotting distortion artifacts
among trained listeners, then this test should be a piece of cake for
ABX


You're ignoring the test case altogether. Why? Why not answer the
question posed? In the test case described, there was already an ABX
verified difference. It was measurable, physically, and it did not show
up as a change in preference. In that case, the "reference standard" is
shown to be inappropriate. So, are you saying that the test case, as
presented (OK, let's modify it to include 300 individuals for your
test), could not happen?

.....especially as you keep insisting that the monadic test is less
sensitive than ABX.


For what it's designed for yes. And that is NOT for evaluating preference.

However, insisting that such a test not be done because it is not needed
("proved science") or somehow inappropriate is simply begging the question.


Strawman Harry, I said nothing about "proved science". And I did not
say that validation was inappropriate, I pointed out why I believe your
proposed test is inappropriate for the intended use. Not the same thing
at all.


I propose that we see. That's a validation test.


If, as in the test case described, there are some number of instances
where ABX does exactly what it was designed for (as I understand the
reasoning), and detects very subtle or just noticeable artifacts that
distinguish systems A and B, but those artifacts do not positively or
adversely affect the music played through those systems (above the
limits of perception), then your test *cannot* confirm those
differences, and would thus be inappropriate. As in the case of
speakers discussed above, your test would "validate" ABX quite handily,
but what would that prove?

I have done just that above.

Well, not really.

Keith Hughes


[ We are getting too close to personal attacks here. Please,
everyone in this thread, tone it down. Take a few breaths
before hitting Send and make sure you're arguing points
rather than people. -- dsr ]
  #83   Report Post  
Posted to rec.audio.high-end
John Stone John Stone is offline
external usenet poster
 
Posts: 117
Default You Tell 'Em, Arnie!

On 7/11/09 11:34 PM, in article , "Sonnova"
wrote:



Not sure what you mean by "quasi-balanced". All my interfaces are
se.


Quasi-balanced - Most premium interconnects are made this way. In a quasi
balanced coaxial interconnects, there are three conductors: The shield, and
two wires inside the shield. One of the wires is "hot", the other is "return"
and then there's the shield. The "hot" conductor is affixed to both RCA
connector tips, the return is affixed to both RCA connector barrels
(completing the circuit) and the shield is connected to the barrel on only
one end. It covers the whole cable from end to end but since it is connected
at only one end, it carries NO SIGNAL and is a shield only. Cheap
interconnects are only half shielded because the shield is also the return.
One can easily tell a Quasi-balanced interconnect because there is always
arrows on either the connectors, printed at intervals on the cable itself, or
on labels affixed to the cable. The arrows point AWAY from the end that has
the shield connected to the barrel of the RCA plug. Proper proceedure is to
use the pre-amp or the integrated amp/receiver as the common ground plane so
that all arrows on all cables point away from that component. Most people
wrongly interpret the arrow to mean signal direction and usually arrange
these cables with the arrow pointing in the direction of signal flow. I.E.
FROM the tuner or CD player TO the preamp, FROM the pre-amp TO the power amp,
etc. This is wrong the arrows should point away from the pre-amp/
integrated/receiver toward everything else. So there is one common reference
point for all of the shields. THis arrangement is best for the lowest noise
and prevents ground-loops. I also ground my preamp to a cold water pipe, but
this is optional, especially if your listening room has three prong mains
plugs where one is grounded.



The term "quasi balanced" is very misleading. There's nothing more balanced
about such a configuration than there is with a set of single conductor
shielded interconnects. The output of the preamp and input to the amp remain
in unbalanced configuration, and the ground, therefore, still carries the
signal return. The components see no real difference between this
configuration and a single conductor with shield.
I also can't see how this configuration can have any impact whatsoever on a
ground loop problem. Such a condition results from connecting the grounds of
two components that are sitting at two different potentials. And as I said,
electrically speaking, the interconnection is exactly the same; hot to hot,
and ground to ground.


They tell you exactly what cable they use so if you want to know
shield effectivity or capacitance/ft....you can.


Except that they're only HALF shielded. The center conductor is shielded but
the shield is also the "return" half of the circuit (so it appears from their
description) and carries a signal so the shield is not "shielded". If you're
happy with them, fine, but I wouldn't use them in my system for that reason.

I don't understand what you mean by "half shielded". Even if only connected
at one end, the shield and ground return leads will still be electrically in
parallel, so anything sitting on the shield line will also be there on the
ground line. What would be the purpose of shielding the ground side of a
shielded interconnect?

  #84   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default You Tell 'Em, Arnie!

Scott wrote:
On Jul 12, 9:58?am, ScottW2 wrote:

I'd like to know the exact CDPs were tested. ?Were they current
generation DACs or is this a test of obsolete DAC technology?

they were not current for sure. These tests were done in service of
the production of his 1994 remasters of the Mercury classical catalog.
This particular anomly was discovered when he was testing the physical
product from various manufacturers. He discovered that with certain
CDPs the CDs from some manufaturers were quite less than transparent
compared to the masters. This observation was later confirmed in a
number of other tests conducted by other parties.


Again, why not publish the details of the CDPs, the test setup,
the stats of the results...this sort of thing would be slam-dunks
for 'subjectivists'. Or was Drake simply unaware of teh appalling
lack of evidence from that side, even while he decided to conduct
such a test?



--
-S
We have it in our power to begin the world over again - Thomas Paine

  #85   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 735
Default You Tell 'Em, Arnie!

wrote in message
...
Harry Lavo wrote:
wrote in message
...
Harry Lavo wrote:
wrote in message
snip

snip
I pointed out to you several times that in fact tests of statistical
"difference" are routinely applied to the scalars in using such tests.
For "preference" yes. Showing that the results of group A are or are
not
statistically different is not at all the same as a *difference* (i.e.
a/b
with other variables - such as participants - held constant) test with
sufficient replicates to evaluate the response statistically.


How can you say that? I was proposing two monadic samples of 300 people
each. Much more statistical;u reliable than seventeen sample individual
tests.


IF the focus of the test is the same. Musical enjoyment is NOT the same
focus as evaluating a (usually) just detectable difference.

In fact the JAES has published peer-reviewed articles that show that
at seventeen samples, even the statistical sampling guidelines commonly
used
are in error...slightly in favor of a "null" result.


Again, I'm not questioning population size. But you can sample a
thousand people and the resulting statistics are worthless if the test
is insensitive to the parameter of interest.


And I'm talking about perceiving differences in audio reproduction equipment
when reproducing music, as evaluated using ABX. I am DIRECTLY measuring
real differences in the base sample....differences perceived statistically
as different between the variable under test and its control, while
reproducing music. How much more "on paramenter" can you get? It is "on
paramenter", it is just not measured directly (a good thing....see below).




Thus the base tests for validation were large samples with proven
statistical differences in total and across attributes, across a chosen
and agreed upon population of audiophiles. All the ABX test had to do
was to show a similar set of overall differences among a fairly large
sameple of people, using 17 point samples, to validate. That seemed
too
threatening, I guess.
Threatening, no. Uninformative, certainly. You want to use *as a
reference* a test that a) has not been validated for difference
distinction and b) that focuses on another parameter altogether, i.e.
preference, and c) presents numerous confounding variables (
"...across
attributes..." as you stated above).


This is a test technique used widely and statistical significance is just
that....significant. Your charges are pure fantasy.


OK, where was your monadic or proto-monadic test previously used for
discrimination of low level or just noticeable audible differences?
Show where that's been validated or is routinely used. Ever?


This is irrelevant...if their is a statistical difference in the monadic
test, it can either be at threshold or above threshold...but that is
irrelevant as the fact will be that it is perceived (again, the statistical
evaluation says so). It is then the ABX test's job to show that the same
difference is perceived under ABX conditions.



You can't have
statistically significant difference in preference without there being a
difference.


Very true. And *what* is that difference? In your test you cannot
unambiguously attribute the preference to the physical difference in the
A/B systems. Why? Because you're using an indicator
(enjoyment/appreciation/satisfaction) that is not directly linked to the
parameter you're trying to measure. An indicator, as I pointed out
previously, that is clearly influenced by many factors outside of the
physical systems being evaluated.



You are so WRONG here. Any psychological researcher will tell your that an
indirect measurement is the best way, as it eliminates any chance that
focusing on the variable directly distorts the validity of the measurement.
This is perhaps one of the potentially most damaging arguments against ABX,
btw...in other words, focusing on difference (when it comes to appraising
musical reproduction) can actually get in the way of hearing differences as
might be perceived in normal, non-critical listening. Two different states
of consciousness.



snip

Looke Keith. People are listening to MUSIC. THEY are asked to rate the
experience. THEY are asked to RATE the reproduced music on a "liking"
scale
(that's how they express their "preference".) They are also asked to
rate
specific musical attributes (the sound of the violins, for example, or
the
sound of the tympani, or the sound of the guitars, etc.)


OK, and...

When you have two large samples, exposed to the same music, and with only
one variable changed (lets say the CD player use) if you get
statistically
significant differences in the ratings you KNOW that it is the variable
creating it. The ratings are very similar to those use in ABC/hr, and is
one of the reason it is prefered to ABC...it measures quality
differences,
not just differences.


Well, no statistician will ever say you "KNOW" something based on
statistical difference. You know, the old correlation/causation thing?
But Harry, you cannot change only ONE thing in your test. You changed
the CD player, AND the sample population. You're assuming homogeneity
in the sample population relative to a rather esoteric parameter (i.e.
musical enjoyment) that may or may not be correct.


Keith, there is a whole science developed among researchers to guide the
selection of random samples that are matched. Your argument is a
non-starter.



There IS *no direct preference* expressed in monadic testing....


Yes, I got that Harry.

the only way
a preference shows up is because of statistical sampling of two large
bases
or respondents.


Yes, I got that too Harry.

That is WHY I proposed this very expensive and cumbersome
test as the "touchstone" demonstrating that a real difference in
preference
exists. Isn't that after all what we as audiophiles hope to achieve in
our
own testing? And as an added benefit, it is able to give an indication
of
in what area of musical reproduction that rating preference exists.

Just like two years ago, you just keep raising the same old cannards and
in
the process show you really don't stop to understand the technique I am
expousing. If you doubt it, consult a real experiemental psychologist or
statistician.


Gratuitous ad hominem comment noted. Thanks Harry, that's really
showing how *you* "were not...met by constructive dialog but rather
repeated attempts to ridicule and disparage...". Just like two years
ago, you are making the same basic argument - if I don't agree with you,
I'm just ignorant.


I have reason for the comment (see below) but I agree I should not have made
it. I apologize.



snip

*Your* conclusion: ABX has been shown to be inaccurate, and is thus
discarded. No other conclusion is possible if the monadic preference
test
is accepted as a suitable standard.


First, 100 is too small a sample.


It's an "example", not a protocol.

I posited 300, which is generally
accepted as large enough to show small rating differences if they exist
due
to the variable under test.
Second, the test in both tests is for positive difference. There is no
sense in a statistical "no difference". The applied statistics are
different, but the concept is the same.


"Positive" difference?


That's how researchers often refer to a statistically significant attribute,
since one leg of the test will rate higher than the other.


The worst that can happen is that there is "no difference" (signicant at
the
95% level) in overall rating, but there are differences in attribute
ratings
(again at the 95%+ level). However these are still valuable, and show
that
their *are* perceived differences in sound atributes even if their is no
difference in overall ratings. So in this case, you conclude that their
are
audible differences.

If both the overall rating and the individual attribute ratings show no
difference at the 95% level, then you can conclude that in all liklihood
their is no difference due to the variable under test.


The variable under test has already been shown to be detectable in ABX -
that was stipulated in the test case. What that result shows is that
your preference test was insensitive to the difference that ABX testing
identified.


Again, you show a lack of understanding of what I proposed. The first step
is to find an equipment variable that DOES expose a difference in monadic
appreciation....THEN undertak ABX testing to see if it delivers the same
result. Not the othe way around. Your failure to understand the difference
is one of the reasons I made the comment above. The other is your
insistence that a statistical difference in ratings is somehow not "on
parameter" to measuring differences.



The point of this is to find a variable that does show up as a difference
in
monadic preference amongst a large group of audiophiles (or perhaps
sub-seqment of same) in a test that is equally double-blind and which is
evaluative and ratings based, relaxed, music-focused,


You can easily do that by having Population A listen to music through
$20 computer speakers, and Population B listen through Watt/Puppy
speakers. No difference in kind from your example of a CD player
change. Now, we'll identify many differences in the preference test, and
we'll be able (though with some logistical difficulties) to do an ABX
test on A and B, and X will indeed be easily identifiable. Your exact
criterion of "a variable that does show up as a difference in
monadic preference amongst a large group of audiophiles" was met, and
ABX confirmed it. Ok, so now ABX is validated right? If not, then why?


I'm talking of the more subtle types of differences that audiophiles often
feel exist and abx'rs routinely deny exist except in their heads.


See the problem Harry? You propose a very cumbersome test to try and
*find* some artifact that preference testing can identify that ABX can't
confirm. That's not validation, that's a fishing expedition. Since you
do not have any boundary limits for either test, you cannot test at or
near those boundaries, nor can you demonstrate that one test has
sufficient precision to challenge or validate the other. The speaker
example above is clearly a gross situation, but the concept is the same
since the boundaries are undefined. You can find any number of such
variables that your test confirms, and ABX then confirms, but you can
still postulate that there are "other" nuances or more subtle
differences that ABX would not be able to find. That will *always* be
the case until the boundaries are defined, meaning that you need never
accept any result as definitive.


I just spoke above of the criteria, as I have in the past. I am looking to
find a difference on a variable that "objectivists" believe not to exist.
Only once and if we find it can it then serve as a basis for the validation.
You are the one setting up the strawman example.


and which doesn't
require a forced choice.


Another canard that is not proven, Harry. No matter what the scalar is,
or how it's phrased, placing a rating on "how the violins sound" for
example is a forced, evaluative choice. I've yet to seen any evidence to
suggest that "A or B" is a more cognitively disruptive choice than "how
do the violins sound?" which requires internal comparisons and choices
between the current sound and your own internal 'reference' for violins.
And clearly, if your scalars for "sound attributes" are of sufficient
detail and specificity to allow you to conclude A and B are different
(as you claim above), when the overall scoring is not significantly
different, then evaluation and comparison choices are unavoidable.


Let me reiterate....one is an after-the-fact holistic rating agains that
perceived reality....which is exactly how most audiophiles make judgements
about the quality of their system. The other is a forced choice "in real
time" between snippets of sound (I know, I know, but the reality is this
test requires to and fro'ing to make any kind of choice.in real
time....seventeen or more in succession, in fact. Find me a dozen
psychological researchers who will claim that a direct forced choice is the
same as a monadic rating on a subjective scale, and I will cede the point.
That dozen simply don't exist (at leas if they got "A"'s in their
course-work).


Then to subject the same variable to the standard abx tests for
difference
and find out how well that test fares in spotting those same
differences


ABX can't spot "these same differences" because it only looks for
difference.


Yes, but first it has to spot the difference. And then in follow up, people
taking the test ought to be able to give some indication of what they
thought the difference was. Again, because we are "validating" the use of
the test as a useful tool for home evaluation of audio gear, as is so often
the matra here.


....how many people succeed, how many fail, how obvious does the
difference seem to show up. Only if ABX failed to detect the differences
in
any appreciable way would ABX be judged a failure.


Difference, singular. A and B are distinguishable or they are not. It's
a binary result. I don't know how "...in any appreciable way..." could
even apply to ABX.


The outlier argument. If thirty people do the test, and one or two succeed
but others do not, is it significant or not. Or if no one person's choices
prove significant, but the overall sample when lumped together does. Small
sample difference testing is not as simple as it is often made out to be.


That's why I call it a
validation test. If ABX is doing its job and is as good at evaluating
musical reproduction differences as it is in spotting distortion
artifacts
among trained listeners, then this test should be a piece of cake for
ABX


You're ignoring the test case altogether. Why? Why not answer the
question posed? In the test case described, there was already an ABX
verified difference. It was measurable, physically, and it did not show
up as a change in preference. In that case, the "reference standard" is
shown to be inappropriate. So, are you saying that the test case, as
presented (OK, let's modify it to include 300 individuals for your
test), could not happen?


Because in your "test case" you've got it bass-ackward, as I've already
pointed out.


.....especially as you keep insisting that the monadic test is less
sensitive than ABX.


For what it's designed for yes. And that is NOT for evaluating
preference.


It is less sensitive for the purpose it is designed for? Can you restate or
explain what you mean, please?


However, insisting that such a test not be done because it is not needed
("proved science") or somehow inappropriate is simply begging the
question.


Strawman Harry, I said nothing about "proved science". And I did not
say that validation was inappropriate, I pointed out why I believe your
proposed test is inappropriate for the intended use. Not the same thing
at all.


No you didn't, but other supporters of the ABX test have, many times. I
wasn't just talking about you....I am sorry if I didn't make that clear.



I propose that we see. That's a validation test.


If, as in the test case described, there are some number of instances
where ABX does exactly what it was designed for (as I understand the
reasoning), and detects very subtle or just noticeable artifacts that
distinguish systems A and B, but those artifacts do not positively or
adversely affect the music played through those systems (above the
limits of perception), then your test *cannot* confirm those
differences, and would thus be inappropriate. As in the case of
speakers discussed above, your test would "validate" ABX quite handily,
but what would that prove?


First you use a "strawman" test variable. Second you have the validation
bass-ackward, as I have pointed out. Let's focus on differences that "do"
affect perception of the musical reproduction, although very subtly. THAT
is an appropriate test case....the validation is to show that ABX also
deteects those differences among a population of similarly-chosen
audiophiles, and does not instead create an artificial "null" difference, as
audiophiles often claim it does. When used to find diffeeencs in musical
reproduction, not in distortion artifact or frequency response or volume
differences.


I have done just that above.

Well, not really.


I think really.

Keith Hughes


[ We are getting too close to personal attacks here. Please,
everyone in this thread, tone it down. Take a few breaths
before hitting Send and make sure you're arguing points
rather than people. -- dsr ]




  #86   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"Harry Lavo" wrote in message


It is interesting to note that my attempt to define how
such validation of ABX testing for evaluation of
differences in musical reproduction might be done, here
on RAHE a few years ago, the attempt wasn't met by
constructive dialog but rather repeated attempts to
ridicule and disparage (a) the idea of the validation
itself ("it wasn't needed...ABX was 'settled science' ")
and (b) the specific suggestions of test techniques and
sequences made by me (themselves used extensively in the
realm of food testing and psychological experimentation).


Harry, it is clear to many of the rest of us that there are many people in
the world who try to give their lives purpose by:

(1) Finding a situation that may or may not even exist and that only they
and perhaps a few other people even perceive to be a problem.

(2) Trying to promote some expensive and unwieldy method for purportedly
solving the purported "problem".

Good examples of such a thing would be the SACD and DVD-A formats that
followed this model quite exactly.

(1) Promoters of DVD-A and SACD alleged the existence of sound quality
problems with the Audio CD format that not even they could demonstrate by
conventional means other than the well-known and totally invalid methodology
of sighted or single blind evaluation.

(2) They spent actual millions if not 100's of millions of dollars invented
new recorders and players based on their new technology, and additional
equal or greater amounts of money recording and re-recording existing
recordings in the new format.

To this day there is no conventionally-obtained evidence that shows that the
new formats had any inherent audible benefits at all, the products never
were accepted in the mainstream, and many of the record company executives
that bet their careers on the new formats lost their jobs.

This despite the fact that the validation techniques I
was proposing were to some degree incorporated within
ABC/hr testing, considered even by the double-blind
enthusiasts as superior to ABX for evaluation of music.


This misstates the difference between ABX and ABC/hr testing. ABX is to this
day the best known generally used methodology for determining that audio
products even sound different. ABC/hr is a methodology for rating audio
products in terms of their degradation of the audio signal. Applying the
ABC/hr methodology to products that don't even sound different in ABX
testing is a waste of time.

I'm afraid I agree with Mr. Finsky, attempts at
constructive dialog on this subject go nowhere.


Actually they do in contexts where people are required to do more than
pontificate when they suggest that there problems with generally accepted
science as related to audio. One such place is called the Hydrogen Audio
Forum, and I heartily suggest Harry that you try to sell your ideas there.
Google is your friend!

  #87   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default You Tell 'Em, Arnie!

Scott wrote:

Nope. Straw man. ?And you should know better.


here are your words from this thread. "Audiophiles routinely claim
audible difference among classes of devices whose typical measured
performance does not predict audible difference -- CDPs and cables,
for example. (assuming level-matching for output devices, of course)."
You might want to check thse things before crying strawman. (note for
moderator: I am leaving all quotes in tact for sake of showing that
these were Steve's words in context)



What does the word 'typical' mean to you, Scott? Does it mean 'all'?

Please now and forever stop claiming the me, Arny, or any of the
other people you argue with about this over and over, claim that
*All* (X) sound *the same*. Thanks.


You might want to look up 'photoelectric effect', for example, before
you attempt such arguments, much less claim that 'physicists',
wholesale, had 'concluded they they had pretty much figured out everything
there was to figure out with Newtonian physics'.


So that adds up to "many" puzzling things? I think you are grasping at
straws here.



I think you need to review the history of 20th C physics. You're out
of your depth.





--
-S
We have it in our power to begin the world over again - Thomas Paine

  #88   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 12, 3:55 pm, ScottW2 wrote:


Players as well as digital audio tools in mixing/mastering have
certainly advanced in 15 years but the caveat that "certain" CDPs and
CDs from "some manufacturers" doesn't sound like condemnation of the
format with what was available 1.5 decades ago.

ScottW


It wasn't meant as condemnation. It was a correction of Steve's
assertion that "Audiophiles routinely claim audible difference among
classes of devices whose typical measured performance does not predict
audible difference -- CDPs and cables, for example. (assuming level-
matching for output devices, of course).

  #89   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 12, 3:54*pm, "Arny Krueger" wrote:
"Scott" wrote in message



they were not current for sure. These tests were done in
service of the production of his 1994 remasters of the
Mercury classical catalog. This particular anomly was
discovered when he was testing the physical product from
various manufacturers. He discovered that with certain
CDPs the CDs from some manufaturers were quite less than
transparent compared to the masters. This observation was
later confirmed in a number of other tests conducted by
other parties.


There is an extant AES paper that was presented by Dennis Drake in 1992
presented in its entirety at this URL:

http://www.themusiclab.net/aespaper.pdf

It says:

"As Mrs. Cozart Fine and I began our evaluation sessions in April 1989, it
became
very clear to us that the A/D conversion process was a very critical step in
our production
work. As the producer once described it, the sounds from different
converters were
all different "bowls of soup". We began auditioning every A/D converter that
we could
obtain. Our test methodology was simple: while playing an original master as
source, we
would switch between the direct output of our console and the output of the
digital
chain. The digital chain consisted of the converter under test feeding a
Sony 1630 PCM
Processor. The final link in the chain was the Apogee filter modified D/A
section of the
Sony 1630. At times, we would substitute different D/A converters for
listening evaluations,
but we always returned to the Sony converters or the D/A's of our Panasonic
3500
DAT machine for reference purposes.

"Our monitoring set-up consisted of a Cello Audio Suite feeding balanced
lines to
Cello Performance Amplifiers, which in turn were driving B & W 808 Monitor
Loudspeakers. As we compared the various digital converters to the playback
of the
actual analog source, we found that the soundstage of the orchestra was
always
reduced in width when listening to the digital chain. We also found that
many A/D converters
exhibited a strident string sound, unnatural sounding midrange, and a loss
of air
or ambience around the instruments"

This above formal presentation of the relevant so-called Denni Drake tests
includes many details that are different from what we have seen presented on
RAHE. For one thing, the evaluation was not of CD players, and for another,
there is no evidence of level matching, time synching, or bias controls.


You skipped the relevant part of the paper. Jeez.

"Upon further investigation, it turned out that the plant had three
different laser
beam recorders and that one of them sounded different than the other
two. After making
a glass master of the “Balalaika Favorites” on all three LBR’s and
comparing the subsequent
CD test discs from each, we were definitely able to identify the
“thinner sounding”
lathe. From the information given to us by the plant engineers,
apparently this lathe was
configured with different front end electronics."

In an exchange of emails Dennis told me that this particular sonic
defect was CDP dependent. It was in those emails that he gave details
of level matching, time synching and DB protocols.

  #90   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 12, 8:04*pm, Steven Sullivan wrote:

Again, why not publish the details of the CDPs, the test setup,
the stats of the results...this sort of thing would be slam-dunks
for 'subjectivists'. *Or was Drake simply unaware of teh appalling
lack of evidence from that side, even while he decided to conduct
such a test?



Why don't you ask Dennis Drake? He was very kind in discussing these
things with me via email. Slam dunks? It's a hobby not a basketball
game.



  #91   Report Post  
Posted to rec.audio.high-end
Sonnova Sonnova is offline
external usenet poster
 
Posts: 1,337
Default You Tell 'Em, Arnie!

On Sun, 12 Jul 2009 20:03:57 -0700, John Stone wrote
(in article ):

On 7/11/09 11:34 PM, in article , "Sonnova"
wrote:



Not sure what you mean by "quasi-balanced". All my interfaces are
se.


Quasi-balanced - Most premium interconnects are made this way. In a quasi
balanced coaxial interconnects, there are three conductors: The shield, and
two wires inside the shield. One of the wires is "hot", the other is
"return"
and then there's the shield. The "hot" conductor is affixed to both RCA
connector tips, the return is affixed to both RCA connector barrels
(completing the circuit) and the shield is connected to the barrel on only
one end. It covers the whole cable from end to end but since it is connected
at only one end, it carries NO SIGNAL and is a shield only. Cheap
interconnects are only half shielded because the shield is also the return.
One can easily tell a Quasi-balanced interconnect because there is always
arrows on either the connectors, printed at intervals on the cable itself,
or
on labels affixed to the cable. The arrows point AWAY from the end that has
the shield connected to the barrel of the RCA plug. Proper proceedure is to
use the pre-amp or the integrated amp/receiver as the common ground plane so
that all arrows on all cables point away from that component. Most people
wrongly interpret the arrow to mean signal direction and usually arrange
these cables with the arrow pointing in the direction of signal flow. I.E.
FROM the tuner or CD player TO the preamp, FROM the pre-amp TO the power
amp,
etc. This is wrong the arrows should point away from the pre-amp/
integrated/receiver toward everything else. So there is one common reference
point for all of the shields. THis arrangement is best for the lowest noise
and prevents ground-loops. I also ground my preamp to a cold water pipe, but
this is optional, especially if your listening room has three prong mains
plugs where one is grounded.



The term "quasi balanced" is very misleading. There's nothing more balanced
about such a configuration than there is with a set of single conductor
shielded interconnects.


Well of course not. The term comes from the obvious association with a real
balanced interconnect where the hot and the return and the shield are all
separate circuits. "Quasi" refers, just as obviously to the dictionary
definition of the word meaning "apparently but not really". It means what it
says: Not really balanced. But nonetheless, in a "quasi-balanced" audio
interconnect the shield carries no signal and is just a shield. Sort of like
extending the chassis out to the peripheral components.
  #92   Report Post  
Posted to rec.audio.high-end
Ear Plug Ear Plug is offline
external usenet poster
 
Posts: 1
Default You Tell 'Em, Arnie!

On 10 jul, 06:45, Ed Seedhouse wrote:
On Jul 9, 8:06*pm, Scott wrote:



On Jul 9, 11:59 am, ScottW2 wrote:
Well then I expect soon we will read a newspaper story about how the
JREF foundation has given you a million dollars for proving that you
can hear such differences under blind conditions. *Such a test should
be trivial for you to pass and surely you would not turn down an easy
million dollars?


That's an article that will never be written. JREF are basically
running a shell game with their so called challenge. Any real
demonstration of cables having different sound will ultimately be
disqualified since the cause of such a difference will be within the
laws of physics.


As it should be as most exotic cable manufacturers make claims of
magical properties outside the laws of physics.


the question isn't claims by manufacturers.


It is to me and it quite obviously is to the JREF challenge.


If so then why are they bothering reviewers? Why not make the
challenge to the cable manufacturers. maybe because it is silly to
challenge advertsing copy which is abundant in hyperbole and vague
assertions that are pretty much unchallengable? I guess the real
question is why on earth would you concern yourself over ad copy in a
world where it is silly to take any advertisement at face value. Ads
are sales pitches not documentaries.


A little google searching comes up withhttp://www.randi.org/jr/2007-09/092807reply.html
which shows you what he actually said, and that he actually challenged
Pear Audio's claims directly. *You'll need to scroll down the page to
the headline: MORE CABLE NONSENSE.


One advice to all "cable experts" on this forum.
The best way, to listen you the music is;
OF COURSE YOU MUST USE ORDINARY ZIP or LAMP-CORD...(in combination of
ear plugs)

To be feel good, and thinking you are right, read a lot of CABLE
NONSENSE.

BYE EVERYONE,
Have a good night, sleep well and enjoy the sound of your system.


  #93   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"Harry Lavo" wrote in message


And I'm talking about perceiving differences in audio
reproduction equipment when reproducing music, as
evaluated using ABX.


ABX is known to work very well.

Where's the beef?

This is irrelevant...if their is a statistical difference
in the monadic test, it can either be at threshold or
above threshold...but that is irrelevant as the fact will
be that it is perceived (again, the statistical
evaluation says so). It is then the ABX test's job to
show that the same difference is perceived under ABX
conditions.


In numerous circumstances, audible differences that had been ascertained by
other scientific methods have been confirmed by ABX tests. I know of no
example where ABX failed. The only area of controversy with ABX and other
widely used scientific testing methods relates to audiophiles and audiophile
merchandisers who repeatedly fail to confirm the results they find with
sighted evaluations. This seems to be very easy to explain without impugning
ABX or any of the other scientific testing methodologies.



  #94   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 13, 4:15Â*am, Steven Sullivan wrote:
Scott wrote:

Nope. Straw man. ?And you should know better.

here are your words from this thread. "Audiophiles routinely claim
audible difference among classes of devices whose typical measured
performance does not predict audible difference -- CDPs and cables,
for example. (assuming level-matching for output devices, of course)."
You might want to check thse things before crying strawman. (note for
moderator: I am leaving all quotes in tact for sake of showing that
these were Steve's words in context)


What does the word 'typical' mean to you, Scott? Does it mean 'all'?




Main Entry:typ·i·cal
Pronunciation:\ˈti-pi-kəl\
Function:adjective
Etymology:Late Latin typicalis, from typicus, from Greek typikos, from
typos model €” more at type
Date:1609
1: constituting or having the nature of a type : symbolic
2 a: combining or exhibiting the essential characteristics of a group
typical suburban houses b: conforming to a type a specimen typical
of the species



Please now and forever stop claiming the me, Arny, or any of the
other people you argue with about this over and over, claim that
*All* (X) sound *the same*. Thanks.



How can I stop something I am not doing? What does the word
"standard"
mean to you Steve? Is it something radically different than typical?
After all this is what I said;
"You seem to have been claiming that standard meausurements predict
that all CDPs sound the same"
Your words once again....
"Audiophiles routinely claim audible difference among classes of
devices whose typical measured performance does not predict audible
difference -- CDPs and cables, for example. (assuming level-matching
for output devices, of course). "
So what are you saying now Steve that you were not suggesting that
audiophiles were and always have been wrong in their reports about
audible differences between CDPs? Sure looks like that was what you
were saying. And when I pointed out that this wasn't always the case
and has been demonstrated with DBTs no less you went into a tail spin
begging for details and claiming this would be a "slam dunk" for
subjectivists were it true. So you weren't arguing that CDPs all sound
the same despite audiophile anecdotes? What was your point then? That
audiophiles routinely report differences that are not predicted by
"typical" measured performance and sometimes they are right?! Fine, if
that is your point I agree with you.



You might want to look up 'photoelectric effect', for example, before
you attempt such arguments, much less claim that 'physicists',
wholesale, had 'concluded they they had pretty much figured out everything
there was to figure out with Newtonian physics'.

So that adds up to "many" puzzling things? I think you are grasping at
straws here.


I think you need to review the history of 20th C physics. You're out
of your depth.



I think you do as well. So what? What does are mutual disresepct for
the other's off hand knowledge on the history of quantum physics have
to do with my point? the point which for some reason you decided to
snip. Here I'll repeat it so we can try to stay on topic.Many things
derived from quantum physics would have seemed like magic 150 years or
so ago
and would have actually met the Randi challenge. Do you disagree?

  #95   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"Scott" wrote in message


I said;
"You seem to have been claiming that standard
measurements predict that all CDPs sound the same"


There are goodly number of CD players, whether by design or due to partial
failure, produce signals that are so degraded that they will even sound
different.

"Audiophiles routinely claim audible difference among
classes of devices whose typical measured performance
does not predict audible difference -- CDPs and cables,
for example. (assuming level-matching for output devices,
of course). "


Agreed.

Furthermore, audiophiles routinely claim audible superiority for equipment
that has audible faults, some of which even they admit that they hear.

So what are you saying now Steve that you were not
suggesting that audiophiles were and always have been
wrong in their reports about audible differences between
CDPs?


Sometimes they are right, and sometimes they are wrong. They have been found
wrong when their claims are checked out by scientific means, whether test
equipment or well-run listening tests. Their comments are so frequently
inconclusive because of the grotesquely flawed means that they generally use
in their evaluations.





  #96   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 13, 11:32*am, ScottW2 wrote:
On Jul 12, 8:04*pm, Steven Sullivan wrote:





Scott wrote:
On Jul 12, 9:58?am, ScottW2 wrote:


I'd like to know the exact CDPs were tested. ?Were they current
generation DACs or is this a test of obsolete DAC technology?


*they were not current for sure. These tests were done in service of
the production of his 1994 remasters of the Mercury classical catalog.
This particular anomly was discovered when he was testing the physical
product from various manufacturers. He discovered that with certain
CDPs the CDs from some manufaturers were quite less than transparent
compared to the masters. This observation was later confirmed in a
number of other tests conducted by other parties.


Again, why not publish the details of the CDPs, the test setup,
the stats of the results...this sort of thing would be slam-dunks
for 'subjectivists'. *Or was Drake simply unaware of teh appalling
lack of evidence from that side, even while he decided to conduct
such a test?


*Much ado about nothing but a defective laser front end as far as I
can tell.



How can you tell there is a defect? what specifically was defective
with what device?

  #97   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"Scott" wrote in message

On Jul 12, 3:54 pm, "Arny Krueger"
wrote:
"Scott" wrote in message



they were not current for sure. These tests were done in
service of the production of his 1994 remasters of the
Mercury classical catalog. This particular anomly was
discovered when he was testing the physical product from
various manufacturers. He discovered that with certain
CDPs the CDs from some manufaturers were quite less than
transparent compared to the masters. This observation
was later confirmed in a number of other tests
conducted by other parties.


There is an extant AES paper that was presented by
Dennis Drake in 1992 presented in its entirety at this
URL:

http://www.themusiclab.net/aespaper.pdf

It says:

"As Mrs. Cozart Fine and I began our evaluation sessions
in April 1989, it became
very clear to us that the A/D conversion process was a
very critical step in our production
work. As the producer once described it, the sounds from
different converters were
all different "bowls of soup". We began auditioning
every A/D converter that we could
obtain. Our test methodology was simple: while playing
an original master as source, we
would switch between the direct output of our console
and the output of the digital
chain. The digital chain consisted of the converter
under test feeding a Sony 1630 PCM
Processor. The final link in the chain was the Apogee
filter modified D/A section of the
Sony 1630. At times, we would substitute different D/A
converters for listening evaluations,
but we always returned to the Sony converters or the
D/A's of our Panasonic 3500
DAT machine for reference purposes.

"Our monitoring set-up consisted of a Cello Audio Suite
feeding balanced lines to
Cello Performance Amplifiers, which in turn were driving
B & W 808 Monitor Loudspeakers. As we compared the
various digital converters to the playback of the
actual analog source, we found that the soundstage of
the orchestra was always
reduced in width when listening to the digital chain. We
also found that many A/D converters
exhibited a strident string sound, unnatural sounding
midrange, and a loss of air
or ambience around the instruments"


This above formal presentation of the relevant so-called
Denni Drake tests includes many details that are
different from what we have seen presented on RAHE. For
one thing, the evaluation was not of CD players, and for
another, there is no evidence of level matching, time
synching, or bias controls.


You skipped the relevant part of the paper. Jeez.


I still see no such thing.

"Upon further investigation, it turned out that the plant
had three different laser
beam recorders and that one of them sounded different
than the other two. After making
a glass master of the “Balalaika Favorites” on all three
LBR’s and comparing the subsequent
CD test discs from each, we were definitely able to
identify the “thinner sounding”
lathe. From the information given to us by the plant
engineers, apparently this lathe was
configured with different front end electronics."


Is there a reason why any relevant references to double blind testing seem
to be missing from your quote, Scott?

I'm not talking about hearsay or anecdotes, I'm talking about a primary
source.

  #98   Report Post  
Posted to rec.audio.high-end
Norman Schwartz Norman Schwartz is offline
external usenet poster
 
Posts: 28
Default You Tell 'Em, Arnie!

On Jul 13, 10:08*am, "Arny Krueger" wrote:
"Harry Lavo" wrote in message



And I'm talking about perceiving differences in audio
reproduction equipment when reproducing music, as
evaluated using ABX.


ABX is known to work very well.

Where's the beef?


Some listeners, including myself, feel that a period of longer term
listening (at least several hours) is required to reveal itself. E.g.,
could it possibly be that certain distortion characteristics are not
apparent nor find oportunity to 'grate' during instantaneous type
comparisons?
  #99   Report Post  
Posted to rec.audio.high-end
Sonnova Sonnova is offline
external usenet poster
 
Posts: 1,337
Default You Tell 'Em, Arnie!

On Mon, 13 Jul 2009 04:15:28 -0700, Arny Krueger wrote
(in article ):

"Harry Lavo" wrote in message


It is interesting to note that my attempt to define how
such validation of ABX testing for evaluation of
differences in musical reproduction might be done, here
on RAHE a few years ago, the attempt wasn't met by
constructive dialog but rather repeated attempts to
ridicule and disparage (a) the idea of the validation
itself ("it wasn't needed...ABX was 'settled science' ")
and (b) the specific suggestions of test techniques and
sequences made by me (themselves used extensively in the
realm of food testing and psychological experimentation).


Harry, it is clear to many of the rest of us that there are many people in
the world who try to give their lives purpose by:

(1) Finding a situation that may or may not even exist and that only they
and perhaps a few other people even perceive to be a problem.

(2) Trying to promote some expensive and unwieldy method for purportedly
solving the purported "problem".

Good examples of such a thing would be the SACD and DVD-A formats that
followed this model quite exactly.

(1) Promoters of DVD-A and SACD alleged the existence of sound quality
problems with the Audio CD format that not even they could demonstrate by
conventional means other than the well-known and totally invalid methodology
of sighted or single blind evaluation.


Have you ever done a DBT test between a RedBook CD of a particular title and,
say, a high-resolution download (24/96 or 24/192) of the same title? I have.
They're different. And the differences aren't subtle. The high-resolution
download wins over the CD every time (so far).

(2) They spent actual millions if not 100's of millions of dollars invented
new recorders and players based on their new technology, and additional
equal or greater amounts of money recording and re-recording existing
recordings in the new format.

To this day there is no conventionally-obtained evidence that shows that the
new formats had any inherent audible benefits at all, the products never
were accepted in the mainstream, and many of the record company executives
that bet their careers on the new formats lost their jobs.


That's simply not true, Arny. High resolution recordings in either PCM or DSD
sound significantly better than RedBook CD and carefully set-up DBT testing
has demonstrated that to my satisfaction (Levels matched as closely as
instrumentation will allow, time sync'd between, for instance, two identical
players one playing the SACD layer and the other playing the RedBook layer
or, one of my own recordings is played back from my master which level
matched and sync'd to a CD burned from that master using Logic Studio or
Cubase 4).
  #100   Report Post  
Posted to rec.audio.high-end
vlad vlad is offline
external usenet poster
 
Posts: 131
Default You Tell 'Em, Arnie!

Harry,

first of all, ABX does not need validation, it is a tool and you use
it when you need it. When I need to put nail in a wall I use hummer
without worrying if it is validated for this task or not.

Your 'monadic' test has one huge flow - it brings a lot more
variables into the test that are not under your control but they
definitely influence outcome of the test.

For instance, if the test takes more then one day then temperature
and humidity of the air will definitely affect physical abilities and
mood of your subjects and become parameters of the test too. Even the
weather itself becomes a parameter. In sunny weather people react to
the music differently then in a cloudy weather.The quality of the food
from day to day (you know, some of them can have problems with
indigestion) can become a real issue too. I am sure there are many
other things that are not under your control.

Another thing is that you have no control over random guesses vs.
real recognition. But I better not to open this can of warms :-)

my $.02 worth

vlad


  #101   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 13, 4:20 pm, ScottW2 wrote:
On Jul 13, 4:16 am, Scott wrote:


In an exchange of emails Dennis told me that this particular sonic
defect was CDP dependent. It was in those emails that he gave details
of level matching, time synching and DB protocols.


This sounds like a test of CDPs ability to handle defective CDs with
high read error rates.


Again with the defects assertion. What was defective? How were the CDs
defective? An error in the pressing? How does that happen? How does
this play "better" on one player and not another? Dennis indicated
that on some CDPs the so called 'defective" discs played perfectly.
How can a defective disc ever play perfectly?

  #102   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 13, 4:19*pm, "Arny Krueger" wrote:
"Scott" wrote in message



You skipped the relevant part of the paper. Jeez.


I still see no such thing.


And yet there it is right here below. After making a glass master of
the “Balalaika Favorites” on all three LBR’s and comparing the
subsequent CD test discs from each, we were definitely able to
identify the “thinner sounding”


"Upon further investigation, it turned out that the plant
had three different laser
beam recorders and that one of them sounded different
than the other two. After making
a glass master of the “Balalaika Favorites” on all three
LBR’s and comparing the subsequent
CD test discs from each, we were definitely able to
identify the “thinner sounding”
lathe. From the information given to us by the plant
engineers, apparently this lathe was
configured with different front end electronics."


Is there a reason why any relevant references to double blind testing seem
to be missing from your quote, Scott?


Obviously Dennis didn't include them.



I'm not talking about hearsay or anecdotes, I'm talking about a primary
source


That would Dennis Drake. The person I got the information from. If you
don't believe me feel free to contact Dennis. Heck I may have
completely misunderstood him. Go straigh to the source if you don't
believe me. He's a very nice guy. I'm sure he will answer any of your
questions just as he did for me provided you are polite. If I am
mistaken then I am mistaken.


  #103   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default You Tell 'Em, Arnie!

Norman Schwartz wrote:
On Jul 13, 10:08?am, "Arny Krueger" wrote:
"Harry Lavo" wrote in message



And I'm talking about perceiving differences in audio
reproduction equipment when reproducing music, as
evaluated using ABX.


ABX is known to work very well.

Where's the beef?


Some listeners, including myself, feel that a period of longer term
listening (at least several hours) is required to reveal itself. E.g.,
could it possibly be that certain distortion characteristics are not
apparent nor find oportunity to 'grate' during instantaneous type
comparisons?


THis must have been noted dozens of times by now in the history of RAHE, but:

ABX does not preclude longer-term listening. The sounds being compared can last as long as you
like (though there are good reasons to favor short samples).

It's the switching itself that should be made 'instantaneous' if possible...the interval
of dead air between A and B (and X).






--
-S
We have it in our power to begin the world over again - Thomas Paine

  #104   Report Post  
Posted to rec.audio.high-end
Sonnova Sonnova is offline
external usenet poster
 
Posts: 1,337
Default You Tell 'Em, Arnie!

On Mon, 13 Jul 2009 16:19:57 -0700, Norman Schwartz wrote
(in article ):

On Jul 13, 10:08*am, "Arny Krueger" wrote:
"Harry Lavo" wrote in message



And I'm talking about perceiving differences in audio
reproduction equipment when reproducing music, as
evaluated using ABX.


ABX is known to work very well.

Where's the beef?


Some listeners, including myself, feel that a period of longer term
listening (at least several hours) is required to reveal itself. E.g.,
could it possibly be that certain distortion characteristics are not
apparent nor find oportunity to 'grate' during instantaneous type
comparisons?


I think this is flawed thinking. If there is a difference between the sound
of two components, ABX or any suitable DBT will illuminate the differences on
direct comparison, immediatly. Any differences that require long term
listening to uncover, either are too miniscule to make any substantive
difference in any listening experience or they are imaginary. That's been my
experience.

  #105   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 13, 4:20*pm, ScottW2 wrote:
On Jul 13, 2:56*pm, Scott wrote:





On Jul 13, 11:32*am, ScottW2 wrote:


On Jul 12, 8:04*pm, Steven Sullivan wrote:


Scott wrote:
On Jul 12, 9:58?am, ScottW2 wrote:


I'd like to know the exact CDPs were tested. ?Were they current
generation DACs or is this a test of obsolete DAC technology?


*they were not current for sure. These tests were done in service of
the production of his 1994 remasters of the Mercury classical catalog.
This particular anomly was discovered when he was testing the physical
product from various manufacturers. He discovered that with certain
CDPs the CDs from some manufaturers were quite less than transparent
compared to the masters. This observation was later confirmed in a
number of other tests conducted by other parties.


Again, why not publish the details of the CDPs, the test setup,
the stats of the results...this sort of thing would be slam-dunks
for 'subjectivists'. *Or was Drake simply unaware of teh appalling
lack of evidence from that side, even while he decided to conduct
such a test?


*Much ado about nothing but a defective laser front end as far as I
can tell.


How can you tell there is a defect?


*"Upon further investigation, it turned out that the plant had three
different laser beam recorders and that one of them sounded different
than the other
two. "

I think it's reasonable to conclude that one of them was defective in
some manner and the probable cause of sound difference was excessive
read errors or perhaps all players had excessive read errors, but some
had better error correction than others. * I'm still trying to
understand what this experience has to do with CD audio recording or
reproduction sound.

what specifically was defective
with what device?


*I don't know, do you?

I have no idea either. But I didn't make the assertion, you did. I
figured you might have a reason. I figured you might have known what
was defective when you claimed it was a simple case of a defect as far
as you could tell.



  #106   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 13, 11:32*am, "Arny Krueger" wrote:
"Scott" wrote in message



I said;
"You seem to have been claiming that standard
measurements predict that all CDPs sound the same"


There are goodly number of CD players, whether by design or due to partial
failure, produce signals that are so degraded that they will even sound
different.


So they don't all sound the same. No argument there. I have heard
differences. Heck it was the common claim that there were no
differences that lead me to buy an inferior product the first time
out. Oh well. Lesson learned. Don't pay attention to nonsense like
"Audiophiles routinely claim audible difference among classes of
devices whose typical measured performance does not predict audible
difference -- CDPs and cables, for example. (assuming level-matching
for output devices, of course). " Clearly alleged typical meausred
performance" doesn't tell us jack about any given product's actual
sound.




"Audiophiles routinely claim audible difference among
classes of devices whose typical measured performance
does not predict audible difference -- CDPs and cables,
for example. (assuming level-matching for output devices,
of course). "


Agreed.



OK........




Furthermore, audiophiles routinely claim audible superiority for equipment
that has audible faults, some of which even they admit that they hear.


One person's 'fault" is another person's virtue. Depends on your
aesthetic priorities, goals and references. Ultimately that which is
"superior" is entirely subjective when talking about the aesthetic
values of our human perceptions.



So what are you saying now Steve that you were not
suggesting that audiophiles were and always have been
wrong in their reports about audible differences between
CDPs?


Sometimes they are right, and sometimes they are wrong.



No argument there.



They have been found
wrong when their claims are checked out by scientific means, whether test
equipment or well-run listening tests.



"Scientific menas?" If so then ceetainly you can cite the peer
reviewed published data. I mean if you are talking about legitimate
science this time and not just waving the science flag with no
substance in support.


Their comments are so frequently
inconclusive because of the grotesquely flawed means that they generally use
in their evaluations.


In your unsupported opinion.

  #107   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 735
Default You Tell 'Em, Arnie!

"vlad" wrote in message
...
Harry,

first of all, ABX does not need validation, it is a tool and you use
it when you need it. When I need to put nail in a wall I use hummer
without worrying if it is validated for this task or not.

Your 'monadic' test has one huge flow - it brings a lot more
variables into the test that are not under your control but they
definitely influence outcome of the test.

For instance, if the test takes more then one day then temperature
and humidity of the air will definitely affect physical abilities and
mood of your subjects and become parameters of the test too. Even the
weather itself becomes a parameter. In sunny weather people react to
the music differently then in a cloudy weather.The quality of the food
from day to day (you know, some of them can have problems with
indigestion) can become a real issue too. I am sure there are many
other things that are not under your control.

Another thing is that you have no control over random guesses vs.
real recognition. But I better not to open this can of warms :-)

my $.02 worth

vlad


I appreciate your concern, Vlad, but it is misplaced.

Research designers try to anticipate and take into account significant
possible intervening variables. In such a monadic test, no doubt the
variable would be changed from one session to the next so that any point in
time, the sampling would be roughly 50-50. Musical segments would be
rotated within samples so there is no order bias, etc. etc.

When you have a large sameple size and randomly chosen and matched samples,
you don't worry about a few random guesses. The fact is, their is a very
well developed set of statistical operations that take into account the
"degree" of difference between the ratings of the two samples. A different
standard applies depending on the number of scaler points used, whether the
scalars are symetrical or not, etc etc. And the significance level is
determined by sample size and the shape of the distribution curves as it
determines standard deviation and standard error.

And if you really want to worry about random guesses, worry about an ABX
test where a change in one sample can determine whether or not the test is
judged signicant, and there are NO controls against random guessing to
create a virtually guaranteed "null" effect.



  #108   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 735
Default You Tell 'Em, Arnie!

"Steven Sullivan" wrote in message
...
Norman Schwartz wrote:
On Jul 13, 10:08?am, "Arny Krueger" wrote:
"Harry Lavo" wrote in message



And I'm talking about perceiving differences in audio
reproduction equipment when reproducing music, as
evaluated using ABX.

ABX is known to work very well.

Where's the beef?


Some listeners, including myself, feel that a period of longer term
listening (at least several hours) is required to reveal itself. E.g.,
could it possibly be that certain distortion characteristics are not
apparent nor find oportunity to 'grate' during instantaneous type
comparisons?


THis must have been noted dozens of times by now in the history of RAHE,
but:

ABX does not preclude longer-term listening. The sounds being compared can
last as long as you
like (though there are good reasons to favor short samples).

It's the switching itself that should be made 'instantaneous' if
possible...the interval
of dead air between A and B (and X).


The need to make a forced choice, and to do seventeen trials, almost
universally leads to short snippets. It may be human nature, but it is a
real effect.

  #109   Report Post  
Posted to rec.audio.high-end
bob bob is offline
external usenet poster
 
Posts: 670
Default You Tell 'Em, Arnie!

On Jul 13, 7:19*pm, Norman Schwartz wrote:

Some listeners, including myself, feel that a period of longer term
listening (at least several hours) is required to reveal itself. E.g.,
could it possibly be that certain distortion characteristics are not
apparent nor find oportunity to 'grate' during instantaneous type
comparisons?


Anything is possible, However, all of the evidence we have says that
just the opposite is true—that any distortion becomes harder to detect
as the time interval between hearing the distorted and undistorted
signals increases. "Longer-term listening" will decrease your
sensitivity to differences, not increase it.

bob
  #110   Report Post  
Posted to rec.audio.high-end
Sonnova Sonnova is offline
external usenet poster
 
Posts: 1,337
Default You Tell 'Em, Arnie!

On Mon, 13 Jul 2009 19:34:17 -0700, Scott wrote
(in article ):

On Jul 13, 11:32*am, "Arny Krueger" wrote:
"Scott" wrote in message



I said;
"You seem to have been claiming that standard
measurements predict that all CDPs sound the same"


There are goodly number of CD players, whether by design or due to partial
failure, produce signals that are so degraded that they will even sound
different.


So they don't all sound the same. No argument there. I have heard
differences. Heck it was the common claim that there were no
differences that lead me to buy an inferior product the first time
out. Oh well. Lesson learned. Don't pay attention to nonsense like
"Audiophiles routinely claim audible difference among classes of
devices whose typical measured performance does not predict audible
difference -- CDPs and cables, for example. (assuming level-matching
for output devices, of course). " Clearly alleged typical meausred
performance" doesn't tell us jack about any given product's actual
sound.


True for active devices like CDPs, false for passive conductors like
Interconnects and cables. There is simply NO way a properly made cable or
interconnect can have a "sound". If it does its because the manufacturer
PURPOSELY added components to those cables to alter their frequency response
and that sound is subtracting fidelity from the music being played, not
adding fidelity to it. I.E. If a cable or interconnect changes the sound of
one's system, it is NOT in a good way. At any rate who wants to spend
hundreds of dollars for a set of "fixed" tone controls?



  #111   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"Norman Schwartz" wrote in message

On Jul 13, 10:08 am, "Arny Krueger"
wrote:
"Harry Lavo" wrote in message



And I'm talking about perceiving differences in audio
reproduction equipment when reproducing music, as
evaluated using ABX.


ABX is known to work very well.

Where's the beef?


Some listeners, including myself, feel that a period of
longer term listening (at least several hours) is
required to reveal itself.


Not a problem with ABX.

E.g., could it possibly be
that certain distortion characteristics are not apparent
nor find oportunity to 'grate' during instantaneous type
comparisons?


I know and agree with exactly what you seem to be saying. That's one reason
why there is no inherent time limit in ABX testing, or the listener training
and recording selection leading up to it. It's one reason why ABX
Comparators are designed to be self-administered.


  #112   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"Scott" wrote in message

On Jul 13, 11:32 am, "Arny Krueger"
wrote:
"Scott" wrote in message



I said;
"You seem to have been claiming that standard
measurements predict that all CDPs sound the same"


There are goodly number of CD players, whether by design
or due to partial failure, produce signals that are so
degraded that they will even sound different.


So they don't all sound the same.


Right, the defective ones either sound different or are so defective that
they don't make a signal at all.

No argument there.


Well, its common sense that broken things don't work right, and working
right for a CD player means sounding exactly like every other CD player that
is working right, all other things being equal which they frequently aren't.

I have heard differences.


Without more reliable data, that means nothing.

Heck it was the common claim that
there were no differences that lead me to buy an inferior
product the first time out.


If the product was actually inferior...

Oh well. Lesson learned.


I'm unsure of that.

Don't pay attention to nonsense like "Audiophiles
routinely claim audible difference among classes of
devices whose typical measured performance does not
predict audible difference -- CDPs and cables, for
example. (assuming level-matching for output devices, of
course). " Clearly alleged typical meausred performance"
doesn't tell us jack about any given product's actual
sound.


I don't see any reliable evidence that supports any of those conclusions.

"Audiophiles routinely claim audible difference among
classes of devices whose typical measured performance
does not predict audible difference -- CDPs and cables,
for example. (assuming level-matching for output
devices, of course). "


Agreed.


OK........


But, the reasons are generally trivial.

Furthermore, audiophiles routinely claim audible
superiority for equipment that has audible faults, some
of which even they admit that they hear.


One person's 'fault" is another person's virtue.


However, there is something like 99% agreement about certain old-technology
audible colorations and distortions being faults.

Depends
on your aesthetic priorities, goals and references.


.....or like totally tasteless, garish cheap paintings of nudes or Elvis on
velvet, a lack of taste.

Ultimately that which is "superior" is entirely
subjective when talking about the aesthetic values of our
human perceptions.


Human perceptions in many areas seem to converge to a general area.

So what are you saying now Steve that you were not
suggesting that audiophiles were and always have been
wrong in their reports about audible differences between
CDPs?


Sometimes they are right, and sometimes they are wrong.


No argument there.


High end audiophiles are wrong about so many things, because their means for
judging are so chronically flawed. High end audiophilia is almost like a
parody.

They have been found
wrong when their claims are checked out by scientific
means, whether test equipment or well-run listening
tests.


"Scientific menas?" If so then ceetainly you can cite the
peer reviewed published data.


Been there, done that only to be met by a chorus of wails about the costs of
obtaining reprints of technical papers. I think you can buy about 100 or
more of them for the price of one single mid-priced high end turntable.


  #113   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 14, 3:05*am, ScottW2 wrote:
On Jul 13, 5:24*pm, Scott wrote:

On Jul 13, 4:20 pm, ScottW2 wrote:


On Jul 13, 4:16 am, Scott wrote:


In an exchange of emails Dennis told me that this particular sonic
defect was CDP dependent. It was in those emails that he gave details
of level matching, time synching and DB protocols.


*This sounds like a test of CDPs ability to handle defective CDs with
high read error rates.


Again with the defects assertion. What was defective?


One of the laser burners. *You said that 2 of 3 systems worked fine.


No that isn't what I said at all. In fact I said nothing on the
matter. but this is what Dennis Drake said.
""Upon further investigation, it turned out that the plant had three
different laser
beam recorders and that one of them sounded different than the other
two."
All three were *different* but none of them were ever said to be
"defective." In fact for all we know thouasnds of titles were cut on
the one burner that produced colored sounding CDs on certain players.
Lesser in quality does nor equate defective. For all we know that
burner was opperating exactly up to it's full capacity and was
considered at that time "by typical measurements" to be working
propperly.


That is they produced CDs that sounded the same on all CDPs.



We don't even know that. They sounded the same on all the CDPs that
Dennis Drake used for his later comparisons. I'm pretty sure Dennis
did not test every make and model of CDP past and present to that day.
Nor do I suspect that he even tested a substantial sample.


The 3rd produced CDs that sounded fine on some players and not so fine
on others. *That tells me that unit was defective in that it produced
marginal
CDs that would not play without audible degradation on some CDPs.



In the same report he talks about th colorations of all but one A?D
converter. Does that mean that all those other widely used A.D
converters were also "defective?" Are you suggesting that all this CD
are either universlly transparent or defective?


How were the CDs
defective? An error in the pressing? How does that happen? How does
this play "better" on one player and not another?


Let's see...it could be all optics are not created equal, or all error
correction
is not created equal.


Inequities are no surprise. That's what the crazy subjectivists have
been claiming from the get go. It is also something that some people
claim have never been a concern in CD playback. inequities are not
always divided by defective and nondefective.



Dennis indicated
that on some CDPs the so called 'defective" discs played perfectly.


Those players optics could handle deficient CDs or they had better
error correction.


IOW they were better sounding CDPs with certian CDs. And who knows how
many of those discs were released into the commercial market? Do we
have any reason to think that Dennis Drake's rigor in persuit of sound
quality was the norm in commerical CD production? I'll bet it was and
is very much the exception.



How can a defective disc ever play perfectly?


You've never heard a scratched CD play perfectly?


Have you ever heard a scratched CD sound thin as opposed to just
skipping or stopping? Not the same thing here. Dennis described
inferior colored sound not skips or stops.



*IME, CDs have to be
rather badly damaged to not play perfectly on a decent player.



IYE what sort of damage leads to the sound that Dennis Drake observed?

  #114   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"Harry Lavo" wrote in message

"Steven Sullivan" wrote in message
...


ABX does not preclude longer-term listening. The sounds
being compared can last as long as you
like (though there are good reasons to favor short
samples).


It's the switching itself that should be made
'instantaneous' if possible...the interval
of dead air between A and B (and X).


The need to make a forced choice, and to do seventeen
trials, almost universally leads to short snippets.


First off, the need to evenually make a choice is obviously *not* a proble.

The claim that the choice is forced might be a play on words, because ABX is
known among technical specialists as a 2AFC or two-alternative, forced
choice test.

The only necessary forcing of choice involves the listener eventually having
to become comfortable enough with his situation to make a choice.

It is unclear how a person who can't make a choice among a short list of
alternatives would get along in life. Maybe their moma makes them do it? ;-)

It may be human nature, but it is a real effect.


No, the use of short samples is simply a well-known means for easing the
production of the highest possible scores. If one studies recent science
related to how the human brain perceives sound and particularly music
(please see the Bibliography of "This is your Brain on Music"), one finds
that our ability to remember subtle details about sounds fades rapidly - in
about 2 to 10 seconds.

Therefore if you listen to a piece of music that is longer than from 2 to 10
seconds, you have already forgotten many subtle details about the beginning
of the selection. It then becomes very difficult or impossible to compare
them to what you are hearing now.

This is just another example of how modern science and high end audiophile
tradition have been on a collision course for years, but only science seems
to know that it ever happened,


  #115   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"Sonnova" wrote in message


Have you ever done a DBT test between a RedBook CD of a
particular title and, say, a high-resolution download
(24/96 or 24/192) of the same title? I have. They're
different. And the differences aren't subtle. The
high-resolution download wins over the CD every time (so
far).


That is an evaluation with a rather obvious flaw - there is rarely reliable
evidence that the production steps for various versions of the same title
are otherwise identical.

I've gone several steps beyond that:

(1) I have produced any number of 24/96 recordings of my own, and compared
them to downsampled versions of themselves.

(2) I have any number of 24/96, SACD, and 24/192 commercial recordings and
private recordings produce by others, which I have compared to themselves as
above.

To this day there is no conventionally-obtained evidence
that shows that the new formats had any inherent audible
benefits at all, the products never were accepted in the
mainstream, and many of the record company executives
that bet their careers on the new formats lost their
jobs.


That's simply not true, Arny. High resolution recordings
in either PCM or DSD sound significantly better than
RedBook CD and carefully set-up DBT testing has
demonstrated that to my satisfaction (Levels matched as
closely as instrumentation will allow, time sync'd
between, for instance, two identical players one playing
the SACD layer and the other playing the RedBook layer
or, one of my own recordings is played back from my
master which level matched and sync'd to a CD burned from
that master using Logic Studio or Cubase 4).


I see no reliable evidence of that, I have tried similar experiements with
"no differences" results, I have circulated sets of recordings to the
general public with "no differences" results, and there is an extant but
unrebutted JAES article (peer reviewed) that recounts similar results that
is now about a year old.



  #116   Report Post  
Posted to rec.audio.high-end
Norman Schwartz Norman Schwartz is offline
external usenet poster
 
Posts: 28
Default You Tell 'Em, Arnie!

On Jul 13, 8:43*pm, Steven Sullivan wrote:
Norman Schwartz wrote:
On Jul 13, 10:08?am, "Arny Krueger" wrote:
"Harry Lavo" wrote in message




And I'm talking about perceiving differences in audio
reproduction equipment when reproducing music, as
evaluated using ABX.


ABX is known to work very well.


Where's the beef?


Some listeners, including myself, feel that a period of longer term
listening (at least several hours) is required to reveal itself. E.g.,
could it possibly be that certain distortion characteristics are not
apparent nor find oportunity to 'grate' during instantaneous type
comparisons?


THis must have been noted dozens of times by now in the history of RAHE, but:

ABX does not preclude longer-term listening. The sounds being compared can last as long as you
like (though there are good reasons to favor short samples).

It's the switching itself that should be made 'instantaneous' if possible...the interval
of dead air between A and B (and X).


So then for longer periods, it be suggested that I spend 3 or more
hours listening to"X", immediately followed by 3 or more additional
hours listening to "Y"? What are the chances that boredom, listener
fatigue, exhaustion and the intervals for eating/drinking and
relieveing oneself would enter the comparison?

-S

  #117   Report Post  
Posted to rec.audio.high-end
Harry Lavo Harry Lavo is offline
external usenet poster
 
Posts: 735
Default You Tell 'Em, Arnie!

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message

"Steven Sullivan" wrote in message
...


ABX does not preclude longer-term listening. The sounds
being compared can last as long as you
like (though there are good reasons to favor short
samples).


It's the switching itself that should be made
'instantaneous' if possible...the interval
of dead air between A and B (and X).


The need to make a forced choice, and to do seventeen
trials, almost universally leads to short snippets.


First off, the need to evenually make a choice is obviously *not* a
proble.

The claim that the choice is forced might be a play on words, because ABX
is
known among technical specialists as a 2AFC or two-alternative, forced
choice test.

The only necessary forcing of choice involves the listener eventually
having
to become comfortable enough with his situation to make a choice.

It is unclear how a person who can't make a choice among a short list of
alternatives would get along in life. Maybe their moma makes them do it?
;-)

It may be human nature, but it is a real effect.


No, the use of short samples is simply a well-known means for easing the
production of the highest possible scores. If one studies recent science
related to how the human brain perceives sound and particularly music
(please see the Bibliography of "This is your Brain on Music"), one finds
that our ability to remember subtle details about sounds fades rapidly -
in
about 2 to 10 seconds.

Therefore if you listen to a piece of music that is longer than from 2 to
10
seconds, you have already forgotten many subtle details about the
beginning
of the selection. It then becomes very difficult or impossible to compare
them to what you are hearing now.

This is just another example of how modern science and high end audiophile
tradition have been on a collision course for years, but only science
seems
to know that it ever happened,


I'm not going to revisit the entire case against short snippets....only to
say that practice in audiometrics has shown that to use ABX effectively, one
must know what one is listening for and be trained to pick out and identify
that artifact. That is very diffent from listening to music, whereby subtle
distinctions can enter consciousness at a threshold level only under
holistic conditions....the focus on cognitive differences (direct
intervention) works against that state of musical consciousness....and short
snippets make this matter even worse.

  #118   Report Post  
Posted to rec.audio.high-end
WVK WVK is offline
external usenet poster
 
Posts: 3
Default You Tell 'Em, Arnie!

--
Best regards,
Wayne Van Kirk
ACP International
713-641-6413
http://www.acpinternational.com/
"Sonnova" wrote in message
...
On Mon, 13 Jul 2009 19:34:17 -0700, Scott wrote
(in article ):

On Jul 13, 11:32 am, "Arny Krueger" wrote:
"Scott" wrote in message



I said;
"You seem to have been claiming that standard
measurements predict that all CDPs sound the same"

There are goodly number of CD players, whether by design or due to
partial
failure, produce signals that are so degraded that they will even sound
different.


So they don't all sound the same. No argument there. I have heard
differences. Heck it was the common claim that there were no
differences that lead me to buy an inferior product the first time
out. Oh well. Lesson learned. Don't pay attention to nonsense like
"Audiophiles routinely claim audible difference among classes of
devices whose typical measured performance does not predict audible
difference -- CDPs and cables, for example. (assuming level-matching
for output devices, of course). " Clearly alleged typical meausred
performance" doesn't tell us jack about any given product's actual
sound.


True for active devices like CDPs, false for passive conductors like
Interconnects and cables. There is simply NO way a properly made cable or
interconnect can have a "sound". If it does its because the manufacturer
PURPOSELY added components to those cables to alter their frequency
response
and that sound is subtracting fidelity from the music being played, not
adding fidelity to it. I.E. If a cable or interconnect changes the sound
of
one's system, it is NOT in a good way. At any rate who wants to spend
hundreds of dollars for a set of "fixed" tone controls?


I have heard a obvious difference with interconnects. An audiophile friend
switched from one to the other.
One made, a not so good mono recording (Byrd's), duller the other sparkled
by comparison.
Not a scientific test but I believe that the differences were great enought
to be measured.

WVK

  #119   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

On Jul 14, 6:28*am, "Arny Krueger" wrote:
"Scott" wrote in message



On Jul 13, 11:32 am, "Arny Krueger"
wrote:
"Scott" wrote in message




I said;
"You seem to have been claiming that standard
measurements predict that all CDPs sound the same"


There are goodly number of CD players, whether by design
or due to partial failure, produce signals that are so
degraded that they will even sound different.

So they don't all sound the same.


Right, the defective ones either sound different or are so defective that
they don't make a signal at all.


So perhaps you could tell us what sort of defects in the burner at tha
plant would lead a CD to sound thinner on certain CDPs and not others?
Of course to me the big question is how does this "defect" go
undetected at a major CD producing plant and how many "defective" CDs
have entered the market place due to this one type of defect thatthe
plant missed when they knew the quality of their product was being
scrutinized? Then we have to ask how many other sorts of defects have
been missed over the years of commercial CD and CDP production? I mean
if different sound means there are defects that would suggest that all
but one A/D converter tested by Dennis Drake was "defective." My god
how many of those converters were routinely used in the mastering of
commercial CDs? For all we know the rate of thse so called "defects"
may have been nothing short of pandemic in the production of
commercial CDs and CDPs. It's no wonder so many audiophiles that
didn't buy into perfect sound forever found fault with so many CDs and
CDPs and were always looking for improvement.




No argument there.


Well, its common sense that broken things don't work right, and working
right for a CD player means sounding exactly like every other CD player that
is working right, all other things being equal which they frequently aren't.


Well that raises a big question. Given that the plant that supplied
the "defective" disc to dennis to scrutinize did so with the on the
line burner one has to wonder just how many CDs and CDPs weren't
"defective" over the years.




I have heard differences.


Without more reliable data, that means nothing.


No. It has meaning.



Heck it was the common claim that
there were no differences that lead me to buy an inferior
product the first time out.


If the product was actually inferior...



Sounded worse to me so that makes it inferior to me.



Oh well. Lesson learned.


I'm unsure of that.

Don't pay attention to nonsense like "Audiophiles
routinely claim audible difference among classes of
devices whose typical measured performance does not
predict audible difference -- CDPs and cables, for
example. (assuming level-matching for output devices, of
course). " Clearly alleged typical meausred performance"
doesn't tell us jack about any given product's actual
sound.


I don't see any reliable evidence that supports any of those conclusions.



And yet you have confirmed variations in sound between CDs and CDPs.
You simply like to call inferior product defective for whatever
reason.



"Audiophiles routinely claim audible difference among
classes of devices whose typical measured performance
does not predict audible difference -- CDPs and cables,
for example. (assuming level-matching for output
devices, of course). "


Agreed.

OK........


But, the reasons are generally trivial.


Some folks in our hobby don't consider sound quality to be trivial.



Furthermore, audiophiles routinely claim audible
superiority for equipment that has audible faults, some
of which even they admit that they hear.

One person's 'fault" is another person's virtue.


However, there is something like 99% agreement about certain old-technology
audible colorations and distortions being faults.



Do show us the controlled listening tests that confirm this assertion.
I think in this case the hidden reference would have to be live music.
you might want to talk to james Boyk when gather ing this data since
he is the only one I know of who has done such tests. I don;t think
you are going to like the results though.



*Depends
on your aesthetic priorities, goals and references.


....or like totally tasteless, garish cheap paintings of nudes or Elvis on
velvet, a lack of taste.



How does an afinity for velvet Elvis paintings say anything about
one's aesthetic goals, priorities and references in audio?




Ultimately that which is "superior" is entirely
subjective when talking about the aesthetic values of our
human perceptions.


Human perceptions in many areas seem to converge to a general area.




perhaps it may "seem" that way to you. It matters not. If one is not
part of that convergence they are no less a human being and deserve no
less in seeking satisfaction in audio.



So what are you saying now Steve that you were not
suggesting that audiophiles were and always have been
wrong in their reports about audible differences between
CDPs?
Sometimes they are right, and sometimes they are wrong.

No argument there.


High end audiophiles are wrong about so many things, because their means for
judging are so chronically flawed. *High end audiophilia is almost like a
parody.



I suppose so. We saw this illustrated in a recent account of an
alleged superiority of European LPs over American LPs from the 60s.
but so what. If you like something you like something even if your
methodologies are not rigorous.



They have been found
wrong when their claims are checked out by scientific
means, whether test equipment or well-run listening
tests.

"Scientific menas?" If so then ceetainly you can cite the
peer reviewed published data.


Been there, done that only to be met by a chorus of wails about the costs of
obtaining reprints of technical papers. I think you can buy about 100 or
more of them for the price of one single mid-priced high end turntable.


Hmm no citation just more posturing in the name of science. that was
what I expected. No citations. No real science in support of your
assertions. thank you.

  #120   Report Post  
Posted to rec.audio.high-end
Steven Sullivan Steven Sullivan is offline
external usenet poster
 
Posts: 1,268
Default You Tell 'Em, Arnie!

On Tue, Jul 14, 2009 at 12:24:02AM +0000, Steven Sullivan wrote:
Scott wrote:
On Jul 13, 4:15??am, Steven Sullivan wrote:
Scott wrote:

Nope. Straw man. ?And you should know better.
here are your words from this thread. "Audiophiles routinely claim
audible difference among classes of devices whose typical measured
performance does not predict audible difference -- CDPs and cables,
for example. (assuming level-matching for output devices, of course)."
You might want to check thse things before crying strawman. (note for
moderator: I am leaving all quotes in tact for sake of showing that
these were Steve's words in context)

What does the word 'typical' mean to you, Scott? Does it mean 'all'?



Main Entry:typ??i??cal
Pronunciation:\??ti-pi-k??l\
Function:adjective
Etymology:Late Latin typicalis, from typicus, from Greek typikos, from
typos model ??? more at type
Date:1609
1: constituting or having the nature of a type : symbolic
2 a: combining or exhibiting the essential characteristics of a group
typical suburban houses b: conforming to a type a specimen typical
of the species


Yay Google. Now, Scott, answer the question: does 'typical' mean 'all'?
E.g.:

"Typical suburban houses have a front lawn"
vs
"All suburban houses have a front lawn."


Please now and forever stop claiming the me, Arny, or any of the
other people you argue with about this over and over, claim that
*All* (X) sound *the same*. Thanks.


How can I stop something I am not doing? What does the word
"standard"
mean to you Steve? Is it something radically different than typical?



No, but they're both indubitably different from 'all'...which is what
you claim I'm saying.

So I ask again, please stop attributing views I never have espoused, and
never would espouse, to me.


After all this is what I said;
"You seem to have been claiming that standard meausurements predict
that all CDPs sound the same"
Your words once again....
"Audiophiles routinely claim audible difference among classes of
devices whose typical measured performance does not predict audible
difference -- CDPs and cables, for example. (assuming level-matching
for output devices, of course). "


So what are you saying now Steve that you were not suggesting that
audiophiles were and always have been wrong in their reports about
audible differences between CDPs?


No, I was not suggesting that they were and are *always* wrong... of course even a sighted
comparison can turn out to be 'right' , but it requires other methods to determine
it.

(Btw, 'routinely' doesn't mean 'always', either.)

Sure looks like that was what you
were saying.


Scott, maybe you aren't exactly the best judge of these things. Maybe others here
can chime in and say whether they had as much difficulty parsing my use of the
word 'typical' as you seem to.

Reply
Thread Tools
Display Modes

Posting Rules

Smilies are On
[IMG] code is On
HTML code is Off


Similar Threads
Thread Thread Starter Forum Replies Last Post
Top 100 Reasons For Despising Arnie Jenn[_2_] Audio Opinions 1 February 5th 09 01:22 AM
About Arnie K tubeguy Audio Opinions 184 July 22nd 05 07:40 AM
rec.audio.Arnie.Krueger Willi Audio Opinions 10 March 3rd 05 02:26 PM
*Thank Heaven For Arnie Kroo* Le Lionellaise Audio Opinions 0 September 15th 03 01:44 AM


All times are GMT +1. The time now is 04:46 PM.

Powered by: vBulletin
Copyright ©2000 - 2024, Jelsoft Enterprises Ltd.
Copyright ©2004-2024 AudioBanter.com.
The comments are property of their posters.
 

About Us

"It's about Audio and hi-fi"