#121 · vlad

On Jul 14, 4:39 pm, Norman Schwartz wrote:

. . .

So then for longer periods, is it suggested that I spend 3 or more
hours listening to "X", immediately followed by 3 or more additional
hours listening to "Y"? What are the chances that boredom, listener
fatigue, exhaustion and the intervals for eating/drinking and
relieving oneself would enter the comparison?


Do you apply this argument to both DBT and Sighted tests? Or is it
only the problem for DBT? Is it a universal argument against prolonged
listening in general?

It seems to me that 'holistic' sighted evaluation is plagued by "boredom,
listener fatigue, exhaustion and the intervals for eating/drinking" too.

So should we limit ourselves to short snippets only?

vlad

#122 · Sonnova

On Tue, 14 Jul 2009 16:39:28 -0700, Arny Krueger wrote:

"Sonnova" wrote in message


Have you ever done a DBT test between a RedBook CD of a
particular title and, say, a high-resolution download
(24/96 or 24/192) of the same title? I have. They're
different. And the differences aren't subtle. The
high-resolution download wins over the CD every time (so
far).


That is an evaluation with a rather obvious flaw - there is rarely reliable
evidence that the production steps for various versions of the same title
are otherwise identical.

I've gone several steps beyond that:

(1) I have produced any number of 24/96 recordings of my own, and compared
them to downsampled versions of themselves.

(2) I have any number of 24/96, SACD, and 24/192 commercial recordings and
private recordings produced by others, which I have compared to themselves
as above.

To this day there is no conventionally-obtained evidence
that shows that the new formats had any inherent audible
benefits at all, the products never were accepted in the
mainstream, and many of the record company executives
that bet their careers on the new formats lost their
jobs.


That's simply not true, Arny. High-resolution recordings
in either PCM or DSD sound significantly better than
RedBook CD, and carefully set-up DBT testing has
demonstrated that to my satisfaction (levels matched as
closely as instrumentation will allow, time-sync'd
between, for instance, two identical players, one playing
the SACD layer and the other the RedBook layer; or one of
my own recordings played back from my master, level-matched
and sync'd to a CD burned from that master using Logic
Studio or Cubase 4).


I see no reliable evidence of that. I have tried similar experiments with
"no differences" results, I have circulated sets of recordings to the
general public with "no differences" results, and there is an extant and
as-yet-unrebutted JAES article (peer reviewed) that recounts similar results
and is now about a year old.


Well, I can't help that. I and everyone involved in the DBT tests could not
only hear when the high-resolution copy was playing, but could hear it
easily, every time.
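
The level matching described above is usually verified numerically rather
than by ear. A minimal sketch of that check in Python (file names are
hypothetical; equal-length integer-PCM mono WAV captures are assumed):

import numpy as np
from scipy.io import wavfile

def rms_db(path: str) -> float:
    """RMS level of a mono integer-PCM WAV capture, in dBFS."""
    rate, x = wavfile.read(path)
    x = x.astype(np.float64) / np.iinfo(x.dtype).max  # normalize to +/-1.0
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

# Common practice is to match within roughly 0.1 dB before comparing.
diff = rms_db("capture_sacd_layer.wav") - rms_db("capture_cd_layer.wav")
print(f"Level mismatch: {diff:+.2f} dB")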

#123 · Dick Pierce

On Jul 14, 7:39 pm, Norman Schwartz wrote:
So then for longer periods, is it suggested that I spend 3 or more
hours listening to "X", immediately followed by 3 or more additional
hours listening to "Y"?


Doesn't work that way; detailed auditory memory
is only good for a few SECONDS. You might retain
the auditory details of the last few seconds of "Y",
but the details you'd want to compare them to in
"X" are LONG gone.

What are the chances that boredom, listener
fatigue, exhaustion and the intervals for eating/drinking and
relieving oneself would enter the comparison?


All of them simply make the situation worse.


#124 · bob

On Jul 14, 7:44 pm, "Harry Lavo" wrote:

I'm not going to revisit the entire case against short snippets... only to
say that practice in audiometrics has shown that to use ABX effectively, one
must know what one is listening for and be trained to pick out and identify
that artifact.


That is true of any listening test. Your best chance of hearing
something is knowing what you're listening for. The idea that you
might be MORE likely to hear something if you didn't know what it was
is silly.

Of course, that's the whole theory behind your pseudo-test, Harry, so
I guess you're stuck with it.

bob


#125 · Harry Lavo

"bob" wrote in message
...
On Jul 14, 7:44 pm, "Harry Lavo" wrote:

I'm not going to revisit the entire case against short snippets... only to
say that practice in audiometrics has shown that to use ABX effectively, one
must know what one is listening for and be trained to pick out and identify
that artifact.


That is true of any listening test. Your best chance of hearing
something is knowing what you're listening for. The idea that you
might be MORE likely to hear something if you didn't know what it was
is silly.

Of course, that's the whole theory behind your pseudo-test, Harry, so
I guess you're stuck with it.

bob


However, when you are asked to participate in an ABX test evaluating short
snippets of music, you initially don't have a clue as to what you are
listening for, nor a normal framework for listening. That is one reason,
IMO, these tests immediately lead to a sense of strain and tension.

What we normally do when evaluating something new is to measure it aurally,
"on the fly" so to speak, against our cumulative life's experience of how it
would/should sound if real. If we are doing more relaxed long-term testing,
we are more likely to sense/find those things that seem out of synch with
this standard. That is quite different from comparing two snippets of sound
from two different devices, head on.





#126 · Steven Sullivan

Norman Schwartz wrote:
On Jul 13, 8:43 pm, Steven Sullivan wrote:
Norman Schwartz wrote:
On Jul 13, 10:08 am, "Arny Krueger" wrote:
"Harry Lavo" wrote in message




And I'm talking about perceiving differences in audio
reproduction equipment when reproducing music, as
evaluated using ABX.


ABX is known to work very well.


Where's the beef?


Some listeners, including myself, feel that a period of longer-term
listening (at least several hours) is required for a difference to reveal
itself. E.g., could it possibly be that certain distortion characteristics
are not apparent, nor find opportunity to 'grate', during instantaneous-type
comparisons?


This must have been noted dozens of times by now in the history of RAHE, but:

ABX does not preclude longer-term listening. The sounds being compared can last as long as you
like (though there are good reasons to favor short samples).

It's the switching itself that should be made 'instantaneous' if possible...the interval
of dead air between A and B (and X).


So then for longer periods, it be suggested that I spend 3 or more
hours listening to"X", immediately followed by 3 or more additional
hours listening to "Y"? What are the chances that boredom, listener
fatigue, exhaustion and the intervals for eating/drinking and
relieveing oneself would enter the comparison?


I'm not the one suggesting 3 or more hours of listening. It's not
advisable.

I'm saying that if you are *comparing* X and Y, leaving a large interval
between X and Y works against discrimination of subtle differences.

This is based on psychoacoustic research. Your argument isn't with me;
it's with human physiology and psychology.

If you want to listen for hours at a time, with X and Y separated by
minutes or hours or days, go ahead; if your ABX results are no better
than chance, I'd suggest you try more 'traditional' protocols
(which also include training to hear artifacts, if possible).

--
-S
We have it in our power to begin the world over again - Thomas Paine
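
Sullivan's point is that ABX constrains the switching, not the listening.
That is easy to see in the test's bookkeeping; a minimal sketch in Python
(trial count and labels are illustrative; actual audio playback is assumed,
not shown):

import random

def run_abx(a_label: str, b_label: str, trials: int = 16) -> int:
    """Minimal ABX bookkeeping: X is secretly A or B on each trial.
    The listener may audition A, B, and X as long as they like, in any
    order, switching as often as they like, before answering."""
    correct = 0
    for t in range(1, trials + 1):
        x_is_a = random.random() < 0.5  # hidden assignment for this trial
        print(f"Trial {t}: audition '{a_label}', '{b_label}', and X freely.")
        answer = input("X sounds like (A/B)? ").strip().upper()
        if (answer == "A") == x_is_a:
            correct += 1
    return correct

# e.g. run_abx("master_2496.wav", "downsample_1644.wav")

Nothing in the loop limits how long each audition lasts; only the A/B/X
switch itself is instantaneous.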

#127 · Arny Krueger

"Norman Schwartz" wrote in message

On Jul 13, 8:43 pm, Steven Sullivan wrote:
Norman Schwartz wrote:
On Jul 13, 10:08 am, "Arny Krueger" wrote:
"Harry Lavo" wrote in message




And I'm talking about perceiving differences in audio
reproduction equipment when reproducing music, as
evaluated using ABX.


ABX is known to work very well.


Where's the beef?


Some listeners, including myself, feel that a period of
longer-term listening (at least several hours) is
required for a difference to reveal itself. E.g., could
it possibly be that certain distortion characteristics
are not apparent, nor find opportunity to 'grate',
during instantaneous-type comparisons?


This must have been noted dozens of times by now in the
history of RAHE, but:

ABX does not preclude longer-term listening. The sounds
being compared can last as long as you like (though
there are good reasons to favor short samples).

It's the switching itself that should be made
'instantaneous' if possible...the interval
of dead air between A and B (and X).


So then for longer periods, is it suggested that I spend
3 or more hours listening to "X", immediately followed by
3 or more additional hours listening to "Y"?


Take days, if that is what you need. The original ABX Comparator had a
battery backup to support this kind of test.

What are the chances that boredom, listener fatigue, exhaustion and
the intervals for eating/drinking and relieving oneself
would enter the comparison?


Take a break and come back to your listening when you feel refreshed.

Listen until you are satisfied that you've had every opportunity to do your
best. Why not?



#128 · Arny Krueger

"Sonnova" wrote in message

On Tue, 14 Jul 2009 16:39:28 -0700, Arny Krueger wrote:

"Sonnova" wrote in message


Have you ever done a DBT test between a RedBook CD of a
particular title and, say, a high-resolution download
(24/96 or 24/192) of the same title? I have. They're
different. And the differences aren't subtle. The
high-resolution download wins over the CD every time (so
far).


That is an evaluation with a rather obvious flaw - there
is rarely reliable evidence that the production steps
for various versions of the same title are otherwise
identical.

I've gone several steps beyond that:

(1) I have produced any number of 24/96 recordings of my
own, and compared them to downsampled versions of
themselves.

(2) I have any number of 24/96, SACD, and 24/192
commercial recordings and private recordings produced by
others, which I have compared to themselves as above.

To this day there is no conventionally-obtained
evidence that shows that the new formats had any
inherent audible benefits at all, the products never
were accepted in the mainstream, and many of the
record company executives that bet their careers on
the new formats lost their jobs.


That's simply not true, Arny. High-resolution recordings
in either PCM or DSD sound significantly better than
RedBook CD, and carefully set-up DBT testing has
demonstrated that to my satisfaction (levels matched as
closely as instrumentation will allow, time-sync'd
between, for instance, two identical players, one playing
the SACD layer and the other the RedBook layer; or one of
my own recordings played back from my master, level-matched
and sync'd to a CD burned from that master using Logic
Studio or Cubase 4).


I see no reliable evidence of that. I have tried similar
experiments with "no differences" results, I have
circulated sets of recordings to the general public with
"no differences" results, and there is an extant and
as-yet-unrebutted JAES article (peer reviewed) that
recounts similar results and is now about a year old.


Well, I can't help that. I and everyone involved in the
DBT tests could not only hear when the high-resolution
copy was playing, but could hear it easily, every time.


If I read what you say correctly, you didn't have the same assurance that
the samples differed only in terms of sample rate that my tests and the JAES
tests did.

It is a matter of public record that many are very upset by the JAES
article, but after a year of complaining, nobody has come forth with a test
that says otherwise. This is, IME, one of the easiest DBTs to set up.
I presume that the absence of contrary results is not due to lack of trying.


#129 · vlad

On Jul 14, 3:05 am, "Harry Lavo" wrote:
"vlad" wrote in message

...



Harry,


. . .

Your 'monadic' test has one huge flaw - it brings a lot more
variables into the test that are not under your control, but they
definitely influence the outcome of the test.


For instance, if the test takes more than one day, then the temperature
and humidity of the air will definitely affect the physical abilities and
mood of your subjects and become parameters of the test too. Even the
weather itself becomes a parameter. In sunny weather people react to
music differently than in cloudy weather. The quality of the food
from day to day (you know, some of them can have problems with
indigestion) can become a real issue too. I am sure there are many
other things that are not under your control.


Another thing is that you have no control over random guesses vs.
real recognition. But I'd better not open that can of worms :-)


my $.02 worth


vlad


I appreciate your concern, Vlad, but it is misplaced.

Research designers try to anticipate and take into account significant
possible intervening variables. In such a monadic test, no doubt the
variable would be changed from one session to the next so that at any point
in time, the sampling would be roughly 50-50. Musical segments would be
rotated within samples so there is no order bias, etc., etc.


I don't understand what you mean. Are you going to run different
tests every day? Or can one test spread over several days? In that
case you run into the problem of changing environments.


When you have a large sample size and randomly chosen and matched samples,
you don't worry about a few random guesses. The fact is, there is a very
well-developed set of statistical operations that take into account the
"degree" of difference between the ratings of the two samples. A different
standard applies depending on the number of scale points used, whether
the scales are symmetrical or not, etc., etc. And the significance level is
determined by sample size and the shape of the distribution curves, as it
determines standard deviation and standard error.

And if you really want to worry about random guesses, worry about an ABX
test where a change in one sample can determine whether or not the test is
judged significant, and where there are NO controls against random guessing
used to create a virtually guaranteed "null" effect.


I have a Ph.D. in math and I don't understand your paragraph above. Can
you be more specific? The way it is worded does not make much sense to
me.

vlad


#130 · Arny Krueger

"ScottW2" wrote in message

On Jul 14, 6:44 pm, Dick Pierce wrote:
On Jul 14, 7:39 pm, Norman Schwartz wrote:

So then for longer periods, is it suggested that I
spend 3 or more hours listening to "X", immediately
followed by 3 or more additional hours listening to "Y"?


Doesn't work that way; detailed auditory memory
is only good for a few SECONDS. You might retain
the auditory details of the last few seconds of "Y",
but the details you'd want to compare them to in
"X" are LONG gone.


No amount of training can overcome this memory
deficiency?


So it seems. I've been listening critically in blind tests for over 30 years
and I've never been able to beat the way my brain is wired. After about 2
seconds, my memory for small detail is severely attenuated. I've suspected
for a long time that this was normal because I observed its effects in quite
a few other people. The scientific literature now agrees.

If there is a detailed memory for sonic details that works the way detailed
photographic memory works for just a few people, AFAIK it has never been
found.

My second son has a detailed photographic memory for visual objects, but
testing has shown that he has no similar capability for sound. BTW, he says
that his detailed photographic memory turned out to be problematical and had
to be overcome, because in the short term (a day) it could be a substitute
for learning, but in the long term (a life) it was not. So he could use his
photographic memory to pass tests, but not to have useful knowledge, say,
next semester.

It is possible that a long-term memory for sonic objects would be more
problematical than a detailed visual memory, and so it never evolved.
Detailed visual memory is relatively rare.

Note that musical savants with amazing memories for music remember just the
music, not small sonic details. Also note that musical savants generally
don't play with the same level of emotional expression as the best musicians
are capable of.






#131 · Arny Krueger

"Harry Lavo" wrote in message


However, when you are asked to participate in an ABX test
evaluating short snippets of music, you initially don't
have a clue as to what you are listening for, nor a normal
framework for listening.


This is an outrageous and absolutely ludicrous and insulting statement.

I guess we can discern from this statement that the writer believes that he
has audited *every* ABX test that has ever been done in the past 30 years,
so that he can make such a totally global statement.

This appears to be a false claim that in every ABX test ever done, someone
made sure that whatever devious steps were necessary to *hide* from the
listener what they were supposed to be listening for were taken. This claim
taxes reason and any idea of normal human behavior.

Of course a test coordinator will tell the listener what to listen for. Why
wouldn't he?

Of course, there have been a few occasions where a blind test was contrived
where people weren't told what to listen for. Even this makes some sense
when the listener proudly announces that he can easily hear the effect that
is involved, which is often the case.

At the old ABX web site I provided samples of music where the artifact to be
listened for was blatantly obvious, and then provided it in more subtle
forms for the purpose of the actual test.

When working with people in person, it has always been the practice to talk
about the artifact to be listened for, and try to demonstrate it in a
sighted evaluation, before commencing with the DBT.




#132 · Arny Krueger

"Harry Lavo" wrote in message


I'm not going to revisit the entire case against short
snippets... only to say that practice in audiometrics has
shown that to use ABX effectively, one must know what one
is listening for and be trained to pick out and identify
that artifact.


The audiometric ABX test is a significantly different test from the one that
we use to evaluate audio products.

This has been covered here several times. I don't know why it has to be said
so many times.

Also the above is another example of ABX critics implying that a well-known
property of human hearing applies *only* to ABX.

The more accurate statement is: to most effectively detect an audible event,
one must know what one is listening for and be trained to pick out and
identify that event.

This is just common sense - it's like saying that if you want to find
something, it helps to know what it looks like.

The statement above about detecting audible events can be spun deceptively
by simply plugging in the name of something that one wishes to criticize.
It's even true of sighted evaluations.



#133 · Arny Krueger

"ScottW2" wrote in message


But I would say that DACs should also be capable of
producing bit-identical output from the same analog source if
properly sync'd. A more difficult test which, given the
scarcity of detail, will most likely remain unknown.



It seems to me that an ADC/DAC pair that is capable of a bit or two more
resolution than the recording used for the test should be able to pass it
with utter perfection.

IOW, an ADC/DAC pair with 108 or more dB of dynamic range should be able to
pass a 16-bit recording extended to 24 bits by adding zeros.

When re-recorded and downsampled to 16 bits, we should be able to have a
bit-perfect copy of the original.

A caveat - the clocking of the ADC/DAC pair would need to be very precisely
matched and tuned, or some averaging of samples might take place.
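
The arithmetic behind the 108 dB figure is the usual 6.02N + 1.76 dB rule
for an ideal N-bit quantizer: about 98 dB at 16 bits, so 108 dB leaves over
a bit and a half of margin. A minimal sketch of the pad-and-compare idea in
Python (the analog loopback capture itself is assumed, not shown):

import numpy as np

def ideal_dynamic_range_db(bits: int) -> float:
    """Dynamic range of an ideal N-bit quantizer: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

def pad_16_to_24(samples16: np.ndarray) -> np.ndarray:
    """Extend 16-bit samples to 24 bits by appending zero LSBs."""
    return samples16.astype(np.int32) << 8

def bit_perfect(original16: np.ndarray, loopback24: np.ndarray) -> bool:
    """Truncate the D/A -> A/D capture back to 16 bits and compare.
    Assumes the capture is already sample-aligned, per the clocking
    caveat above."""
    return np.array_equal(loopback24 >> 8, original16.astype(np.int32))

print(f"Ideal 16-bit dynamic range: {ideal_dynamic_range_db(16):.1f} dB")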



#134 · Scott

On Jul 14, 5:45 pm, Steven Sullivan wrote:
On Tue, Jul 14, 2009 at 12:24:02AM +0000, Steven Sullivan wrote:
Scott wrote:
On Jul 13, 4:15 am, Steven Sullivan wrote:
Scott wrote:


Nope. Straw man. And you should know better.
here are your words from this thread. "Audiophiles routinely claim
audible difference among classes of devices whose typical measured
performance does not predict audible difference -- CDPs and cables,
for example. (assuming level-matching for output devices, of course)."
You might want to check these things before crying strawman. (note for
moderator: I am leaving all quotes intact for the sake of showing that
these were Steve's words in context)


What does the word 'typical' mean to you, Scott? Does it mean 'all'?


Why do you ask? Did I say anything about "all" measurements? You do
realize that you used the word "typical" in reference to the
measurements and I used the word "standard" in its place. So the
meaningful question would be: are "typical" and "standard" interchangeable
in this particular case? I say yes.

What does the word "standard"
mean to you, Steve? Is it something radically different from typical?


No, but they're both indubitably different from 'all'... which is what
you claim I'm saying.


Steve, reread the quote. Reread your own words. Your use of the word
"typical" was in reference to the measurements, not the CDPs. I used the
word "standard" instead of "typical." I think that was an acceptable
change.




After all, this is what I said:
"You seem to have been claiming that standard measurements predict
that all CDPs sound the same"
Your words once again...
"Audiophiles routinely claim audible difference among classes of
devices whose typical measured performance does not predict audible
difference -- CDPs and cables, for example. (assuming level-matching
for output devices, of course)."
So what are you saying now, Steve? That you were not suggesting that
audiophiles were and always have been wrong in their reports about
audible differences between CDPs?


No, I was not suggesting that they were and are *always* wrong... of course
even a sighted comparison can turn out to be 'right', but it requires other
methods to determine it.

(Btw, 'routinely' doesn't mean 'always', either.)


"Routinely" was used in reference to audiophile behavior (claims of
audible differences) not the sameness in sound of CDPs. You really
need to keep track of what words you used to describe what phenomenon.
I'll give you a run down. "Typical" described the measurements.
"Routinely" described the alleged actions of audiophiles. You claimed
that the "typical" measurements *do not predict an audible difference*
among certain classes of components and included CDPs among those
classes. so you are now saying that there are in some cases audible
differences among components in these "classes of devices" despite the
predictions of "typical meausrements?"


OK, then what was your point?


Scott, maybe you aren't exactly the best judge of these things. Maybe others
here can chime in and say whether they had as much difficulty parsing my use
of the word 'typical' as you seem to.


I had no trouble with your use of the word "typical." Perhaps you are
having trouble with it. It looked to me like you were trying to use
alleged "predictions" of sound based on "typical" measurements to
disprove "routine" audiophile claims. But since you now admit that
sometimes they are right, I just don't see any point to your comment.


#135 · Harry Lavo

"vlad" wrote in message
...
On Jul 14, 3:05 am, "Harry Lavo" wrote:
"vlad" wrote in message

...



Harry,


. . .

Your 'monadic' test has one huge flaw - it brings a lot more
variables into the test that are not under your control, but they
definitely influence the outcome of the test.


For instance, if the test takes more than one day, then the temperature
and humidity of the air will definitely affect the physical abilities and
mood of your subjects and become parameters of the test too. Even the
weather itself becomes a parameter. In sunny weather people react to
music differently than in cloudy weather. The quality of the food
from day to day (you know, some of them can have problems with
indigestion) can become a real issue too. I am sure there are many
other things that are not under your control.


Another thing is that you have no control over random guesses vs.
real recognition. But I'd better not open that can of worms :-)


my $.02 worth


vlad


I appreciate your concern, Vlad, but it is misplaced.

Research designers try to anticipate and take into account significant
possible intervening variables. In such a monadic test, no doubt the
variable would be changed from one session to the next so that at any point
in time, the sampling would be roughly 50-50. Musical segments would be
rotated within samples so there is no order bias, etc., etc.


I don't understand what you mean. Are you going to run different
tests every day? Or can one test spread over several days? In that
case you run into the problem of changing environments.


Realistically, such a test would be run over several days, with alternating
small groups of people. But the variable under test would be alternated (or
at least randomized) so that any environmental changes would affect both
monadic groups equally.




When you have a large sample size and randomly chosen and matched samples,
you don't worry about a few random guesses. The fact is, there is a very
well-developed set of statistical operations that take into account the
"degree" of difference between the ratings of the two samples. A different
standard applies depending on the number of scale points used, whether the
scales are symmetrical or not, etc., etc. And the significance level is
determined by sample size and the shape of the distribution curves, as it
determines standard deviation and standard error.

And if you really want to worry about random guesses, worry about an ABX
test where a change in one sample can determine whether or not the test is
judged significant, and where there are NO controls against random guessing
used to create a virtually guaranteed "null" effect.


I have a Ph.D. in math and I don't understand your paragraph above. Can
you be more specific? The way it is worded does not make much sense to
me.


If you are doing an ABX test, and have a bias (read: belief) that the thing
under test cannot or will not be audible, the human mind is perfectly
capable of making sure your answers are random. In a worst-case scenario,
you can even consciously choose to simply randomize your "different" / "no
difference" choices, thus assuring that the ABX test returns a "no
difference" result. This is the one bias / potential fraud that ABX simply
has no controls for. For the test to be valid, you simply must WANT to hear
a difference.

So if somebody says "I (or they) ran ABX tests and the tests indicated 'no
difference'", one must ask who was in the test and what were their
predilections towards the variable under test. That is why I take
self-proclaimed tests by "objectivists" with a grain of salt.
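
Harry's "one sample can determine significance" point, quoted above, is
easy to check against the binomial test usually applied to ABX scores. A
minimal sketch in Python (16 trials and the 0.05 criterion are illustrative
choices):

from scipy.stats import binomtest

# One-sided binomial test against p = 0.5 (pure guessing).
# With 16 trials, 12 correct is significant at the 0.05 level
# while 11 correct is not -- a single trial flips the verdict.
for correct in (11, 12):
    p = binomtest(correct, n=16, p=0.5, alternative="greater").pvalue
    print(f"{correct}/16 correct: p = {p:.3f}")  # 0.105 and 0.038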





#136 · Harry Lavo

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message


I'm not going to revisit the entire case against short
snippets... only to say that practice in audiometrics has
shown that to use ABX effectively, one must know what one
is listening for and be trained to pick out and identify
that artifact.


The audiometric ABX test is a significantly different test from the one that
we use to evaluate audio products.

This has been covered here several times. I don't know why it has to be said
so many times.

Also, the above is another example of ABX critics implying that a well-known
property of human hearing applies *only* to ABX.

The more accurate statement is: to most effectively detect an audible event,
one must know what one is listening for and be trained to pick out and
identify that event.

This is just common sense - it's like saying that if you want to find
something, it helps to know what it looks like.

The statement above about detecting audible events can be spun deceptively
by simply plugging in the name of something that one wishes to criticize.
It's even true of sighted evaluations.


The difference is, Arny, that in the opinion of pro-ABX experts, when this
procedure is not followed in an ABX test, the test is likely to prove
inconclusive.



#137 · Peter Wieck

On Jul 7, 10:40 pm, Sonnova wrote:

A lot of people get deluded this way, so you're not alone. But believe me if
you were to switch between your old cables and the new ones in a double-blind
evaluation, you would not be able to tell one cable from the other.


Don't leap to conclusions. It is entirely possible that his previous
cables have deteriorated physically such that there is some actual
interference with the sound. Especially with cheap (not necessarily
inexpensive) shielded cables the core wire can become corroded at the
connector(s). I had a set once that had enough copper-salts that it
rectified CB noise on occasion from passing trucks. Wild!

I will replace my cables every so often and clean the jacks for this
reason - but, again, with inexpensive but well-made cables, not
boutique stuff. "Every so often" is mostly based on the insulation
getting stiff, so every 15+ years, at least.

NOTE: This is a physical/electrical decay issue - NOT an oxygen-free
copper rolled on the thighs of virgins on Walpurgis Night issue.

Peter Wieck
Melrose Park, PA


#138 · Sonnova

On Tue, 14 Jul 2009 17:44:27 -0700, WVK wrote:

On Mon, 13 Jul 2009 19:34:17 -0700, Scott wrote:

On Jul 13, 11:32 am, "Arny Krueger" wrote:
"Scott" wrote in message



I said;
"You seem to have been claiming that standard
measurements predict that all CDPs sound the same"

There are a goodly number of CD players that, whether by design or due to
partial failure, produce signals so degraded that they will even sound
different.

So they don't all sound the same. No argument there. I have heard
differences. Heck, it was the common claim that there were no
differences that led me to buy an inferior product the first time
out. Oh well. Lesson learned. Don't pay attention to nonsense like
"Audiophiles routinely claim audible difference among classes of
devices whose typical measured performance does not predict audible
difference -- CDPs and cables, for example. (assuming level-matching
for output devices, of course)." Clearly alleged "typical measured
performance" doesn't tell us jack about any given product's actual
sound.


True for active devices like CDPs, false for passive conductors like
interconnects and cables. There is simply NO way a properly made cable or
interconnect can have a "sound". If it does, it's because the manufacturer
PURPOSELY added components to those cables to alter their frequency
response, and that sound is subtracting fidelity from the music being
played, not adding fidelity to it. I.e., if a cable or interconnect changes
the sound of one's system, it is NOT in a good way. At any rate, who wants
to spend hundreds of dollars for a set of "fixed" tone controls?


I have heard an obvious difference with interconnects. An audiophile friend
switched from one to the other. One made a not-so-good mono recording
(Byrd's) duller; the other sparkled by comparison. Not a scientific test,
but I believe that the differences were great enough to be measured.


I didn't say that it wasn't possible. What I am saying is that if one cable
sounds different from another cable, it's because one of the manufacturers
of those cables (and possibly both of them) has purposely designed its
cables to either lift the top-end frequency response (giving you that
"sparkle" - a sparkle, I might add, that was probably NOT on the original
recording) or the cable he was using before was designed to roll off the top
end, making it sound duller. OTOH, without resorting to some form of
frequency contouring, plain interconnects and speaker cables (of sufficient
wire size for the application and run) simply cannot do that. Cables are
merely conductors. They transmit signals, unaltered, from one component to
another. If they DO alter the signal being fed through them, then they're
NOT conductors, they are filters. Cables aren't supposed to be filters, and
without external components such as resistors, capacitors and inductors
incorporated into them in some way, they simply CANNOT be filters. Not at
audio frequencies, anyway. There is no way that coaxial cables or speaker
cables from different manufacturers can be different enough to affect the
sound of a system unless it was on purpose. And if it is on purpose and the
manufacturer didn't tell the buyer up front that the cable in question has
frequency-response-altering properties, then that manufacturer is being
dishonest. The bottom line is that you shouldn't WANT cables that act as
filters, because you don't have any way of knowing HOW they're going to
react in your system until you buy them and take them home and try them,
and whatever effect they have on your sound is permanent; you cannot defeat
it save by replacing the cable. If you want to dull your system or peak some
portion of the frequency response to compensate for poor room acoustics or
deficient speakers or bad-sounding recordings, use an equalizer. At least
you can change the characteristics of those (or take them out of the chain
altogether with the bypass switch). Doing it with snake oil cables is just
stupid, and you get to pay a lot of money for the privilege to boot.
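
Whether a plain cable can act as an audible filter is easy to sanity-check
with first-order RC arithmetic. A minimal sketch in Python (the impedance
and capacitance figures are typical assumed values, not measurements of any
particular cable):

import math

# A line-level interconnect forms a first-order low-pass against the
# source's output impedance: f_c = 1 / (2 * pi * R * C).
R = 1_000        # source output impedance, ohms (typical solid-state gear)
C = 100e-12 * 2  # ~100 pF per meter of cable capacitance, 2 m run (assumed)

f_c = 1 / (2 * math.pi * R * C)
print(f"-3 dB point: {f_c / 1e3:.0f} kHz")  # ~796 kHz, far above the audio band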


#139 · Arny Krueger

"Scott" wrote in message


So perhaps you could tell us what sort of defects in the
burner at the plant would lead a CD to sound thinner on
certain CDPs and not others?


A lot of bad samples requiring significant defect hiding (not correction) by
the player.

Of course, to me the big
question is how does this "defect" go undetected at a
major CD-producing plant, and how many "defective" CDs
have entered the marketplace due to this one type of
defect that the plant missed when they knew the quality of
their product was being scrutinized?


This sort of thing has happened many times due to what many of us would call
carelessness.


#140 · bob

On Jul 15, 5:29 am, ScottW2 wrote:

While I agree that fast switching is the best way to quickly
determine if conscious audible differences in sound are detectable,
I don't oppose someone who wants to use extended listening
segments to gain some level of intimacy with the presentation of
the music and their emotional response to it, as compared to quick
switching, which allows sound-sample comparison.
I would consider allowing time to compare the
emotional response generated by the music in normal listening
(I'm referring to a study in Japan on the presence of ultrasonics
in music while monitoring brain activity:
http://jn.physiology.org/cgi/content/abstract/83/6/3548)
for which there is some indication that cues not consciously audible
in sound-sample comparison may play a role.


More recent research has found that the effects supposedly identified
in that study are in fact audible in conventional DBTs.

As Arny and others keep saying, there's nothing to stop you from
conducting a DBT over a longer period, and listening to music for as
long as you like before identifying X (or stating a preference, or
however you want to conduct the DBT). But everything we know about
hearing says you'll have a harder time hearing differences if you do
it that way.

bob



#141 · Harry Lavo

"Arny Krueger" wrote in message
...
"Harry Lavo" wrote in message


However, when you are asked to participate in an ABX test
evaluating short snippets of music, you initially don't have a clue
as to what you are listening for, nor a normal framework for listening.


This is an outrageous and absolutely ludicrous and insulting statement.

I guess we can discern from this statement that the writer believes that he
has audited *every* ABX test that has ever been done in the past 30 years,
so that he can make such a totally global statement.

This appears to be a false claim that in every ABX test ever done, someone
made sure that whatever devious steps were necessary to *hide* from the
listener what they were supposed to be listening for were taken. This claim
taxes reason and any idea of normal human behavior.

Of course a test coordinator will tell the listener what to listen for. Why
wouldn't he?

Of course, there have been a few occasions where a blind test was contrived
where people weren't told what to listen for. Even this makes some sense
when the listener proudly announces that he can easily hear the effect that
is involved, which is often the case.

At the old ABX web site I provided samples of music where the artifact to be
listened for was blatantly obvious, and then provided it in more subtle
forms for the purpose of the actual test.

When working with people in person, it has always been the practice to talk
about the artifact to be listened for, and try to demonstrate it in a
sighted evaluation, before commencing with the DBT.


Nonetheless, if the samples are of music that the listener is not familiar
with, and the tools/systems used in the test likewise are not familiar to
the listener, the listener is simply being asked to "listen to this music
being played by A, B, and X, and then tell me whether A or B is identical to
X". I'm talking here of a formal test situation using music as the source
and equipment as the variable under test.

Using ABX in the home might be better, but only if using a comparator and
one's own equipment. Listening to a recording using computerized snippets,
with no established frame of reference for what aspect of the music one
should be listening to, is simply a non-starter.



#142 · bob

On Jul 15, 11:31 am, "Harry Lavo" wrote:

If you are doing an ABX test, and have a bias (read: belief) that the thing
under test cannot or will not be audible, the human mind is perfectly
capable of making sure your answers are random. In a worst-case scenario,
you can even consciously choose to simply randomize your "different" / "no
difference" choices, thus assuring that the ABX test returns a "no
difference" result. This is the one bias / potential fraud that ABX simply
has no controls for.


Of course you can control for this: Use test subjects who think they
hear a difference. Problem solved.

Unless your problem is that you don't like the results.

bob

#143 · Sonnova

On Wed, 15 Jul 2009 12:00:14 -0700, Peter Wieck wrote:

On Jul 7, 10:40 pm, Sonnova wrote:

A lot of people get deluded this way, so you're not alone. But believe me, if
you were to switch between your old cables and the new ones in a
double-blind evaluation, you would not be able to tell one cable from the
other.


Don't leap to conclusions. It is entirely possible that his previous
cables have deteriorated physically such that there is some actual
interference with the sound. Especially with cheap (not necessarily
inexpensive) shielded cables the core wire can become corroded at the
connector(s). I had a set once that had enough copper-salts that it
rectified CB noise on occasion from passing trucks. Wild!


That's not exactly what I meant. I think that you understand that.

I will replace my cables every so often and clean the jacks for this
reason - but, again, with inexpensive but well-made cables, not
boutique stuff. "Every so often" is mostly based on the insulation
getting stiff, so every 15+ years, at least.


Absolutely. It pays to buy decently built cables as well. And don't forget
to use Stabilant 22A (Tweek) on all your connections. This stuff is
available through Quest Auto stores in the USA under the part number SL-5,
from NAPA car parts as CE1, and from Big A Car parts as 40-6. It is also
available from Motorola's Test Equipment and Shop Supplies Catalogue as Part
Number 11-80369E78. Order it by calling:

1-800-422-4210

All of these sources sell Stabilant 22 in 15 mL bottles, which is enough
to last anyone at least 10 years (Tweek was sold by Dayton-Wright back in
the day in 15 cc bottles for about $20 a bottle). Be prepared to pay about
$60 for 10X the amount one got from Dayton-Wright when stereo shops sold the
stuff as Tweek.

What I do is use a good connector cleaner like DeoxIT to clean both mating
surfaces, from phono cartridge terminals and leads all the way to the
speaker connections. Then I apply Stabilant 22A to the mating surfaces. I do
this every time I break or make a connection. Clean, airtight contacts will
ensure a good, reliable connection. I'm not saying that this procedure
improves the sound. With fresh connections it won't and can't. But it will
keep the connection from deteriorating over time due to corrosion and
airborne dirt such as home cooking grease and smoke, and will clean off any
corrosion and dirt present when applied to previously untreated connections.

Lest you think that Stabilant 22A is "mousemilk" the same way that
$4000/meter interconnects are a fraud, let me enlighten you. Stabilant 22A
has a Mil-Spec number, a US Air Force number and a NASA part number. It also
has GM, Ford, and Chrysler part numbers. It is used in every missile,
spacecraft and satellite launched. It is also used in the assembly of
practically every automobile made. Its advantages as a contact enhancer are
REAL.

NOTE: This is a physical/electrical decay issue - NOT an oxygen-free
copper rolled on the thighs of virgins on Walpurgis Night issue.


Oh. I agree.




#144 · Steven Sullivan

Scott wrote:
On Jul 14, 5:45 pm, Steven Sullivan wrote:
On Tue, Jul 14, 2009 at 12:24:02AM +0000, Steven Sullivan wrote:
Scott wrote:
On Jul 13, 4:15 am, Steven Sullivan wrote:
Scott wrote:


Nope. Straw man. And you should know better.
here are your words from this thread. "Audiophiles routinely claim
audible difference among classes of devices whose typical measured
performance does not predict audible difference -- CDPs and cables,
for example. (assuming level-matching for output devices, of course)."
You might want to check these things before crying strawman. (note for
moderator: I am leaving all quotes intact for the sake of showing that
these were Steve's words in context)


What does the word 'typical' mean to you, Scott? Does it mean 'all'?


Why do you ask? Did I say anything about "all" measurements?


Please. You attributed to me the idea that 'all CD players sound the same',
when I wrote 'typically CDPs should sound the same (assuming controlled
conditions)'. There is a significant difference between those two claims.

Are we clear now?


You do
realize that you used the word "typical" in reference to the
measurements and I used the word "standard" in its place. So the
meaningful question would be: are "typical" and "standard" interchangeable
in this particular case? I say yes.


"Standard' does not mean 'all' either.



What does the word "standard"
mean to you, Steve? Is it something radically different from typical?


No, but they're both indubitably different from 'all'... which is what
you claim I'm saying.


Steve, reread the quote. Reread your own words. Your use of the word
"typical" was in reference to the measurements, not the CDPs. I used the
word "standard" instead of "typical." I think that was an acceptable
change.



It's *YOU* who presumed that I meant 'all', Scott. YOU WROTE THIS, NOT I:

"You seem to have been claiming that standard measurements predict
that all CDPs sound the same."

http://groups.google.com/group/rec.a...n&dmode=source

I have *never* asserted or implied or insinuated that all CDPs sound the same.

(Btw, 'routinely' doesn't mean 'always', either.)


"Routinely" was used in reference to audiophile behavior (claims of
audible differences) not the sameness in sound of CDPs. You really
need to keep track of what words you used to describe what phenomenon.


And here's what I was replying to:

So what are you saying now, Steve? That you were not suggesting that
audiophiles were and always have been wrong in their reports about
audible differences between CDPs?


I have NEVER claimed or 'suggested' something as absurdly totalizing as that
about audiophiles and audio CDP differences.

So as regards 'keeping track of words used': right back atcha, Scott.

In both instances it is YOU who were wrongly attributing ridiculously
sweeping claims to me.

Would it be a forlorn hope that you'll admit error here, and
I dunno, even *apologize*?



--
-S
We have it in our power to begin the world over again - Thomas Paine


#145 · bob

On Jul 16, 5:05 am, ScottW2 wrote:
On Jul 15, 4:38 pm, bob wrote:

On Jul 15, 5:29 am, ScottW2 wrote:

While I agree that fast switching is the best way to quickly
determine if conscious audible differences in sound are detectable,
I don't oppose someone who wants to use extended listening
segments to gain some level of intimacy with the presentation of
the music and their emotional response to it, as compared to quick
switching, which allows sound-sample comparison.
I would consider allowing time to compare the
emotional response generated by the music in normal listening
(I'm referring to a study in Japan on the presence of ultrasonics
in music while monitoring brain activity:
http://jn.physiology.org/cgi/content/abstract/83/6/3548)
for which there is some indication that cues not consciously audible
in sound-sample comparison may play a role.


More recent research has found that the effects supposedly identified
in that study are in fact audible in conventional DBTs.


22 kHz and above is now considered audible? Please provide a
reference.

ScottW


I didn't say 22 kHz was audible. I said that researchers have
subsequently been able to get positive DBTs in cases where the
original Japanese researchers said they could not. Which suggests
strongly that there were in fact differences in the audible range,
quite possibly artifacts of IM distortion. Here's the paper:

http://www.aes.org/e-lib/browse.cfm?elib=10005

bob



#146 · Doug McDonald

ScottW2 wrote:


22 kHz and above is now considered audible?


Absolutely, for the young. I unequivocally could hear to 23 kHz when
I was in college.

Now whether I could hear the difference between recorded classical
music, recorded flat to say 30 kHz, and then brick-wall filtered
to say 19, 21, and 23 kHz, is a different matter. I cannot now
say whether I could have heard the difference. I suspect not.

I can now hear only to 14.5 kHz (at about the same level at which I could
then hear to 23 kHz). I have tried comparing brick-wall filters at
12, 13, and 14 kHz, and cannot tell the difference
between 13 and 14 kHz. I can tell 12 from 13 for a few notes
of the violin, flute, and piccolo.

Doug McDonald
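
Doug's brick-wall comparisons are straightforward to reproduce with an
idealized FFT zeroing filter. A minimal sketch in Python (the input file
name is hypothetical; a mono 16-bit WAV is assumed):

import numpy as np
from scipy.io import wavfile

def brickwall(path_in: str, path_out: str, cutoff_hz: float) -> None:
    """Zero every FFT bin above cutoff_hz - an idealized brick-wall
    low-pass for self-testing one's own hearing limit."""
    rate, x = wavfile.read(path_in)
    X = np.fft.rfft(x.astype(np.float64))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / rate)
    X[freqs > cutoff_hz] = 0.0
    y = np.fft.irfft(X, n=len(x))
    wavfile.write(path_out, rate, y.astype(np.int16))

for f in (12_000, 13_000, 14_000):  # the cutoffs compared above
    brickwall("source.wav", f"filtered_{f}.wav", f)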

#147 · Harry Lavo

"mcdonaldREMOVE TO ACTUALLY REACH wrote in message
...
ScottW2 wrote:


22 kHz and above is now considered audible?


Absolutely, for the young. I unequivocally could hear to 23 kHz when
I was in college.

Now whether I could hear the difference between recorded classical
music, recorded flat to say 30 kHz, and then brick-wall filtered
to say 19, 21, and 23 kHz, is a different matter. I cannot now
say whether I could have heard the difference. I suspect not.

I can now hear only to 14.5 kHz (at about the same level at which I could
then hear to 23 kHz). I have tried comparing brick-wall filters at
12, 13, and 14 kHz, and cannot tell the difference
between 13 and 14 kHz. I can tell 12 from 13 for a few notes
of the violin, flute, and piccolo.

Doug McDonald


Well, that figures! When CD first came out, we were all (or at least those
of us who were born then) a quarter-century younger than we are now. And the
kids today, raised on MP3s, wouldn't know a brick wall if it fell on them.



#148 · Harry Lavo

"bob" wrote in message
...
On Jul 16, 5:05 am, ScottW2 wrote:
On Jul 15, 4:38 pm, bob wrote:





On Jul 15, 5:29 am, ScottW2 wrote:


While I agree that fast switching is the best way to quickly
determine if conscious audible differences in sound are detectable,
I don't oppose someone who wants to use extended listening
segments to gain some level of intimacy with the presentation of
the music and their emotional response to it, as compared to quick
switching, which allows sound-sample comparison.
I would consider allowing time to compare the
emotional response generated by the music in normal listening
(I'm referring to a study in Japan on the presence of ultrasonics
in music while monitoring brain activity:
http://jn.physiology.org/cgi/content/abstract/83/6/3548)
for which there is some indication that cues not consciously audible
in sound-sample comparison may play a role.


More recent research has found that the effects supposedly identified
in that study are in fact audible in conventional DBTs.


22 kHz and above is now considered audible? Please provide a
reference.

ScottW


I didn't say 22 kHz was audible. I said that researchers have
subsequently been able to get positive DBTs in cases where the
original Japanese researchers said they could not. Which suggests
strongly that there were in fact differences in the audible range,
quite possibly artifacts of IM distortion. Here's the paper:

http://www.aes.org/e-lib/browse.cfm?elib=10005


I haven't read the paper as of yet, but I assume you are referring to the
Oohashi experiments. Whether that is whom the authors were referring to or
not I do not know, but I do know that Oohashi and his group used SEPARATE
speakers for the ultrasonic portion of the stimuli... so from the beginning
I doubted that the intermodulation theory stood up as to why/how they
detected the ultrasonics. This test seems (at least from the summary) to
indicate that Oohashi's design approach was correct and his conclusions
appropriate... that something was at work in his test other than
intermodulation.

I can't tell from the precis what source material the new study
used... you will recall that Oohashi used gamelan music specifically
because it was rich in ultrasonic content, and he also used especially
broadband equipment to make sure that response was flat out to 100 kHz. I
would hope the follow-up research was done as diligently.
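
The intermodulation mechanism at issue is easy to illustrate numerically:
pass two ultrasonic tones through even a mildly nonlinear stage and a
difference tone lands squarely in the audible band. A minimal sketch in
Python (tone frequencies and the nonlinearity coefficient are illustrative
assumptions, not values from either study):

import numpy as np

rate = 96_000
t = np.arange(rate) / rate                        # 1 second at 96 kHz
x = np.sin(2 * np.pi * 24_000 * t) + np.sin(2 * np.pi * 26_000 * t)
y = x + 0.05 * x ** 2            # mildly nonlinear playback stage (assumed)

spectrum = np.abs(np.fft.rfft(y)) / len(y)
freqs = np.fft.rfftfreq(len(y), d=1.0 / rate)
band = (freqs > 100) & (freqs < 20_000)           # the audible band
peak = freqs[band][np.argmax(spectrum[band])]
print(f"strongest in-band product: {peak:.0f} Hz")  # the 2 kHz difference tone

Driving the ultrasonic content from separate speakers, as Oohashi did, is
precisely what prevents this kind of product from forming in a single
driver.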


#149 · Doug McDonald

ScottW2 wrote:
On Jul 16, 4:00 pm, "Doug McDonald" wrote:
ScottW2 wrote:

22 kHz and above is now considered audible?

Absolutely, for the young. I unequivocally could hear to 23 kHz when
I was in college.


I had a housemate in college who made a similar claim. He checked
out a signal generator from the lab to bring home and prove it to me.


The speakers I used were the excellent AR3a.

Doug McDonald

#150 · Scott

On Jul 14, 4:43 pm, ScottW2 wrote:
On Jul 14, 8:13 am, Scott wrote:





On Jul 14, 3:05 am, ScottW2 wrote:


On Jul 13, 5:24 pm, Scott wrote:


On Jul 13, 4:20 pm, ScottW2 wrote:


On Jul 13, 4:16 am, Scott wrote:


In an exchange of emails Dennis told me that this particular sonic
defect was CDP dependent. It was in those emails that he gave details
of level matching, time synching and DB protocols.


This sounds like a test of CDPs' ability to handle defective CDs with
high read error rates.


Again with the defects assertion. What was defective?


One of the laser burners. You said that 2 of 3 systems worked fine.


No, that isn't what I said at all. In fact I said nothing on the
matter, but this is what Dennis Drake said:
"Upon further investigation, it turned out that the plant had three
different laser beam recorders and that one of them sounded different
than the other two."


Which is a clear sign of a defect.


It is a clear sign of a difference.


All three were *different* but none of them were ever said to be
"defective."


Allow me to inform you, one of the systems was defective.


To use your words, that is pure unsubstantiated speculation.


In fact, for all we know thousands of titles were cut on
the one burner that produced colored-sounding CDs on certain players.
Lesser in quality does not equate to defective.


Not always, but in this case I think it fair to conclude the unit was
most probably defective, either in design or construction, which doesn't
really matter.


Actually, IMO it matters quite a bit. If these things were "defective"
by faults in the design and were being used in the field in good faith
by manufacturers of commercial CDs, that would mean there could be a
good many such CDs out there. Seems to me this would matter a good
deal to those looking for better sound.






For all we know that
burner was operating exactly up to its full capacity and was
considered at that time, "by typical measurements," to be working
properly.


Pure unsubstantiated speculation. In fact, what little evidence
there is in this story points to the contrary.



I don't see it pointing one way or the other.





That is they produced CDs that sounded the same on all CDPs.


We don't even know that. They sounded the same on all the CDPs that
Dennis Drake used for his later comparisons.


Good enough in this case. *Two burners worked with all the CDPs
available
and one did not. *That one was most probably defective.


But they all worked with other CDPs. Why blame the burner only?
Wouldn't it be fair to say the CDPs that could play the inferior CD
test pressing are better than the ones that failed to play it properly?



*I'm pretty sure Dennis
did not test every make and model of CDP past and present to that day.
Nor do I suspect that he even tested a substantial sample.


The 3rd produced CDs that sounded fine on some players and not so fine
on others. *That tells me that unit was defective in that it produced
marginal
CDs that would not play without audible degradation on some CDPs.


In the same report he talks about the colorations of all but one A/D
converter. Does that mean that all those other widely used A/D
converters were also "defective"?


*Different issues. *CD burners should be capable of burning
and reading bit perfect. *A simple thing to test.
No measurable degradation.



Here is the kicker. All three CD test pressings were bit perfect.


But I would say that DACs should also be capable of producing
bit identical output from the same analog source if properly sync'd.
A more difficult test which given the scarcity of detail will most
likely
remain unknown.


And yet they failed to produce the same sound in Dennis Drake's
tests.




Are you suggesting that all these CDs
are either universally transparent or defective?


Bit copying should be transparent.



But in this case bit-perfect CD test pressings apparently were not
equally well read by various CDPs. So we have a case of bit-perfect
being less than transparent.
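
Verifying "bit perfect" is the easy part, by the way. A minimal sketch,
assuming the audio has been extracted from each pressing to headerless PCM
files so container metadata can't spoil the comparison (the filenames are
hypothetical):

import hashlib

def pcm_digest(path):
    """SHA-256 over the raw sample data; identical digests mean identical bits."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical rips of the same title cut on two of the plant's burners.
same = pcm_digest("pressing_burner1.pcm") == pcm_digest("pressing_burner3.pcm")
print("bit perfect match" if same else "rips differ")

The interesting part of Drake's account is that identical bits evidently
did not guarantee identical sound on every player.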




D/A and then A/D can be transparent as well today if done right.
Drake's findings of 15 years ago aren't relevant to current technology
IMO.


Which would matter I suppose if one were to exclude all CDs produced
prior to the advent of the "current technology." I do buy CDs second
hand that were made earlier than that. Don't get me wrong. I'm not
sitting around worrying that my older CDs are somehow less than I
thought they were before. It's just unfortunate if many of those CDs
are suffering from this sort of degradation.







How were the CDs
defective? An error in the pressing? How does that happen? How does
this play "better" on one player and not another?


Let's see...it could be all optics are not created equal, or all error
correction
is not created equal.


Inequities are no surprise. That's what the crazy subjectivists have
been claiming from the get go.


*But this isn't condemnation of a format. It's critique of execution
using
most probably substandard discs with questionable burn quality.


Correct, it is not condemnation of the format. No one I know on this
thread is looking to condemn the format.


Bit perfect digital reproduction is routine today.


Apparently the problem extended beyond that.



It is also something that some people
claim has never been a concern in CD playback. Inequities are not
always divided into defective and nondefective.


I think your statement is lacking appropriate context and the
strawman
of this argument is obvious. The fact that someone at some time made a
poor-performing CDP is really not a worthy condemnation of the
technology.



I think therein lies the strawman. No one here is trying to condemn
the format. All I have done is point out that both in CD production
and in CDPs we can get audible performance that is less than
"perfect." That is hardly condemnation.


I've got an AMC CD-9 that can't read CDRs just a few years old while
my DVD player reads them fine. It also has about 2 dB of channel imbalance.
IMO, it's defective. I don't condemn all CDPs on the performance of
that
one unit.


Given that no one else is doing so either I fail to see your point.


*I wouldn't even condemn all AMC CD-9s on that basis.


Just yours?





Dennis indicated
that on some CDPs the so-called "defective" discs played perfectly.


Those players' optics could handle deficient CDs or they had better
error correction.


IOW they were better-sounding CDPs with certain CDs.


*With certain substandard CDs.



Of course. That is the point. And who knows which CDs are
"substandard" in this way.



And who knows how
many of those discs were released into the commercial market?


*Not nearly as many as some substandard vinyl pressings by labels
like Atlantic.


Aside from the irrelevance of that assertion I think it is an apples
and oranges comparison.



Do we
have any reason to think that Dennis Drake's rigor in pursuit of sound
quality was the norm in commercial CD production? I'll bet it was and
is very much the exception.


*A norm is the exception? * Please clarify.


I asked the question if that level of rigor was the norm and then
offered the opinion that it was and still is an exception.



How can a defective disc ever play perfectly?


You've never heard a scratched CD play perfectly?


Have you ever heard a scratched CD sound thin, as opposed to just
skipping or stopping? Not the same thing here. Dennis described
inferior, colored sound, not skips or stops.


*You asked how can a defective disc play perfectly.
Now you want to know how a disc can sound thin.



The so-called defective CD test pressing sounded thin to Dennis Drake.
It didn't skip or fail to play. That is what I hear when a scratch
causes an audible defect in the playback of a CD. You brought up
scratched CDs, not me.



*(I'm not really sure
what "thin" sounding is, but) *I would speculate that if he took some
digital samples of the output of the players with bad discs he might
find lots
of errors that the player was attempting to correct.


Interestingly enough he did make CDR copies of the "defective" disc.
They were bit perfect, and the thin sound was fixed in the CDR when
played back on the same CDPs that sounded inferior with the CD test
pressing.

Not enough that it couldn't track. If the player's output was bit
correct,
then I have no explanation for the different sound with some discs.


Nor do I. But the CD test pressing had all the bits.





*IME, CDs have to be
rather badly damaged to not play perfectly on a decent player.


IYE what sort of damage leads to the sound that Dennis Drake observed?


The sort that can only come from a bad laser burner.



Oh, you have experienced this before? Why didn't you say so?




  #151   Report Post  
Posted to rec.audio.high-end
khughes@nospam.net khughes@nospam.net is offline
external usenet poster
 
Posts: 38
Default You Tell 'Em, Arnie!

Harry Lavo wrote:
wrote in message
...
Harry Lavo wrote:
wrote in message
...
Harry Lavo wrote:
wrote in message

snip

are in error...slightly in favor of a "null" result.

Again, I'm not questioning population size. But you can sample a
thousand people and the resulting statistics are worthless if the test
is insensitive to the parameter of interest.


And I'm talking about perceiving differences in audio reproduction equipment
when reproducing music, as evaluated using ABX. I am DIRECTLY measuring
real differences in the base sample....differences perceived statistically
as different between the variable under test and its control, while
reproducing music. How much more "on parameter" can you get? It is "on
parameter", it is just not measured directly (a good thing....see below).


Harry, as I posited in the test case, the parameter of interest was
stipulated as having been detected in ABX, but of insufficient magnitude
as to adversely impact "musicality" or however you want to define it
relative to preference testing. Thus if you're evaluating "musicality",
the sample size is totally irrelevant because the artifact in question
doesn't impact "musicality".

snip

The variable under test has already been shown to be detectable in ABX -
that was stipulated in the test case. What that result shows is that
your preference test was insensitive to the difference that ABX testing
identified.


Again, you show a lack of understanding of what I proposed.


No, I'm pointing out that you do not appear to be using any accepted
definition of "validation". See below.

The first step
is to find an equipment variable that DOES expose a difference in monadic
appreciation....


You mean just like the speaker example given below?

THEN undertake ABX testing to see if it delivers the same
result. Not the other way around. Your failure to understand the difference
is one of the reasons I made the comment above.


I understand the difference Harry. YOU are proposing, basically, a
fishing expedition to try and find one example where monadic testing
identifies a preference difference, and ABX doesn't "deliver the same
result". First, ABX *can't* deliver the "same" result, so how about we
start with using accurate descriptive verbiage, and say where ABX
tests do not detect a "difference"?

But your test fails the basic precepts of validation. That's the point
of the test case example. You've already admitted that ABX can
distinguish low level artifacts (at a level low enough to require
specific participant training in the artifact in question). So, you
have to accept that either *any* artifact that can be distinguished by
ABX, at a statistically significant level, will be distinguishable in a
monadic preference test, OR you have to accept that ABX has, at least at
the JND side of the spectrum, greater range and/or precision than monadic
testing.

Thus, in the first case, you have no problem with "finding some
variable", you pick any artifact identified by ABX, the lower the
detectability the better as far as being an effective challenge is
concerned, and perform your monadic test.

If you accept the latter case, then you have to conclude that you cannot
use monadic testing to validate ABX because the range of the "reference"
is insufficient to validate ABX at the lower end of its detectability
range. That being the case, in order to even use monadic testing in a
"comparability" study, you run into the problem of identifying the
boundary conditions for detectability of monadic testing.
Methods/procedures/processes are valid only within their boundary
conditions, none have infinite range or precision, and you cannot
*validate* unless you know those boundaries, or unless you stipulate
boundaries and validate within them.

snip

I just spoke above of the criteria, as I have in the past. I am looking to
find a difference on a variable that "objectivists" believe not to exist.
Only if and when we find it can it then serve as a basis for the validation.


Exactly. So let's do thousands of tests on all manner of differences
that physics, engineering, and psychoacoustics say should not be audible,
then ABX all of them in the hopes that one of them cannot be detected?
That's the problem Harry, that has *nothing* to do with validation of
ABX as a method, and everything to do with trying to *invalidate* the
method. As with trying to "prove" a negative, this approach is almost
doomed to be an endless loop.

Based on what I've seen ABX has been validated, at least to an extent,
for its intended use. It has been challenged for detection of known low
level artifacts, artifacts that can be objectively measured with great
precision. Methods of presentation, and participant training have been
challenged, and optimized at least to an extent. ABX has not been
validated, nor was it intended, for detection of *preference* at any
level.

OTOH, your monadic test has not been validated, to any extent, for
detection of low level audible differences. It has been validated for
detection of relatively gross organoleptic differences that affect
preference (if you have data otherwise, please share it). Furthermore,
to my knowledge, it has not routinely (if ever) been employed to
*detect* differences, but only to quantify the effects of known
differences on preference. In such cases, the data obtained do not lend
themselves to evaluation of the method relative to threshold response
(relative to difference detection, not preference *for* a particular
difference) or precision of detection.

snip
....how many people succeed, how many fail, how obvious does the
difference seem to show up. Only if ABX failed to detect the differences
in
any appreciable way would ABX be judged a failure.

Difference, singular. A and B are distinguishable or they are not. It's
a binary result. I don't know how "...in any appreciable way..." could
even apply to ABX.


The outlier argument.


No, nothing to do with outliers. The term "...in any appreciable
way..." is merely misapplied to ABX. "Difference" or "difference not
demonstrated": those are the only "ways" an ABX result can go.

If thirty people do the test, and one or two succeed
but others do not, is it significant or not?


Why are you using such terminology as "succeed"? In a difference test, if
one or two of thirty get response A, while the rest get response B, of
course it's significant - based on the 28 or 29.

Or if no one person's choices
prove significant, but the overall sample when lumped together does. Small
sample difference testing is not as simple as it is often made out to be.


Especially when you fail to limit the test variables...
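
To put numbers on "is it significant": the arithmetic is just a one-sided
binomial test against guessing. A minimal Python sketch (my own
illustration - the trial counts are made up, not from any actual test):

from math import comb

def p_value(correct, trials):
    """Chance of scoring at least this well by coin-flipping (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(p_value(12, 16))    # one listener, 12/16 correct: ~0.038, significant
print(p_value(10, 16))    # 10/16 correct: ~0.227, not significant alone
print(p_value(300, 480))  # 30 such listeners pooled (300/480): far below 0.05

So yes, a pooled panel can reach significance even when no individual
listener does - which is exactly why the lumping question has to be
decided before the test is run, not after.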

Because in your "test case" you've got it bass-ackward, as I've already
pointed out.


Yes, to what *You* propose to do. It does, however, have everything to
do with actual validation of the method.


.....especially as you keep insisting that the monadic test is less
sensitive than ABX.

For what it's designed for yes. And that is NOT for evaluating
preference.


It is less sensitive for the purpose it is designed for? Can you restate or
explain what you mean, please?


The *it* in "it's designed for" is ABX. The monadic test is less
sensitive for detection of low level differences than is ABX, a test
designed for such detection, and not for evaluating preferences.

snip

Let's focus on differences that "do"
affect perception of the musical reproduction, although very subtly.


How subtly? That's the point you refuse to address, Harry. It's an
endlessly iterative process as you've defined the terms. Monadic test
finds difference, ABX confirms, next iteration. The process only ends
*IF* you find some example where ABX has "failed". Pray tell how this is
not so? Give me 30 minutes, and I can generate enough test scenarios to
cost many tens of millions of dollars, and take decades to do, at the
end of which, if no ABX "failures" are observed, we would be exactly
where we are today.

But let's say after 25 tests that ABX "passes", you then find one
"failure". You propose to "validate" ABX using that result? But that
result would be, in fact, an outlier that would make the monadic test
suspect for sampling or methodological errors. You would have no
statistical basis for accepting the monadic result as representative,
much less definitive. And naturally, the more "passing" iterations you
have, the less significant that first "failure" will be.

No one does validation that way (well, in thirty years of doing
validation studies, for dozens of companies, and thousands of tests,
I've never seen it done that way). No one can afford to do validation
that way. You define the boundaries, and validate within them. Or, you
take a case that does not appear to comport with a validated methods'
response, and you challenge the method. The latter is what you seem to
want to do, but sans *observed* data that, on careful examination (yes
the usual methodological constraints), contravenes extant ABX results.

You don't need to "perform a test" such as you propose, you just find
one result, for one response, from one audiophile, obtained with a
modicum of test rigor, that contravenes ABX results, or expectations
based on accepted electrical/physical/psychoacoustic principles, then
ABX that. Lacking that data, you have no objective reason to assume ABX
is suspect (yes, there are numerous anecdotes, but the easiest approach
is to reproduce those anecdotal results under controlled conditions -
not plunge into a massive monadic testing regimen).

Keith Hughes
  #152   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"ScottW2" wrote in message
...
On Jul 15, 6:40 am, "Arny Krueger" wrote:
"ScottW2" wrote in message



But I would say that DACs should also be capable of
producing bit identical output from the same analog source if
properly sync'd. A more difficult test which given the
scarcity of detail will most likely remain unknown.


I meant to say ADC.

It seems to me that an ADC/DAC pair that is capable of a bit or two more
resolution than the recording used for the test should be able to pass it
with utter perfection.

IOW, an ADC/DAC pair with 108 or more dB dynamic range should be able to
pass a 16-bit recording extended to 24 bits by adding zeros.
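
To make the zero-extension concrete, a minimal numpy sketch (my own
illustration; the sine test signal is arbitrary):

import numpy as np

# A hypothetical 16-bit test signal: one cycle of a near-full-scale sine.
pcm16 = (32767 * np.sin(2 * np.pi * np.arange(48) / 48)).astype(np.int16)

# Extend to 24 bits by appending zero LSBs - the values are unchanged,
# they just sit 8 bits higher in a wider word.
pcm24 = pcm16.astype(np.int32) << 8

# Pass/fail criterion: dropping the added zeros must recover the input exactly.
assert np.array_equal(pcm24 >> 8, pcm16.astype(np.int32))
print("24-bit extension is bit exact")

Any converter loop with real headroom below the 16-bit noise floor should
carry that signal without audible alteration.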


How about two ADCs recording from a common analogue source?


What about it?

The usual result is that the outputs of the two ADCs are indistinguishable
from each other, and indistinguishable from the analog source when converted
back to analog using a good but not uncommon DAC.
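
The comparison itself is easy to script. A sketch of the level-matched
null measurement I mean, with synthetic noise standing in for two real
captures (the signal, noise level, and gain offset are all invented for
illustration):

import numpy as np

def null_depth_db(a, b):
    """Residual of (a - gain*b) relative to a, in dB, after a least-squares
    level match. Assumes the two captures are already sample-aligned."""
    gain = np.dot(a, b) / np.dot(b, b)
    residual = a - gain * b
    return 10 * np.log10(np.mean(residual ** 2) / np.mean(a ** 2))

rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 1000 * np.arange(96_000) / 96_000)  # 1 kHz at 96k
cap1 = sig + rng.normal(0, 1e-5, sig.size)          # ADC #1: tiny noise
cap2 = 0.98 * sig + rng.normal(0, 1e-5, sig.size)   # ADC #2: noise + level offset

print(f"null depth: {null_depth_db(cap1, cap2):.1f} dB")  # around -94 dB

A null that deep is far below audibility on program material.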

  #153   Report Post  
Posted to rec.audio.high-end
tim@skene.org tim@skene.org is offline
external usenet poster
 
Posts: 1
Default You Tell 'Em, Arnie!

On Jul 7, 12:18*am, Sonnova wrote:
On Mon, 6 Jul 2009 03:43:11 -0700, Rockinghorse Winner wrote
(in article ):

Thanks for the straight forward analyses and debunking the huge mass of
bull**** in high-end. The rag I subscribe to is one ecstatic review after
another, thus rendering any basis of comparison virtually nil. Double blind
testing is the only way to go, IMO, but who's going to fund it? The audio
press? Not if the sales dept has anything to say about it. The mfg's? What
are you smoking? Anyway, it is amusing reading the reviews of speaker
cables: completely opened up the soundstage and revealed levels of detail
I'd never heard before. Oh, really, you don't say? LOL!


*R* *H*


Yeah, I don't understand why these rags still foster the cable "myth". It
should be common knowledge by now that cables and interconnects all sound the
same. Yet I just read an article that suggested that USB cables (used in
computer audio playback) have a "sound" and all are different! *It's bad
enough that these rags perpetuate the myth that cables carrying analog audio
can have some effect on the sound, but USB cables carrying ones and zeros?
Gimme a break!


No, you're wrong, it explains why right here:

http://www.wireworldcable.com/catego...sb_cables.html

Maybe if I used one with my color printer, I would get brighter colors
and better detail...

Tim


  #154   Report Post  
Posted to rec.audio.high-end
Arny Krueger Arny Krueger is offline
external usenet poster
 
Posts: 17,262
Default You Tell 'Em, Arnie!

"Harry Lavo" wrote in message
...
"Arny Krueger" wrote in message
...


When working with people in person, it has always been the practice to
talk about the artifact to be listened for, and try to demonstrate it in
a
sighted evaluation, before commencing with the DBT.


Nonetheless, if the samples are of music that the listener is not familiar
with, and the tools/systems used in the test likewise are not familiar to
the listener, the listener is simply being asked to "listen to this music
being played by A, B, and X, and then tell me whether A or B is identical
to
X". I'm talking here of a formal test situation using music as the
source,
and equipment as the variable under test.


A formal ABX test *must* involve listener training. Therefore the listener
would be familiar with all aspects of the test before starting the
listening that would be scored.


  #155   Report Post  
Posted to rec.audio.high-end
JWV Miller JWV Miller is offline
external usenet poster
 
Posts: 12
Default You Tell 'Em, Arnie!

On Jul 18, 6:40*pm, wrote:
snip

same. Yet I just read an article that suggested that USB cables (used in
computer audio playback) have a "sound" and all are different! *It's bad
enough that these rags perpetuate the myth that cables carrying analog audio
can have some effect on the sound, but USB cables carrying ones and zeros?
Gimme a break!


No, you're wrong, it explains why right here:

http://www.wireworldcable.com/catego...sb_cables.html


Sorry, but the claims in the link make absolutely no sense. They are
simply advertising blather to sell snake-oil products. The audio
information is not simply streamed directly from the USB interface to
the DAC. The serial data must be converted into 16-bit (or larger)
words and buffered prior to conversion, so jitter would have no effect
on the resulting audio signal.
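
A toy model of the buffering makes the point (pure illustration, not any
particular USB audio class or chipset; the packet timings are invented):

from collections import deque

# Packets arrive with jittery timing; samples are buffered in a FIFO and
# clocked out at fixed intervals. As long as the FIFO never runs dry, the
# output instants depend only on the DAC clock, not on arrival times.
fifo = deque()
packets = {0.0: [1, 2, 3, 4], 1.7: [5, 6], 2.4: [7, 8]}  # arrival ms -> samples

dac_period = 0.5   # fixed DAC clock period in ms (toy rate)
t, out = 0.0, []
while len(out) < 8:
    for arrival in [a for a in packets if a <= t]:
        fifo.extend(packets.pop(arrival))          # buffered whenever they arrive
    if fifo:
        out.append((round(t, 1), fifo.popleft()))  # emitted on the DAC clock only
    t += dac_period

print(out)  # emission times are exact multiples of dac_period despite the jitter

The cable could only matter if it corrupted the bits outright, and USB
packets carry error detection for that.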




  #156   Report Post  
Posted to rec.audio.high-end
Scott[_6_] Scott[_6_] is offline
external usenet poster
 
Posts: 642
Default You Tell 'Em, Arnie!

[Moderator's note: This subthread has become a circular discussion
between 2 people and of little interest to the group, so it is ended.
-deb ]

On Jul 18, 11:47*am, ScottW2 wrote:
On Jul 18, 3:04*am, Scott wrote:





On Jul 14, 4:43*pm, ScottW2 wrote:


On Jul 14, 8:13*am, Scott wrote:


On Jul 14, 3:05*am, ScottW2 wrote:


On Jul 13, 5:24*pm, Scott wrote:


On Jul 13, 4:20 pm, ScottW2 wrote:


On Jul 13, 4:16 am, Scott wrote:


In an exchange of emails Dennis told me that this particular sonic
defect was CDP dependent. It was in those emails that he gave details
of level matching, time synching and DB protocols.


This sounds like a test of CDPs' ability to handle defective CDs with
high read error rates.


Again with the defects assertion. What was defective?


One of the laser burners. *You said that 2 of 3 systems worked fine.


No, that isn't what I said at all. In fact I said nothing on the
matter, but this is what Dennis Drake said:
"Upon further investigation, it turned out that the plant had three
different laser
beam recorders and that one of them sounded different than the other
two."


*Which is a clear sign of a defect.


It is a clear sign of a difference.


*Defects are different too.


Which is one subset of the larger set of possibilities. In the absence
of more information I am not willing to take that leap of faith with
you that it is only the one subset of possibilities in play here.




All three were *different* but none of them were ever said to be
"defective."


*Allow me to inform you, one of the systems was defective.


To use your words, that is pure unsubstantiated speculation.


based upon the preponderance of the evidence.


But there is no other evidence to preponder, so you are excluding real
possibilities based on nothing.








In fact for all we know thousands of titles were cut on
the one burner that produced colored-sounding CDs on certain players.
Lesser in quality does not equate to defective.


Not always, but in this case I think it fair to conclude the unit was
most probably defective, either in design or construction. Which
doesn't
really matter.


Actually IMO it matters quite a bit. If these things were "defective"
by faults in the design and were being used in the field in good faith
by manufacturers of commercial CDs, that would mean there could be a
good many such CDs out there. Seems to me this would matter a good
deal to those looking for better sound.


There are lots of crappy CDs and vinyl pressings around.
This problem may only be apparent on some players.
Whether it's a player deficiency or a burner deficiency really
doesn't matter when discussing the format or the technology.


As someone who is interested in the pursuit of better sound I
disagree. These things do matter when discussing the format with the
idea in mind of actually getting better sound in practice.


Deficient units or designs is no reflection on the technology
IMO.


No, they are simply a real-world hazard that we as audiophiles face in
our pursuit of better sound. Seems to me that such hazards are worth
knowing about.

We don't condemn vinyl based on the performance
of ceramic carts with steel styli.


Of course not. Nor would I condemn a CD for not playing well on a
turntable! Ceramic cartridges with steel styli were made for a
different medium than vinyl.

That is they produced CDs that sounded the same on all CDPs.


We don't even know that. They sounded the same on all the CDPs that
Dennis Drake used for his later comparisons.


Good enough in this case. *Two burners worked with all the CDPs
available
and one did not. *That one was most probably defective.


But they all worked with other CDPs. Why blame the burner only?


*Because it is the one constant variable in the failure.


No. If it only performs poorly on certain CDPs there are two
constants: the inferior test pressing and the inferior CDP.

*Obviously the "bad" CD players don't read the "bad" discs
as well as some CD players. But if the bad burner was equal
to the other burner... then all the discs would play on all the
CDPs. *Do we have a combination of marginal performance
stacking up to result in a failure? *Yes, we obviously do.


Then we finally agree. It was a case of an inferior pressing *and* an
inferior CDP. Where we disagree is on the matter of such things being
fairly categorized as "defective."

If any one of the units is operating out of spec, my guess is
it's the burner. But we have no data to prove it.


Well, we do have some data that shows a difference in the performance
of the CDPs that made the sound of the one particular test pressing
sound "thin" from the ones that did not.


Wouldn't it be fair to say the CDPs that could play the inferior CD
test pressing are better than the ones that failed to play it properly?


Sure. We've all seen that on early players with CDRs and
first-generation car players which suffered from vibration and shock.
Simply not relevant to current technology and completely irrelevant to
the performance of the technology. Those units are "outliers" today.
Basic mainstream, low-cost commodity players don't typically exhibit
any such problems.


But that is what was said of those players when *they* were
the "current" technology. The same was said of early transistor amps
back in the 60's when they were the current technology. Heck, the same
was said of early CDs before they regularly used dither! I think we
may have a case of objectivists crying wolf due to their expectations
based on a reliance on specs and measurements that simply aren't
telling them the whole story.




The 3rd produced CDs that sounded fine on some players and not so fine
on others. *That tells me that unit was defective in that it produced
marginal
CDs that would not play without audible degradation on some CDPs.


In the same report he talks about the colorations of all but one A/D
converter. Does that mean that all those other widely used A/D
converters were also "defective"?


*Different issues. *CD burners should be capable of burning
and reading bit perfect. *A simple thing to test.
No measurable degradation.


Here is the kicker. All three CD test pressings were bit perfect.


*Even when read on the "bad" CDP?


Yup.




But I would say that DACs should also be capable of producing
bit identical output from the same analog source if properly sync'd.
A more difficult test which given the scarcity of detail will most
likely
remain unknown.


And yet they failed to produce the same sound in Dennis Drake's
tests.


*DAC technology has clearly advanced over that time frame.
We know there are many variables influencing the quality of the
recording. *IMO, the choice of DACs today is a very minor influence
over the outcome.


That's another case of the same thing being said of the technology
back before those "advances." It seems we can find a remarkably
consistent pattern of the last generation's "transparent" technology
becoming "defective" by the next generation's standards.




Are you suggesting that all these CDs
are either universally transparent or defective?


Bit copying should be transparent.


But in this case bit-perfect CD test pressings apparently were not
equally well read by various CDPs.


Bit perfect as read by what?


Don't know. We would have to ask Dennis. But that is what he said.


So we have a case of bit-perfect
being less than transparent.


*I would be very surprised if this were true as read
by the "bad" CDP.


Well there you go. Surprise!!! It seems like the world of CD
mastering, manufacturing and playback has been full of surprises
despite the so-called predictions of typical measurements throughout
the years.




D/A and then A/D can be transparent as well today if done right.
Drake's findings of 15 years ago aren't relevant to current technology
IMO.


Which would matter I suppose if one were to exclude all CDs produced
prior to the advent of the "current technology."


As one must always do when evaluating SOTA.


As a real-world audiophile interested in getting better sound from my
favorite recordings I don't limit my evaluations to SOTA. How many
recordings are SOTA to begin with? A handful of mostly musically
trivial recordings at best. For my purposes evaluations have to take
in the practical everyday execution too.


IMO, even today when the quality of vinyl is much improved over the
70's and 80's (and we pay for it), the probability of getting a poor
piece of vinyl due to the manufacturing process is much greater than a
poor CD due to its manufacturing process. There remains a huge variance
in manufacturing quality between the premier record labels like
Classic Records, Analogue Productions, Speakers Corner and
lesser producers like Sundazed and Rhino. As vinyl "surges"
the number of poor producers and poor products has surged as well.


I would say that is not the case with Rhino these days. But what good
does that do any of us who bought CDPs and CDs that suffer from
audible deficiencies? I'd rather focus on discovering real-world
problems with certain CDPs and CDs than just ignore them because there
are unrelated issues with other formats. That makes no sense to me.
