#1
FLAC Gives Inconsistent Results (Intel vs. AMD)
I have downloaded v1.1.2a and installed it on my laptop and desktop.
Then I took the same .wav file and encoded it with the same parameters, i.e. flac "ref.wav" --best -T "artist=%a" -T "title=%t" -T "album=%g" -T "date=%y" -T "tracknumber=%n" -T "genre=%m" -T "comment=EAC 0.95beta2 / FLAC 1.1.2 --best". The result on my laptop was different from the result on my desktop: the size was slightly off (~100 bytes), and the checksum calculated by a hex editor was different, too.

Why is the result different? Shouldn't it be the same, since FLAC uses lossless compression? Note that my laptop has an Intel Pentium M and my desktop a dual Athlon. The results on my laptop and on a dual Xeon were identical. Therefore, it seems that it is not multi-processing that is the culprit but the type of CPU, i.e. Intel vs. AMD. Any suggestion what is going on here?

Rob
#2
"rob" wrote in message ups.com... I have downloaded v1.1.2.a and installed it on my Laptop and desktop. Then I took the same .wav file and encoded it with the same parameters, i.e. flac "ref.wav" --best -T "artist=%a" -T "title=%t" -T "album=%g" -T "date=%y" -T "tracknumber=%n" -T "genre=%m" -T "comment=EAC 0.95beta2 / FLAC 1.1.2 --best".e. The result on my laptop was different then the result on my desktop. With that I mean that the size was slightly off (~100 bytes). The checksum calculated by a hex editor was different, too. Why is the result different? Shouldn't it be the same as FLAC uses lossless compression? Note that my laptop has an Intel Pentium M and my desktop a dual Athlon. The result on my laptop and a dual Xeon were identical. Therefore, it seems that not the multi-processor is the culprit but the type of CPU, i.e Intel vs. AMD. Any suggestion what is going on here? Rob The proof of the pudding, so to speak, is whether the FLAC file can be converted back to a WAV file which is bit-for-bit identical to the original. If so, I would happily make the assumption that the FLAC file also plays back with bit-perfect accuracy. It does not surprise me that a compression scheme might be 100 bits different ( out of millions) after the process of compression, particularly where 2 different computers were used. Mark Z. |
#3
François Yves Le Gal wrote: On 28 Jul 2005 23:06:27 -0700, "rob" wrote: "Therefore, it seems that not the multi-processor is the culprit but the type of CPU, i.e. Intel vs. AMD. Any suggestion what is going on here?"

FLAC could use processor-specific routines in the compression process. For instance, the Pentium M is SSE2-compatible while the Athlon is one generation behind, with only basic SSE compatibility. As FLAC is pretty encoding-intensive, that wouldn't surprise me. Uncompress the files and check whether there are any differences in the .wav bitstream. There should be none.

The differences may also lie in the header or metadata information in the file. It would therefore be useful to know WHERE the differences are, not just that they exist. For example, metadata in a header may contain system ID or date information, and this probably has no effect at all on either the compressed data or the uncompressed result afterwards. If Windouche wasn't such a piece of f*cking sh*t of an operating system and came with reasonable utilities like a hexdump or bin diff utility, it'd be pretty easy to figure this out.
#4
dpierce wrote: "If Windouche wasn't such a piece of f*cking sh*t of an operating system and came with reasonable utilities like a hexdump or bin diff utility, it'd be pretty easy to figure this out."

I guess the problem with MS Windows is that there are so many versions of those utilities available (and included, BTW) that you are confused by all the choices? Maybe blinders would make you feel more comfortable?
#5
A 100-byte difference may be as simple as FLAC having added metadata to
the encoded file saying what machine it was run on, from/in what directories, and at what timestamp. I'd bet dollars to donuts (though not many of either) that the actual audio content is the same.
#6
The difference seems to be all over the file, not just in the metadata.
Nevertheless, when I convert it back I get the exact same .wav on both machines, so I guess everything should be fine.

Rob
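A quick way to verify this round-trip claim is to hash the decoded files from both machines and compare the digests. A minimal Python sketch (the file names are hypothetical placeholders, not from the thread):

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical names for the WAVs decoded on each machine:
# identical = file_digest("decoded_intel.wav") == file_digest("decoded_amd.wav")
```

Equal digests mean the two decoded files are (for all practical purposes) bit-for-bit identical, even though the compressed FLAC files differ.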
#7
rob wrote: "The difference seems to be all over the file, not just in the metadata. Nevertheless, when I convert it back I get the exact same wav on both machines. So I guess everything should be fine."

I can envision a number of ways in which a given batch of data could compress into two or more different results (with the same basic algorithm being used), and all of the compressed results would expand back to the same original data.

One fairly common way (you can see this with the standard Unix/Linux "gzip" program) is for the compressor to have different levels of compression which it can perform. Higher levels of compression require more CPU time, or more working memory, to perform - there may be different-sized lookahead/lookback tables, for example. A "gzip -9" can result in a compressed file which is a few percent smaller than a "gzip -2", at the expense of several times more CPU cycles being required during compression. The decompression process (in this case, at least) requires the same amount of CPU time, and will produce identical results.

It's also possible that the basic FLAC algorithm (which I haven't studied) can find two or more different data paths to the same destination (the equivalent of "take one step, then two steps" vs. "take two steps, then one step"), and the choice of which it takes might depend on something as simple as whether it searches a given table from first element to last, or last to first. As others have suggested, this might be due to architectural differences in the CPU... doing one or another form of iterated loop might have a slight computational advantage on a PowerPC vs. an x86 vs. a MIPS.

If the results of the decompression are identical, don't sweat it.

--
Dave Platt AE6EO
Hosting the Jade Warrior home page: http://www.radagast.org/jade-warrior
I do _not_ wish to receive unsolicited commercial email, and I will boycott any company which has the gall to send me such ads!
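The gzip point above can be sketched with Python's zlib module, which implements the same DEFLATE algorithm gzip uses. The data below is a synthetic, compressible stand-in, not audio:

```python
import zlib

# Synthetic, repetitive data standing in for audio samples.
data = bytes(i % 64 for i in range(100_000))

fast = zlib.compress(data, level=2)  # less search effort during compression
best = zlib.compress(data, level=9)  # more thorough search, usually smaller

# The two compressed byte streams may differ in size and content...
print(len(fast), len(best))

# ...but both decompress to exactly the original data.
assert zlib.decompress(fast) == data
assert zlib.decompress(best) == data
```

The same lossless guarantee holds for FLAC: differently-shaped compressed files are acceptable so long as decoding reproduces the original samples exactly.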
#8
I hope for your sake that $1 = 1 donut. The difference is not just in the
metadata. Actually, I would consider it stranger for the metadata to differ between AMD and Intel than for the compressed audio data to differ. Also, remember that I used the exact same settings on all three computer systems.
#9
"rob" wrote in message
ps.com I hope for you that $1=1 donut. The difference is not just in the metadata. Oh, so the metadata *is* different. Actually, I would consider it more strange that the meta data is different on an AMD vs Intel then that the compressed audio data is different. It tends to support the hypothesis that the compression methodology is being tuned to better exploit different CPU chips. Also, remember that I used the exact same settings on all 3 computer systems. It seems like a larger variety of processors would be required to really understand what is going on. |
#10
Actually, I think the only difference in the metadata is some time
stamp. I would not put my hand into the fire for that statement, though.

In any case, I did look into the reason for the different results. It turns out that FLAC indeed uses different code for different processors. It distinguishes between CPUs that have MMX, SSE, SSE2, 3DNow!, extended 3DNow!, extended MMX, CMOV, and FXSR. My desktop has an AMD Athlon MP (well, two of them), which supports MMX, SSE, and 3DNow!. My laptop has a Pentium M, which supports MMX, SSE, and SSE2. So the only difference is in 3DNow! vs. SSE2.

It turns out that when FLAC calculates the autocorrelation, it favours 3DNow! over SSE. This calculation is written in assembly code and uses the processor-specific instructions. It still astonishes me a bit that the results of the two routines differ: I would have assumed the results should be the same and only the implementations different. Maybe it is because for SSE there are four different routines depending on the LPC order (0-3, 4-7, 8-11, 12-32), whereas for 3DNow! there is only one routine. So maybe they don't take it too exactly. It's probably not necessary anyway, as long as they can calculate similarly good coefficients from the autocorrelation. Obviously, that gives slightly different coefficients and therefore a different final result.

One interesting question is whether you get the same result if you encode on one type of processor and decode on another. From my limited understanding after looking at the code for a few minutes, I have no reason to believe it would differ, though.

Rob
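Why would two correct autocorrelation routines give different answers? Floating-point addition is not associative, so SIMD code that keeps several interleaved partial sums (as a 4-wide SSE loop does) can accumulate rounding error differently from a scalar loop or a 2-wide 3DNow! loop. This is an illustrative sketch of that effect, not FLAC's actual code:

```python
import random

def autocorr_scalar(x, lag):
    """Straight left-to-right accumulation, as plain C code would do."""
    return sum(x[i] * x[i + lag] for i in range(len(x) - lag))

def autocorr_partial_sums(x, lag, width=4):
    """Accumulate into `width` interleaved partial sums, the way a
    4-wide SIMD loop would, then combine them at the end."""
    sums = [0.0] * width
    for i in range(len(x) - lag):
        sums[i % width] += x[i] * x[i + lag]
    return sum(sums)

random.seed(1)
x = [random.uniform(-1.0, 1.0) for _ in range(10_000)]

a = autocorr_scalar(x, 1)
b = autocorr_partial_sums(x, 1)
print(a, b, a - b)  # agree closely, but often not to the last bit
```

Tiny differences like this in the autocorrelation feed into the LPC coefficient fit, which can change the residuals and hence the encoded bytes throughout the file, while the decoder still reconstructs the samples exactly from whatever coefficients were chosen.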
#11
Unless, of course, there is a bug in the code. The -V option does not
help in this case because it only tests the encoding/decoding when both are done on the same machine.

BTW, how can you auto-delete your message within x days?