From: Rob Sacks (editor_at_realization.org)
Date: 2001-08-09 12:41:09
Hi Joerg,
> The greatest deviation I see is about 40 usec.
> That is 1% jitter at a 250 Hz sampling clock.
Is this big or small? :) I have no idea.
This question came up before because somebody
in this group asked me privately whether Windows
was sufficiently accurate to provide the timing
for the EEG-soundcard idea. That's why I wrote the
little test program.
I told the person who asked me that I didn't think
Windows was good enough for this purpose, but
maybe I owe him a correction and an apology.
Is it possible to calculate the error in the FFT
frequency domain that will result from the error
in the time intervals between measurements? I don't
know how to do this.
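Lacking the math, maybe a crude numerical experiment
would do. Here's a sketch (the names, the tone
frequency, and the jitter model -- uniform and
independent per sample -- are all my inventions):
sample a sine at 250 Hz with up to 40 usec of error
on each sampling instant, take a DFT, and compare
the bins against the jitter-free case:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N   256              /* samples in the record */
#define FS  250.0            /* nominal sampling rate, Hz */
#define K0  10               /* put the tone exactly on bin 10 */
#define PI  3.14159265358979

int main(void)
{
    double ideal[N], jit[N];
    double f0 = K0 * FS / N; /* tone frequency, about 9.77 Hz */
    int n, k;

    srand(1);
    for (n = 0; n < N; n++) {
        /* sampling instant off by a uniform error in +/-40 usec */
        double dt = (rand() / (double)RAND_MAX - 0.5) * 80e-6;
        ideal[n] = sin(2.0 * PI * f0 * (n / FS));
        jit[n]   = sin(2.0 * PI * f0 * (n / FS + dt));
    }

    /* direct DFT (no FFT library needed for N=256) near the tone */
    for (k = K0 - 3; k <= K0 + 3; k++) {
        double re1 = 0, im1 = 0, re2 = 0, im2 = 0;
        for (n = 0; n < N; n++) {
            double c = cos(2.0 * PI * k * n / N);
            double s = sin(2.0 * PI * k * n / N);
            re1 += ideal[n] * c;  im1 -= ideal[n] * s;
            re2 += jit[n] * c;    im2 -= jit[n] * s;
        }
        printf("bin %2d (%5.2f Hz): ideal %8.4f   jittered %8.4f\n",
               k, k * FS / N, sqrt(re1 * re1 + im1 * im1),
               sqrt(re2 * re2 + im2 * im2));
    }
    return 0;
}

Whatever the two columns show is the answer to "big
or small," at least for a clean sine at EEG frequencies.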
If you want, Joerg, I'll write a new test program using
the rdtsc instruction and generate similar numbers
from other machines. The numbers I sent
earlier are from an 800 MHz machine.
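It would look something like this, I imagine -- a
sketch, assuming MSVC-style __asm, and winmm.lib
linked in for timeBeginPeriod:

#include <stdio.h>
#include <windows.h>

/* read the 64-bit time-stamp counter */
static unsigned __int64 read_tsc(void)
{
    unsigned long lo, hi;
    __asm {
        rdtsc              ; counter into edx:eax
        mov lo, eax
        mov hi, edx
    }
    return ((unsigned __int64)hi << 32) | lo;
}

int main(void)
{
    unsigned __int64 prev, now;
    int i;

    timeBeginPeriod(1);    /* ask for 1 ms scheduler resolution */
    prev = read_tsc();
    for (i = 0; i < 20; i++) {
        Sleep(4);          /* nominal 4 ms = one 250 Hz period */
        now = read_tsc();
        printf("%I64u cycles\n", now - prev);
        prev = now;
    }
    timeEndPeriod(1);
    return 0;
}

Divide the printed cycle counts by the CPU clock
(800e6 on my box) to get seconds.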
> However I assume we have to use multiples
> of 1 ms? So we can have 500 Hz, 333 Hz, 250 Hz,
> 200 Hz etc. but no power-of-2 frequencies.
Correct.
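The constraint shows up right in the API, if I'm
right that the multimedia timer (timeSetEvent) is
the service in question: the period is a whole
number of milliseconds, so 4 ms (250 Hz) works but
3.90625 ms (256 Hz) can't even be expressed. A
sketch (link with winmm.lib):

#include <stdio.h>
#include <windows.h>
#include <mmsystem.h>

static volatile long ticks = 0;

/* Windows calls this once per period */
static void CALLBACK tick(UINT id, UINT msg, DWORD user,
                          DWORD dw1, DWORD dw2)
{
    InterlockedIncrement((long *)&ticks);
}

int main(void)
{
    MMRESULT id;

    timeBeginPeriod(1);          /* 1 ms timer resolution */
    id = timeSetEvent(4,         /* period: whole milliseconds only */
                      1, tick, 0, TIME_PERIODIC);
    Sleep(1000);                 /* run for one second */
    timeKillEvent(id);
    timeEndPeriod(1);
    printf("%ld ticks in one second (expect about 250)\n", ticks);
    return 0;
}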
> The PIT (8253) is clocked with 1.19318 MHz.
> The 16-bit divider has been set to 65536 under DOS
> and produced the legendary 18.2 Hz
Yes, it's coming back to me. It's very frightening
that I could forget this because there was a time
in my life when I did a lot of programming for
that chip. I once spent a week trying to avoid the
need to check for overflow by setting two of the
three counters on the chip to relatively prime
divisors so I could latch both simultaneously and
get a unique number that could be higher than the
overflow value... something like that, I forget
exactly! :) Is such a bad memory normal at 48?
Maybe I meditate too much. :)
> 1 / 1.19318 MHz gives 0.838 usec ... and that is your
> .8 usec Windows timer.
> I am pretty sure it has nothing to do with RDTSC.
I don't mean to argue -- it's not worth it! -- but
just to amuse ourselves with wondering --
My guess is that the .8 usec Windows timer emulates
the timer chip's frequency without using the timer
chip, because to get that precision from the chip it
would have to service the clock interrupt at the
maximum rate, and that would waste a lot of cycles
(I don't think the system normally requires that
level of precision). I have a book here somewhere
on Windows internals that probably gives the
explanation but I can't find it.
More reasons for my guess: Microsoft didn't introduce
the .8 usec timer until Intel added the CPU counter;
and with the older timer services, the application
programmer has to specify the desired precision
before using them, which suggests to me that the
clock-interrupt rate is kept as slow as possible and
only increased when necessary.
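Actually, one call would settle it, assuming the .8
usec service is QueryPerformanceCounter (we never
named the API in this thread): if it reports 1193180
counts per second, that's the PIT rate; a number
near the CPU clock would point at RDTSC:

#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq;

    /* counts per second of the high-resolution counter */
    if (!QueryPerformanceFrequency(&freq)) {
        printf("no high-resolution counter available\n");
        return 1;
    }
    printf("%I64d counts/sec = %g usec per count\n",
           freq.QuadPart, 1e6 / (double)freq.QuadPart);
    return 0;
}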
> What does out of order execution mean? (something with
> the pipelines?)
> And why should this be a problem?
It's an optimization technique. Intel introduced it with
the Pentium Pro but I think all modern CPUs do it.
The CPU has the ability to execute instructions
in a different order from the order you wrote them in.
I don't know all the reasons for it but one reason is
that it can hide memory latency. For
example, if a particular instruction requires an
expensive memory access, the CPU doesn't wait for
the data to arrive; it keeps working on later
instructions and completes the slow one when the
data finally shows up, out of program order.
Intel gives this example on its website to show why
this could be a problem for time measurements.
Suppose you want to measure the time required by the
fdiv instruction, so you write:
rdtsc ; read time stamp
mov time, eax ; move counter into variable
fdiv ; floating-point divide
rdtsc ; read time stamp
sub eax, time ; find the difference
Intel warns that this might not work because
both rdtsc's may execute before the fdiv.
But it's easy to avoid this problem using the
CPUID instruction or some other instruction
that forces all previous instructions to complete.
Maybe CPUID isn't needed in the code you quoted
earlier because the C function call and stack
frame cause the compiler to issue an instruction
that has the same effect. Intel says that CPUID
isn't the only instruction with this effect.
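For the record, here is the serialized version as I
understand it -- a sketch in MSVC-style __asm, not
Intel's exact code. The CPUID before each RDTSC
forces everything earlier to complete, so the divide
can't drift past the second read; the price is that
CPUID's own overhead lands inside the measurement:

#include <stdio.h>

int main(void)
{
    unsigned long lo0, lo1;
    double a = 355.0, b = 113.0, q;

    __asm {
        xor eax, eax     ; select leaf 0 for cpuid
        cpuid            ; barrier: finish all prior instructions
        rdtsc            ; first time stamp
        mov lo0, eax     ; keep the low 32 bits
        fld a            ; the work being timed:
        fdiv b           ;   q = a / b
        fstp q
        xor eax, eax
        cpuid            ; barrier: wait for the divide to retire
        rdtsc            ; second time stamp
        mov lo1, eax
    }
    printf("q = %g, about %lu cycles (cpuid overhead included)\n",
           q, lo1 - lo0);
    return 0;
}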
By the way, the Intel article is here:
Regards,
Rob