From: Joerg Hansmann (info_at_jhansmann.de)
Date: 2001-08-09 19:16:44
Hi Rob,
----- Original Message -----
From: Rob Sacks <editor_at_realization.org>
To: <buildcheapeeg_at_yahoogroups.com>
Sent: Thursday, August 09, 2001 1:41 PM
Subject: Re: [buildcheapeeg] comADC-EEG
> Hi Joerg,
>
> > The greatest deviation I see is about 40 usec.
> > That is 1% jitter at a 250 Hz sampling clock.
>
> Is this big or small? :) I have no idea.
I think it is no problem, because the sample clock
is well above the upper -3 dB corner frequency of the
amplifier.
Another thing is that the comADC itself produces an
input-voltage-dependent sample clock phase deviation of up
to 90 degrees. That is simply because an input of 0 V
delivers the AD value at once, while maximum input delivers
it only after 1 ms, i.e. a quarter of the 4 ms sample period.
However, this jitter and phase deviation can easily be
compensated in software if the exact point in time (taken
either from the PIT or from RDTSC) is stored together with
the AD value.
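Roughly like this (just an untested C sketch; the struct and
function are made up for illustration, not taken from my
comADC software):

/* Sketch only: resample timestamped AD values onto a uniform
   time grid by linear interpolation, so that the jitter and the
   input dependent conversion delay are removed before the FFT. */

#include <stddef.h>

struct sample {
    double t;   /* time of conversion in seconds (PIT or RDTSC) */
    double v;   /* AD value */
};

/* Fill out[0..n_out-1] with values at the uniform times
   t0, t0+dt, ... by linear interpolation between the n_in
   timestamped samples. */
void resample_uniform(const struct sample *in, size_t n_in,
                      double t0, double dt,
                      double *out, size_t n_out)
{
    size_t j = 0;
    if (n_in < 2)
        return;
    for (size_t i = 0; i < n_out; i++) {
        double t = t0 + (double)i * dt;
        while (j + 2 < n_in && in[j + 1].t < t)
            j++;
        double t1 = in[j].t,  t2 = in[j + 1].t;
        double v1 = in[j].v,  v2 = in[j + 1].v;
        out[i] = v1 + (v2 - v1) * (t - t1) / (t2 - t1);
    }
}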
> This question came up before because somebody
> in this group asked me privately whether Windows
> was sufficiently accurate to provide the timing
> for the EEG-soundcard idea. That's why I wrote the
> little test program.
>
> I told the person who asked me that I didn't think
> Windows was good enough for this purpose, but
> maybe I owe him a correction and an apology.
>
> Is it possible to calculate the error in the FFT
> frequency domain that will result from the error
> in the time intervals between measurements? I didn't
> know how to do this.
Yes, it can be done. My comADC DOS software has a
simulation mode that calculates a sine wave signal
used to test the FFT.
I could simply add a random phase to it to simulate
the 1% jitter.
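For example, something along these lines (only a C sketch,
not the actual DOS software; sample rate, test frequency and
the uniform jitter model are assumptions):

/* Sketch: estimate the spectral error caused by up to 40 us
   (about 1%) sample clock jitter at 250 Hz on a 10 Hz sine,
   using a naive DFT. */

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define N   256
#define FS  250.0              /* sampling rate in Hz */
#define F0  10.0               /* test sine frequency */
#define PI  3.14159265358979323846

static void dft_mag(const double *x, double *mag)
{
    for (int k = 0; k < N / 2; k++) {
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            re += x[n] * cos(2.0 * PI * k * n / N);
            im -= x[n] * sin(2.0 * PI * k * n / N);
        }
        mag[k] = sqrt(re * re + im * im) * 2.0 / N;
    }
}

int main(void)
{
    double clean[N], jittered[N], mc[N / 2], mj[N / 2];

    for (int n = 0; n < N; n++) {
        double t  = n / FS;
        /* up to +-40 us timing error, i.e. about 1% of the
           4 ms sample period (assumed uniform distribution) */
        double dt = (rand() / (double)RAND_MAX - 0.5) * 80e-6;
        clean[n]    = sin(2.0 * PI * F0 * t);
        jittered[n] = sin(2.0 * PI * F0 * (t + dt));
    }

    dft_mag(clean, mc);
    dft_mag(jittered, mj);

    double worst = 0.0;
    for (int k = 0; k < N / 2; k++) {
        double d = fabs(mj[k] - mc[k]);
        if (d > worst)
            worst = d;
    }
    printf("worst bin error: %g (signal amplitude is 1.0)\n", worst);
    return 0;
}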
> If you want, Joerg, I'll write a new test program using
> the rtsdc instruction and generate similar numbers
> from other machines. The numbers I sent
> earlier are from an 800 MHz machine.
That would be interesting. However, I guess the jitter
will essentially stay the same; you will just be able to
measure it with greater accuracy.
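Something like the following might already show the numbers
(untested sketch, GCC inline assembly and your 800 MHz clock
assumed; wait_for_tick() is only a crude placeholder for the
real tick source):

/* Sketch: measure the jitter of a nominally periodic event
   with RDTSC and print the worst deviation in microseconds. */

#include <stdio.h>
#include <time.h>

#define SAMPLES 1000
#define CPU_HZ  800e6              /* assumed CPU clock */

static unsigned long long rdtsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)hi << 32) | lo;
}

/* Placeholder: in the real program this would wait for the
   Windows timer event or the ADC's next value; here it is
   just a crude 4 ms busy wait on clock(). */
static void wait_for_tick(void)
{
    clock_t start = clock();
    while ((double)(clock() - start) / CLOCKS_PER_SEC < 0.004)
        ;
}

int main(void)
{
    static unsigned long long t[SAMPLES];

    for (int i = 0; i < SAMPLES; i++) {
        wait_for_tick();
        t[i] = rdtsc();
    }

    double mean  = (double)(t[SAMPLES - 1] - t[0]) / (SAMPLES - 1);
    double worst = 0.0;
    for (int i = 1; i < SAMPLES; i++) {
        double dev = (double)(t[i] - t[i - 1]) - mean;
        if (dev < 0.0)
            dev = -dev;
        if (dev > worst)
            worst = dev;
    }
    printf("mean interval %.1f us, worst deviation %.1f us\n",
           mean / CPU_HZ * 1e6, worst / CPU_HZ * 1e6);
    return 0;
}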
> > However I assume we have to use multiples
> > of 1 ms? So we can have 500 Hz, 333 Hz, 250 Hz,
> > 200 Hz etc., but no power-of-2 frequencies.
>
> Correct.
>
> > The PIT (8253) is clocked with 1.19318 MHz.
> > The 16-bit divider has been set to 65536 under DOS
> > and produced the legendary 18.2 Hz
>
> Yes, it's coming back to me. It's very frightening
> that I could forget this because there was a time
> in my life when I did a lot of programming for
> that chip. I once spent a week trying to avoid the
> need to check for overflow by setting two of the
> three counters on the chip to relatively prime
> divisors so I could latch both simultaneously and
> get a unique number that could be higher than the
> overflow value... something like that, I forget
> exactly! :) Is such a bad memory normal for 48
> years old? Maybe I meditate too much. :)
I always thought meditation was good for memory?
A friend has told me about recalling earliest childhood
memories in a deeply relaxed state.
>
...
> My guess is that the .8 usec Windows timer is
> emulating the timer chip frequency, but not using
> the timer chip, because to get that precision it
> would have to service the clock interrupt at the
> maximum frequency
That is not necessary. The timer chip could be set to a
divisor of 1193 (the exact but impossible value would be
1193.18) and would then produce one interrupt every 1 ms.
If you want to know the time in between the interrupts,
you simply read out the timer's counter register, which is
clocked at 1.19318 MHz.
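In C for DOS that looks roughly like this (sketch only,
Borland-style <dos.h> assumed, DJGPP uses <pc.h>; note that
reprogramming channel 0 also speeds up the BIOS tick):

/* Sketch: set PIT channel 0 to a ~1 ms period and read the
   running count between interrupts for sub-millisecond timing. */

#include <dos.h>   /* outportb / inportb */

#define PIT_CTRL 0x43
#define PIT_CH0  0x40
#define DIVISOR  1193          /* 1.19318 MHz / 1193 = ~1000 Hz */

void pit_init_1ms(void)
{
    outportb(PIT_CTRL, 0x34);           /* ch 0, lo/hi byte, mode 2 */
    outportb(PIT_CH0, DIVISOR & 0xff);  /* low byte  */
    outportb(PIT_CH0, DIVISOR >> 8);    /* high byte */
}

/* Latch and read the current count; one count is
   1 / 1.19318 MHz = about 0.838 us. */
unsigned int pit_read_count(void)
{
    unsigned int lo, hi;
    outportb(PIT_CTRL, 0x00);           /* latch channel 0 */
    lo = inportb(PIT_CH0);
    hi = inportb(PIT_CH0);
    return (hi << 8) | lo;
}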
> and that would waste a lot
> of cycles (I don't think the system normally requires
> that level of precision).
> I have a book here somewhere
> on Windows internals that probably gives the
> explanation but I can't find it.
>
> More reasons for my guess: Microsoft didn't introduce
> the .8 usec timer until the CPU counter was added
> by Intel; and with the other, older timer services,
> the application programmer has to specify the level
> of desired precision before using the services, which
> suggests to me that the clock-interrupt rate is kept
> as slow as possible and only increased when
> necessary.
I must admit that I do not have the slightest experience
with Windows programming.
> > What does out of order execution mean ? (something with
> > the pipelines ?)
> > And why should this be a problem ?
>
> It's an optimization technique. Intel introduced it with
> the Pentium Pro but I think all modern CPUs do it.
...
> Intel warns that this might not work because
> both rdtsc's may execute before the fdiv.
Yes, I understand.
> But it's easy to avoid this problem using the
> CPUID instructions or some other instruction
> that forces all previous instructions to complete.
...
> By the way, the Intel article is here:
>
> http://cedar.intel.com/cgi-bin/ids.dll/content/content.jsp?cntKey=Legacy::irtp_RDTSCPM1_12033&cntType=IDS_EDITORIAL
>
Thanks. I will have a look at it.
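I guess the fix would then look roughly like this (untested
sketch, GCC inline assembly assumed; the same RDTSC read as
in the jitter sketch above, only with CPUID in front as a
fence):

/* Sketch: CPUID forces all previous instructions to complete,
   so the following RDTSC is not reordered around the code
   being measured. */
static unsigned long long rdtsc_serialized(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__(
        "xorl %%eax, %%eax\n\t"
        "cpuid\n\t"
        "rdtsc"
        : "=a"(lo), "=d"(hi)
        :
        : "ebx", "ecx", "memory");
    return ((unsigned long long)hi << 32) | lo;
}

/* usage: t1 = rdtsc_serialized(); ...code...; t2 = rdtsc_serialized(); */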
Regards,
Joerg