Re: [buildcheapeeg] EDF and DDF file formats

From: Dave (dfisher_at_pophost.com)
Date: 2002-03-13 22:57:44


On Wed, 13 Mar 2002 16:28:16 -0500, Sar Saloth wrote:

>As far as the data file goes, I would continue writing the data, including
>the junk and bad samples to get a complete character and not get the data
>out of synch, and tag it with the suggested "Bad Data Marker" bit map
>stream. That would fit in with your current logic flow, it wouldn't impact
>much else, and if the BAD DATA Marker were at the end of the record, you
>could tag it afterwards. That way your EDF would always line up right.

Actually, I don't think doing the above impacts the logic flow at all, since
it would be up to the "FileStorage" class to handle the error bit (and thus
pad out the rest of the data record with zeros, if that is the way we want to
go). Unless... getting out of sync would also disturb other processes, like
FFT filters. Jim-P, I recall you saying that missing data is worse than
zero-value samples standing in for that missing data, as far as the FFT
filters are concerned. Is this correct? Shoot--I wish I knew how other
packages handle this and how big an issue it really is.
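
To make that concrete, here is a rough sketch (in Python) of what the
FileStorage end of it could look like: pad the record out with zeros and set
a bit in the suggested "bad data marker" channel. Every name here is made up
by me for illustration, not anything from our actual code:

    # Rough sketch only -- all names here are hypothetical.
    SAMPLES_PER_RECORD = 256          # samples per signal per data record
    BAD_DATA_BIT = 0x0001             # set when any sample in the record is suspect

    def close_record(samples, status_bits):
        """Pad an incomplete record with zeros and flag it as bad."""
        missing = SAMPLES_PER_RECORD - len(samples)
        if missing > 0:
            samples.extend([0] * missing)   # zero-fill so the EDF stays in sync
            status_bits |= BAD_DATA_BIT     # tag it in the marker channel
        return samples, status_bits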

>This brings up an annoyance with EDF - they organize the channels
>contiguously in one record, so that if your record size was larger (due to
>the desire to keep the low data rate signals low) then you are up to the
>latency of a record. -
>explanation:
>Every data record has identical size and channel grouping, so the slowest
>rate signal must be present in every data record. Either one pads and
>wastes space (by duplicating slow channels) or one accepts the latency is
>at least as bad as the low data rate. (for comm purposes).
>Does that throw EDF out of the window for future CheapEEG communication
>formats? (this is separate from the actual file format issue).

Actually, I'm not sure this is even an issue with the EEG devices being
developed here, because the sample rate will be the same for all the channels.
I think the only time you would run into this problem is with a device that
transmits multiple modalities at differing rates and/or the multiple-device
scenario that I brought up earlier.
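
Just to put numbers on Sar's latency point (my own back-of-envelope figures,
not anything from the EDF spec -- the channel names and rates are
hypothetical):

    # The EDF record duration is bounded below by the slowest channel's
    # sample period, and so is whole-record streaming latency.
    rates_hz = {"EEG1": 256, "EEG2": 256, "battery": 1}

    record_seconds = 1.0 / min(rates_hz.values())  # >= one slow sample per record
    print("minimum record duration / latency: %.2f s" % record_seconds)  # 1.00 s

So with a 1 Hz channel in the mix, whole-record streaming cannot get under a
second of latency without duplicating the slow channel in every record.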

>>I like doing it this way because I am able to send the data out immediately,
>>and I have to think that it would be rare to receive a corrupt packet
>>set. But
>>what do you (or others) think? Would it be better and/or indifferent to send
>>data out in 8 sample bursts (for EEG) and 3 sample bursts (for non-EEG) data
>>for this device? Is there a situation where it would matter?
>
>Wouldn't the only real difference be latency? Would such latency be adequate?

I think so, and that would matter more for feedback/stimulation. Personally,
it just seems cleaner to have as continuous and consistent a stream as
possible rather than sudden bursts, even if those "bursts" are only
approximately 1/10 of a second each. (I mistyped above: the "8" should be
"24" for EEG sample data. Thus, there are about 10.66 EEG sample sets every
second to get 256 samples/sec.)
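
For anyone checking my arithmetic, here is the calculation spelled out:

    sample_rate = 256.0                 # EEG samples per second
    burst_size = 24                     # EEG samples per burst
    print(sample_rate / burst_size)     # ~10.67 bursts per second
    print(burst_size / sample_rate)     # ~0.094 s per burst, about 1/10 s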

>>Does this effect data storage using the methods we have been talking about
>>since we are storing "chunks" and if I suddenly realize that I am no longer
>>synced on correct data boundaries? Should I just discard whatever I have of
>>the current set, or save it even though this might mean that I only have
>>say, 6
>>EEG samples (and no non-EEG samples) that I can reasonably be sure of at that
>>point for the current packet set?
>
>As long as you save complete EDF records and mark them as bad, then your
>current scheme should be OK, is that right?

Theoretically, I think so. It might look odd in EDF viewers that do not
recognize the error channel, but I don't know how important that is to this
project anyway.

>Wow, 1/32s latency should be OK for any display requirement, shouldn't
>it? I mean, movie films are only 24 frames per second, and that is
>regarded as fluid. I am ignorant on NF training so if I am wrong, someone
>just tell me but don't try to debate it, I have nothing to debate that with.

I think people hold all sorts of views about this. For me, what matters is
not being as accurate as possible from the actual firing of the neuron to the
feedback you see on the screen (or hear, or feel, or whatever); it is being
accurate *enough* for feedback and stimulation to be useful and appropriate.
I don't have enough experience yet to know exactly where that "line" is, so I
want to err on the fast side if I can. I recall Chuck Davis (the creator of
ROSHI) saying something about this earlier this year, when the question came
up. Here's his reply:

===

>I'm curious though... what's the maximum latency
>that your stim signal requires? In other words,
>how quickly does the CPU have to respond to
>the signal at the serial port to make the stim
>work?

ROSHI responds to each and every byte that appears
at the input buffer, separately; 128 sps.

My segmented recursive DFT, in the ROSHI(AVS) software,
operates on every sample, therefore, all operations,
DSP and photo/magstim operate at 8 millisec; *brainstem* speeds :)

===

That seems fast enough. :)
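
(For the curious: below is a generic per-sample "sliding DFT" sketch of the
same idea, in Python. This is my own illustration -- I have no idea what
Chuck's actual ROSHI code looks like, so the window length, bin number, and
all names are assumptions.)

    # A generic sliding-DFT sketch, NOT Chuck's code. One complex
    # frequency bin is updated in O(1) per incoming sample, so a band
    # estimate is available at the full 128 sps input rate.
    import cmath
    from collections import deque

    class SlidingDFT:
        def __init__(self, n=128, k=10):     # n-sample window; bin k = 10 Hz here
            self.coeff = cmath.exp(2j * cmath.pi * k / n)
            self.window = deque([0.0] * n)   # the last n samples
            self.xk = 0j                     # running value of bin k

        def update(self, sample):
            """Slide the window by one sample and return the new bin value."""
            oldest = self.window.popleft()
            self.window.append(sample)
            self.xk = (self.xk + sample - oldest) * self.coeff
            return self.xk                   # abs() of this ~ band amplitude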

Dave.


