From: Sar Saloth (sarsaloth_at_yahoo.com)
Date: 2002-03-14 03:56:50
At 12:08 AM 2002-03-14 +0000, you wrote:
>Sar Saloth wrote:
> > This brings up an annoyance with EDF - they organize the channels
> > contiguously in one record, so that if your record size was larger (due to
> > the desire to keep the low data rate signals low) then you are up to the
> > latency of a record. -
> > explanation:
> > Every data record has identical size and channel grouping, so the slowest
> > rate signal must be present in every data record. Either one pads and
> > wastes space (by duplicating slow channels) or one accepts the latency is
> > at least as bad as the low data rate. (for comm purposes).
>
>This is probably a reason to use a different format on the serial
>link, for example interleaving the fast-changing samples, rather than
>putting 24xchannelA, then 24xchannelB, then 3xchannelC, which means
>you always have 24 samples of delay before you see any data.
Yes, that makes complete sense from both a latency and a simplicity point of
view, and it is also essentially the way the hardware would work.
Setting aside the question of whether or not this is still pure EDF, it is a
tiny difference, isn't it? Changing just the "interleave" would be a tiny bit
of code, wouldn't it? I would want to avoid having multiple ways of specifying
channels, labels, gains, etc.
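To illustrate the difference in question, here is a minimal sketch (the channel counts and rates are hypothetical, not from any CheapEEG or EDF document): two fast channels at 24 samples per record and one slow channel at 3, packed either EDF-style (channel-contiguous) or interleaved in acquisition order.

```python
# Hypothetical record: channels A and B at 24 samples/record, C at 3.
# Blocked (EDF-style) layout sends all of A, then all of B, then C,
# so the receiver sees nothing from B until 24 A-samples have arrived.
# Interleaving emits samples in time order instead.

def blocked(a, b, c):
    # EDF data-record ordering: each channel's samples are contiguous.
    return a + b + c

def interleaved(a, b, c):
    # Acquisition order: one slow sample for every 8 fast sample pairs.
    out = []
    for i in range(24):
        out.append(a[i])
        out.append(b[i])
        if i % 8 == 0:          # C runs at 1/8 the rate of A and B
            out.append(c[i // 8])
    return out

a = [("A", i) for i in range(24)]
b = [("B", i) for i in range(24)]
c = [("C", i) for i in range(3)]

# Blocked: the first B sample only appears after all 24 A samples;
# interleaved, it arrives immediately after the first A sample.
print(blocked(a, b, c).index(("B", 0)))      # 24
print(interleaved(a, b, c).index(("B", 0)))  # 1
```

The packing loop really is the only part that changes, which supports the point that the headers and channel descriptions could stay identical between the two layouts.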
I have looked at your "OpenEEG-1.0" specification, but I couldn't understand
how it is compatible with your above example. My lack of comprehension was the
reason I ignored it.
As far as I am concerned, if the headers are encoded in a similar enough
manner that conversion is easy, then conversion from your "interleaved" format
above to EDF would be nearly trivial, so I wouldn't consider it an issue.
However, your "OpenEEG-1.0" was very different in some of the headers.
> > Does that throw EDF out of the window for future CheapEEG communication
> > formats? (this is separate from the actual file format issue).
>
>Not necessarily -- it might be possible to keep data records in memory
>(even long ones, maybe many seconds) and build them up bit by bit, and
>then tell other routines "you have some new samples at offsets N-M" as
>the values are written in. When reading back from disk, you could
>give them the whole lot in one go, though.
Yes, and an excellent reason for not tying the two formats together is that a
tiny micro-controller could get swamped reordering the data at a high data
rate, whereas once the data is on a PC, such reordering is trivial.
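That PC-side reordering could look something like the following sketch, which accumulates an interleaved serial stream back into an EDF-style channel-contiguous record. The channel names and per-record sample counts are made up for illustration:

```python
# Sketch: rebuild an EDF-style data record (each channel's samples
# contiguous) from an interleaved stream of (channel, value) pairs.
# Channel names and rates here are hypothetical.

def deinterleave(stream, rates):
    """rates maps channel name -> samples per data record."""
    record = {ch: [] for ch in rates}
    for ch, value in stream:
        record[ch].append(value)
    # EDF record layout: all of channel 1, then all of channel 2, ...
    flat = []
    for ch, n in rates.items():
        assert len(record[ch]) == n, "incomplete data record"
        flat.extend(record[ch])
    return flat

# Interleaved stream: A and B at 2 samples/record, C at 1.
stream = [("A", 10), ("B", 20), ("C", 30), ("A", 11), ("B", 21)]
print(deinterleave(stream, {"A": 2, "B": 2, "C": 1}))
# [10, 11, 20, 21, 30]
```

A few lines of buffering like this on the PC side is cheap, while doing the same reshuffle on a small micro-controller would cost RAM it may not have.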
OK, I have been convinced: pure EDF is not such a great idea for the binary
data.
>This is just one way, and how to structure this whole thing internally
>is a big question. Dave has been working on a framework based on
>multiple threads with pipes and/or sockets to communicate between them
>(if I've understood it right). He has some working code in C++ for
>Linux. However I'm more used to an event-driven kind of approach,
>where you have one thread, and all the objects have call-backs, and
>when data is available, you call the call-back of the next object,
>saying "here is a new sample for you to process".
>
>This is a big question and it is getting too late here to think about
>this right now.
>
>
> > Wow, 1/32s latency should be OK for any display requirement,
> > shouldn't it? I mean, movie films are only 24 frames per second,
> > and that is regarded as fluid. I am ignorant on NF training so if I
> > am wrong, someone just tell me but don't try to debate it, I have
> > nothing to debate that with.
>
>I don't know either, but if we can keep the delays down without too
>much extra trouble, then that seems like a good idea to me.
>
>Jim
Agreed.
Sar
This archive was generated by hypermail 2.1.4 : 2002-07-27 12:28:40 BST