From: John Morrison (jmorrison_at_ahc.net.au)
Date: 2002-02-16 05:29:18
> John Morrison wrote:
> > > 'Back-tracking' is hard for some types of DSP algorithms to handle,
> > So the rewind/analysis facility will only work with certain types.
>
> Well, if you wanted to rewind with an IIR filter, you'd really need to
> feed it the previous XX seconds-worth of data so that it knows where
> it was. So it isn't straightforward.
OK, I see what the problem is.
I'll add that to the list of things to consider. :-)
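Just so I'm sure I've got it (and for anyone else following along), here is a
toy Python sketch of the problem as I understand it. It isn't anyone's real
filter code, just an illustration:

def one_pole_lowpass(samples, alpha=0.1, state=0.0):
    """Simple IIR low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    out = []
    for x in samples:
        state += alpha * (x - state)   # output depends on accumulated state
        out.append(state)
    return out

signal = [0.0] * 100 + [1.0] * 100       # a step at sample 100

full = one_pole_lowpass(signal)          # filtered from the very start
jumped = one_pole_lowpass(signal[150:])  # "rewind" straight to sample 150

print(full[150], jumped[0])              # these two values disagree

The two values disagree because the second run started with the wrong internal
state, which is exactly why a rewind has to re-feed the previous XX
seconds-worth of data first.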
>
> > > maybe we could make 'analysis' and 'biofeedback' two
> > > different parts of the program ?
> >
> > I'd prefer to make one program/framework instead of wasting time
> > in parallel development.
>
> The trouble is that you might start multiplying the complexity if you
> put them together into the same framework (although they could happily
> live in the same program). I've read professionals talk about using
> one type of display whilst recording, and then using more advanced
> techniques later to see more detail. Some things are hard to do in
> real-time, but are easy once the data has been recorded.
I agree, and the system will allow that.
When finished, you will be able to record, view, run rewards on, or do whatever
you like with the data in real time, then use a different protocol (same
framework and most of the same modules) to replay it later with a totally
different display.
Basically, each time you load a protocol you have a different program from the
user's point of view.
Don't you just love playing with Lego?
The only thing we HAVE to get right is the interface between the modules!
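To show what I mean by Lego, here is a purely hypothetical sketch in Python.
None of these class names are decided, and this is not project code; it is
just to make the idea concrete:

class Module:
    def process(self, block):
        """Take a block of samples/values and return the transformed block."""
        raise NotImplementedError

class BandpassFilter(Module):
    def __init__(self, low_hz, high_hz):
        self.low_hz, self.high_hz = low_hz, high_hz
    def process(self, block):
        return block                     # real DSP would go here

class BarGraphDisplay(Module):
    def process(self, block):
        print("display:", block)         # real drawing would go here
        return block

class ToneReward(Module):
    def process(self, block):
        return block                     # play a tone past a threshold, say

def run_protocol(modules, source):
    """Push each block from the source through the chain of modules."""
    for block in source:
        for m in modules:
            block = m.process(block)

# Two "programs" built from the same parts: only the wiring and source change.
training = [BandpassFilter(8, 12), BarGraphDisplay(), ToneReward()]
replay   = [BandpassFilter(8, 12), BarGraphDisplay()]
run_protocol(replay, [[1.0, 2.0], [3.0, 4.0]])   # toy data instead of hardware

The protocol file would just name the modules and how they connect, so loading
a different protocol really does give the user a different program.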
> For instance, if I'm looking at brain-waves with low frequencies, to
> measure the amount of a frequency at this moment *now*, I need to take
> into account both past and *future* data. In a real-time system,
> future data is not available yet, so all I can do is calculate the
> results for some time in the past, maybe 10-20 seconds ago. After the
> data has already been recorded, when you are reviewing a session for
> example, there is no problem in taking future data into account.
Yep, I understand perfectly!
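To put a rough number on that delay for myself, here is a little Python
sketch. The sample rate and window length are made up for illustration, a
crude moving average stands in for the real analysis, and none of this is
your code, Jim:

import numpy as np

fs = 256                        # assumed sample rate in Hz
win_len = fs * 8                # an 8-second window, long enough for slow waves
delay = (win_len - 1) // 2
print("output lags real time by about", delay / fs, "seconds")   # roughly 4 s

# Offline (recorded data): centre the window on sample n, using past AND future.
def offline_estimate(x, n):
    return np.mean(x[max(n - win_len // 2, 0):n + win_len // 2])

# Real time: only samples up to n exist, so the result really belongs to a
# point about 'delay' samples in the past, not to "now".
def realtime_estimate(x, n):
    return np.mean(x[max(n - win_len, 0):n + 1])

x = np.random.randn(fs * 60)    # a minute of fake data
print(offline_estimate(x, fs * 30), realtime_estimate(x, fs * 30))

A longer window for even slower waves would push that lag out towards the
10-20 seconds you mention.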
> Do you remember the displays that I posted ? That was why the
> drawing-point curved off to the left for lower frequencies -- values
> for 'now' often aren't available until 'later' when you are working in
> real-time.
OK, I see now... I don't have any hardware YET, so I'm relying on members
like you who do. :-)
> As don Juan says in one of Castaneda's books: "We think we are
> perceiving things as they are now, but in fact we are always
> remembering, remembering" (I'm paraphrasing it, because I can't find
> the part now) -- meaning we always perceive what happened a moment or
> a few moments ago. This is just the same as with the DSP analysis,
> and this comes right out of the maths, too. There is no other way it
> can be when you're dealing with waves.
> It's a bit like when something happens in the world too fast for you,
> and then only afterwards your brain figures out exactly what was what.
> It's just the same -- the real-time algorithms failed, but analysis of
> recorded data enabled an interpretation to be made.
>
> It gets deeper still -- we're looking the uncertainty principle
> directly in the face here, dealing with waves at these kinds of
> scales. You know the idea from quantum mechanics, how you can know
> the time exactly, but then the frequency is uncertain (i.e. blurred),
> or the other way around -- you can know the frequency exactly, but
> then you don't know exactly when it happened.
>
> This applies here too -- you can apply a very wide window function,
> which gives you great accuracy in the frequency domain, but blurs
> everything a great deal in the time domain. At the other extreme,
> applying a narrow window function gives you excellent time-resolution,
> but unfortunately blurs the frequencies, so you can't be sure of
> anything much frequency-wise.
>
> My app will allow this window-width to be easily changed, and I expect
> that you will be able to see this effect directly on the display --
> either a blurring vertically or horizontally.
>
> It's like the blurriness fits into a rectangle, and you can make it
> fat and short, or thin and tall, but never thin and short at the same
> time -- a bit like looking at the world through glasses that were made
> wrong. I've read about techniques that allow you to use a window that
> puts this 'blurriness' rectangle at an angle, allowing you to
> differentiate two close-spaced falling tones, for example. However,
> this is getting really advanced.
>
> Actually, I think this is what your brain is doing when it takes a
> moment or two to figure out what something is (both for vision and for
> sounds) -- it is trying various different ways of analysing the data
> until it finds something that makes sense.
Hmm, very interesting, and advanced as you say.
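Just to check I follow the window-width trade-off, here is a rough Python
sketch of it. The sample rate, window lengths and test tones are all made up,
and it is nothing like your app's real analysis code:

import numpy as np

fs = 256                                   # assumed sample rate in Hz
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 10.0 * t) + np.sin(2 * np.pi * 10.5 * t)

def spectrum(x, window_seconds, centre_seconds):
    n = int(window_seconds * fs)
    start = int(centre_seconds * fs) - n // 2
    seg = x[start:start + n] * np.hanning(n)
    return np.fft.rfftfreq(n, 1 / fs), np.abs(np.fft.rfft(seg))

# Wide window: frequency bins 0.25 Hz apart, so 10 Hz and 10.5 Hz can be told
# apart, but the estimate is smeared over a 4-second stretch of the recording.
f_wide, m_wide = spectrum(eeg, 4.0, 15.0)

# Narrow window: bins 2 Hz apart, so the timing is sharp but the two tones
# merge into one blurred peak.
f_narrow, m_narrow = spectrum(eeg, 0.5, 15.0)

print(f_wide[1] - f_wide[0], f_narrow[1] - f_narrow[0])   # 0.25 vs 2.0 Hz

I take that to be exactly the fat/short versus thin/tall rectangle you
describe.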
I'd love to have your input on the modular system so that it can accommodate
what you (and probably others) need for analysis.
If you can give us a succinct outline of what you need for your type of
analysis we can work it into the interface.
THIS goes for anyone else on the list!
Give us a succinct outline of the interface requirements that we need (i.e.
from Hardware to Filter, Filter to Display, Filter to Reward, etc.).
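To kick that off, here is the sort of thing I mean by an interface outline,
as a purely hypothetical Python sketch. None of these names or fields are
fixed; they are only there to get the discussion going:

from dataclasses import dataclass

@dataclass
class SampleBlock:
    """What Hardware hands to a Filter: raw samples plus enough context."""
    channel_names: list      # e.g. ["C3", "C4"]
    sample_rate_hz: float    # so filters can configure themselves
    timestamp: float         # seconds since the session started
    samples: list            # one list of floats per channel

@dataclass
class BandValue:
    """What a Filter hands to a Display or Reward module."""
    band_name: str           # e.g. "alpha"
    value: float             # e.g. band power
    timestamp: float         # the time this value actually refers to
    latency: float = 0.0     # how far behind real time the value is

What I need from everyone is the list of fields like these that your modules
cannot live without.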
> Jim
John