Re: [buildcheapeeg] Software architecture

From: Jim Peters (jim_at_uazu.net)
Date: 2002-02-12 16:23:21


Moritz von Buttlar wrote:
> Are there any advances in the software architecture question? I
> didn't read much about this in the last week, but maybe I missed
> something. We should decide something so that we can start working
> on software!

I've written a file-interface layer today which does all the
conversion from whatever EEG file type you give it into arrays of
'floats' in memory (32-bit floating point numbers). It never loads
the whole file into memory, so it can handle *huge* files. It allows
random access through the file, so you can go backwards, forwards,
whatever. It also handles sync-loss, which can be reported to the
user.
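
To give a rough idea of the random-access side, here's a minimal
sketch (not my actual code -- it skips the per-format conversion and
the sync-loss marking, and just treats the file as raw 32-bit floats):

  #include <stdio.h>

  /* Read 'cnt' samples starting at sample 'pos' into 'buf'.
   * Returns the number of samples actually read. */
  static long read_chunk(FILE *fp, long pos, long cnt, float *buf)
  {
      if (fseek(fp, pos * (long)sizeof(float), SEEK_SET) != 0)
          return 0;
      return (long)fread(buf, sizeof(float), (size_t)cnt, fp);
  }

  int main(int argc, char **argv)
  {
      FILE *fp;
      float buf[1024];
      long got;

      if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
      fp = fopen(argv[1], "rb");
      if (!fp) { perror("fopen"); return 1; }

      /* Jump straight to sample 50000 and pull out one chunk --
       * nothing before it ever gets read or buffered. */
      got = read_chunk(fp, 50000, 1024, buf);
      printf("got %ld samples, first = %g\n", got, got ? buf[0] : 0.0);

      fclose(fp);
      return 0;
  }

The real layer does the same kind of seek-and-read, but through the
per-format routines described below.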

New file formats can be supported by writing a couple of short
routines and adding an entry to a table. At the moment I've only
written code to handle Jim-M's files, but more can easily be added.
This suits the app I'm writing right now, and I can extend it to write
to the file as well, which is what we'd need for a recording app.
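
The table itself looks roughly like this (illustrative names only --
the real probe/read routines obviously do more than these stubs):

  #include <stdio.h>
  #include <string.h>

  /* Each format supplies a couple of short routines; one entry in the
   * table makes the layer aware of it. */
  typedef struct {
      const char *name;
      int  (*probe)(const char *path);   /* does this file belong to us? */
      long (*read)(void *handle, long pos, long cnt, float *buf);
  } Format;

  /* Stubs standing in for the Jim-M handler. */
  static int  jimm_probe(const char *path) { return strstr(path, ".jm") != NULL; }
  static long jimm_read(void *h, long pos, long cnt, float *buf)
  { (void)h; (void)pos; (void)cnt; (void)buf; return 0; }

  static Format format_table[] = {
      { "jim-m", jimm_probe, jimm_read },
      /* new formats slot in here with their own probe/read routines */
      { 0, 0, 0 }
  };

  int main(void)
  {
      const char *path = "session1.jm";
      Format *f;

      /* Pick the first handler whose probe routine accepts the file. */
      for (f = format_table; f->name; f++)
          if (f->probe(path)) { printf("using handler: %s\n", f->name); break; }
      return 0;
  }

Adding write support would presumably just be another routine in the
same structure.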

Looking at your ideas, I think you're aiming at something much more
general than what I'm writing.

> Here are some of my ideas; please look at the attachment for understanding.
>
> - every biofeedback device outputs real numbers, no matter whether
> it's temperature, EMG or whatever, so all we have to do is make a
> flexible way of dealing with these numbers.
>
> - from input to output, data is processed by reading a value at one
> address and then writing it to another
>
> - protocol = array of the above addresses + some parameters
>
> - we make simple data-processing blocks (FFT, average, threshold,
> digital filter), and by combining these blocks the user can
> generate his/her own favorite protocol. The blocks can have one or
> multiple inputs/outputs (e.g. FFT = real-number input and block of
> real numbers output).
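
Just to check I follow the block idea, here is roughly how I picture
it -- a toy sketch with made-up block types, not a proposal:

  #include <stdio.h>

  /* A block takes one real number in and puts one out; a "protocol"
   * is just an ordered list of blocks plus their parameters. */
  typedef struct Block {
      float (*process)(struct Block *b, float in);
      float param;     /* e.g. averaging weight or threshold level */
      float state;     /* whatever the block needs to remember */
  } Block;

  /* Running average: state += param * (in - state) */
  static float avg_process(Block *b, float in)
  {
      b->state += b->param * (in - b->state);
      return b->state;
  }

  /* Threshold: 1.0 if the input exceeds param, else 0.0 */
  static float thresh_process(Block *b, float in)
  {
      return in > b->param ? 1.0f : 0.0f;
  }

  int main(void)
  {
      /* The "protocol": average, then threshold. */
      Block chain[] = {
          { avg_process,    0.2f, 0.0f },
          { thresh_process, 0.5f, 0.0f },
      };
      int nblocks = sizeof(chain) / sizeof(chain[0]);
      float samples[] = { 0.1f, 0.9f, 0.8f, 0.7f, 1.0f, 0.2f };
      int nsamp = sizeof(samples) / sizeof(samples[0]);
      int i, j;

      for (i = 0; i < nsamp; i++) {
          float v = samples[i];
          for (j = 0; j < nblocks; j++)
              v = chain[j].process(&chain[j], v);
          printf("in %.2f -> out %.2f\n", samples[i], v);
      }
      return 0;
  }

A one-sample-at-a-time chain like that makes sense for real-time
feedback, which is where my situation differs.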

There is a problem here for my app. I want to allow the user to move
forwards and backwards through the file, a bit like using a browser or
pager except that scrolling is horizontal. Also, the code I'll be
using for analysis needs to work on large chunks of data at a time,
i.e. the chunk of data corresponding to a screen-width on the display.
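
In other words the access pattern is more like this (again only a
sketch, with a synthetic source standing in for the file layer): one
chunk read per redraw, and the analysis gets the whole chunk at once
rather than single samples trickling through a chain.

  #include <stdio.h>
  #include <stdlib.h>
  #include <math.h>

  /* Stand-in for the file layer: just synthesises a sine wave, but it
   * could be any random-access source of floats. */
  static void read_chunk(long pos, long cnt, float *buf)
  {
      long i;
      for (i = 0; i < cnt; i++)
          buf[i] = (float)sin((pos + i) * 0.05);
  }

  /* The analysis works on a whole chunk at once (just an RMS here). */
  static float analyse(const float *buf, long cnt)
  {
      double sum = 0.0;
      long i;
      for (i = 0; i < cnt; i++) sum += buf[i] * buf[i];
      return (float)sqrt(sum / cnt);
  }

  int main(void)
  {
      long width = 800, samples_per_pixel = 4;
      long cnt = width * samples_per_pixel;
      float *chunk = malloc(cnt * sizeof(float));
      long pos;

      if (!chunk) return 1;

      /* Paging backwards or forwards is just another chunk read at a
       * different position -- no streaming state to rewind. */
      for (pos = 0; pos <= 2 * cnt; pos += cnt) {
          read_chunk(pos, cnt, chunk);
          printf("chunk at %ld: rms %.3f\n", pos, analyse(chunk, cnt));
      }
      free(chunk);
      return 0;
  }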

> - combining could be done with a graphical editor (can be added later)
>
> - for embedded devices: everything stays the same, only the last
> (output and display) part is swapped for appropriate modules
>
> - only problem: scheduling
>
> What do you think? Please send in more ideas and let's combine them
> to get something started.

I think what you're designing isn't suited to what I'm working on.
I'm guessing this is going to be for real-time biofeedback, right?
If so, I'll leave it to everyone else to discuss whilst I get on
with my analysis app.

Jim

P.S. Have a look at aRts (the Analogue Real-Time Synthesizer) if you
really want to use this plugin idea. It seems like overkill to me,
though. "aRts" is the audio system used within KDE (kde.org). It has
a graphical editor too. It would be a huge amount of work to create
something like this -- it's like writing a visual programming
language:

http://www.arts-project.org/
http://kde.org/

You might also look at 'ecasound', which allows modular processing
of streams of data but without a graphical interface. Maybe this could
be adapted. There are also several other Linux-based systems that
allow modular processing of audio streams like this:

http://sound.condorow.net/
http://www.ladspa.org/

-- 
Jim Peters (Uazú)
B'ham, UK
jim_at_uazu.net
http://uazu.net

