Re: [buildcheapeeg] brain wave sonification

From: Jim Meissner (jpmeissner_at_mindspring.com)
Date: 2002-01-26 18:43:04


Dear Jim Peters:

> The next is from "downey.dat", at a point where there seems to be
> alpha and delta activity (if I've got the rate right).

In all the files you will see a small (and sometimes not so small) amount of 60 Hz pickup. That will give you the "calibration" you need. The sampling rate was about 130 Hz and not well controlled.
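
(As a purely illustrative sketch -- hypothetical numpy code, not part of any of our software -- that calibration could be automated: find the spectral peak near where 60 Hz should fall under the assumed rate, then rescale so it lands exactly on 60 Hz.)

    import numpy as np

    def calibrate_rate(signal, assumed_rate=130.0, mains_hz=60.0):
        # Windowed magnitude spectrum under the assumed sampling rate.
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / assumed_rate)
        # Look for the mains peak within +/-20% of where it ought to be.
        mask = (freqs > 0.8 * mains_hz) & (freqs < 1.2 * mains_hz)
        peak_hz = freqs[mask][np.argmax(spectrum[mask])]
        # If the peak shows up at, say, 58 Hz, the true rate is a bit
        # higher than assumed; scale accordingly.
        return assumed_rate * mains_hz / peak_hz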

I am fascinated with the work you are doing. I clicked on the files, but could not figure out what I was looking at. What should I view the files with? My computer showed it as a BMP file. Help?

Juergen P. (Jim) Meissner
Check out my Website at www.MeissnerResearch.com
Read about the benefits of the Brain State Synchronizer sounds for improving your life and health.
----- Original Message -----
From: Jim Peters
To: buildcheapeeg_at_yahoogroups.com
Sent: Saturday, January 26, 2002 1:21 PM
Subject: Re: [buildcheapeeg] brain wave sonification

Doug Sutherland wrote:
> Given your interest/background in audio, I thought you might
> find this interesting (you may have seen it on mind-l list)

Thanks -- I'm not on mind-l.

> Here is a brain wave sonification
> http://easyweb.easynet.co.uk/~pppf6/Masahiro/vIOCeApplet/SoundBrain.html

Yes, this is basically the same idea as what I'm working on, except
I'm hoping to use the phase information to get much more accurate
tracking of frequencies. I've got a copy of the source from the site,
which I'll probably have a look at -- he talks about smoothing there.

> BTW I have seen at least one serious EEG app that uses
> wavelets. See this post from the biofeedack yahoo group
>
> http://groups.yahoo.com/group/biofeedback/message/3617

The thing is that the idea of wavelets is great -- reducing the
signal down to localized wave packets. However, the strict definition
of the wavelet transform means that even a pure low-frequency sine
wave will leave energy in all the upper bands, unless you have a
perfect brick-wall low-pass filter, which isn't possible without
having all the data for all time before you start work. So as far as
I can tell, the strict wavelet transform isn't much use for this
application.
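
To make that concrete, here is a rough numpy sketch (hypothetical,
not from my code) using the Haar wavelet, whose low-pass is about as
far from brick-wall as you can get: even a pure 2 Hz sine leaves some
energy in every detail band.

    import numpy as np

    def haar_step(x):
        # One level of the Haar DWT: approximation and detail coefficients.
        x = x[:len(x) // 2 * 2]
        approx = (x[0::2] + x[1::2]) / np.sqrt(2)
        detail = (x[0::2] - x[1::2]) / np.sqrt(2)
        return approx, detail

    rate = 130.0                        # roughly the rate of these files
    t = np.arange(4096) / rate
    sine = np.sin(2 * np.pi * 2.0 * t)  # pure 2 Hz ("delta") sine

    total = np.sum(sine ** 2)
    a = sine
    for level in range(1, 7):
        a, d = haar_step(a)
        print("detail level %d: %.4f%% of total energy"
              % (level, 100 * np.sum(d ** 2) / total))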

> I have a keen interest in different "modalities" beyond the
> traditional, esp human interfaces. I think there are lots
> of uncharted waters in EEG feedback to explore, beyond the
> raw waves and traditional representations.

I've got my code working now, and I've put a few PNGs up on the 'net
if anyone is interested. This is just what I'm using to visualise the
output of the filterbank.

The bright spots are the cursors, which are moving rightwards,
overwriting older data (they wrap around, as you can see). They are
in a curved shape to take account of the delay, to make sure that
features line up vertically. The frequencies are marked along the
LHS, with colours to indicate the approximate brain-wave bands. These
are approximate, because I don't know the precise sampling rate of
these files. Brightness indicates intensity, and hue represents phase
relative to a zero-phase reference at each band's centre frequency.
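
For anyone curious about the mapping, here is a rough sketch of the
idea (hypothetical Python, not the actual display code): magnitude
drives brightness, and phase is wrapped onto the colour wheel.

    import colorsys
    import numpy as np

    def sample_to_rgb(z, ref_phase, full_scale):
        # 'z' is one complex output sample from a band of the filterbank.
        mag = min(abs(z) / full_scale, 1.0)    # brightness, clipped at 1.0
        phase = np.angle(z) - ref_phase        # phase relative to the band reference
        hue = (phase / (2 * np.pi)) % 1.0      # wrap -pi..pi onto 0..1
        r, g, b = colorsys.hsv_to_rgb(hue, 1.0, mag)
        return int(r * 255), int(g * 255), int(b * 255)

    # e.g. half full-scale magnitude, 90 degrees ahead of the reference:
    print(sample_to_rgb(0.5j, 0.0, 1.0))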

The first shot is from Jim-M's log001.dat, which shows an increasing
test tone:

http://uazu.net/temp/demo1.png

The next is from "downey.dat", at a point where there seems to be
alpha and delta activity (if I've got the rate right).

http://uazu.net/temp/demo2.png

The last shows how clipping shows up:

http://uazu.net/temp/demo3.png

This is using AM plus a single IIR low-pass filter for each band. The
delay is roughly 4 cycles of the centre frequency. I also have a
version that uses two filters per channel and gives a much cleaner
separation between bands, but it gives a longer delay (8 cycles) and
it also seems to smooth over some of the detail. (I haven't found out
yet what's causing the ripples -- they don't happen on the dual-filter
version.)
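
For anyone who wants the idea without digging through the source,
here is a minimal numpy sketch of one band (hypothetical code, not
what is in the zip): multiply by a complex oscillator at the band's
centre frequency, then smooth with a single one-pole low-pass.

    import numpy as np

    def band_analyse(signal, rate, centre_hz, cycles=4):
        n = np.arange(len(signal))
        # "AM" step: shift the band of interest down to 0 Hz.
        shifted = signal * np.exp(-2j * np.pi * centre_hz * n / rate)
        # Single one-pole IIR low-pass; the time constant is chosen so
        # the delay is on the order of 'cycles' cycles of the centre
        # frequency (a rough choice for this sketch).
        tau = cycles / centre_hz
        alpha = 1.0 - np.exp(-1.0 / (tau * rate))
        out = np.empty_like(shifted)
        acc = 0.0 + 0.0j
        for i, x in enumerate(shifted):
            acc += alpha * (x - acc)
            out[i] = acc
        return out   # complex envelope: abs() is intensity, angle() is phase

    # Example: the alpha band (~10 Hz) of a test tone sampled at ~130 Hz.
    rate = 130.0
    t = np.arange(1300) / rate
    test = np.sin(2 * np.pi * 10.0 * t)
    band = band_analyse(test, rate, 10.0)
    print(abs(band[-1]), np.angle(band[-1]))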

I've also looked at using AM plus a window (since a window is just a
low-pass FIR filter), and this gives a similar bandwidth for the same
delay as the single-IIR approach above. It would require much more
processing, and it doesn't have quite such a clean response away from
the band centre, but it would guarantee exactly which samples
contribute to each output value.
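
A sketch of that windowed variant under the same assumptions (again
hypothetical, not code from the zip): the same demodulation step, but
smoothed by convolving with a Hann window, whose finite length is
what pins down exactly which samples contribute.

    import numpy as np

    def band_analyse_fir(signal, rate, centre_hz, cycles=8):
        n = np.arange(len(signal))
        shifted = signal * np.exp(-2j * np.pi * centre_hz * n / rate)
        length = max(int(cycles * rate / centre_hz), 4)  # spans 'cycles' cycles
        window = np.hanning(length)
        window /= window.sum()                           # unity gain at 0 Hz
        # 'same' alignment for simplicity; causal use would add a delay
        # of roughly half the window length.
        return np.convolve(shifted, window, mode="same")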

If I wasn't aiming for real-time use, a lot could be done by doing AM
at every possible frequency and then searching through the result
with a variable window. A narrow window gives an average of
information over a tall, narrow rectangle of the time-frequency
plane, and a wide window gives an average over a short, wide
rectangle. By searching around with a variable window, you could
start with a blurry view and automatically sharpen it, looking for
interesting features.
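
A rough offline sketch of that idea (hypothetical, and nothing like
real-time): demodulate at every frequency of interest, keep the
complex values, and then average over rectangles of varying shape.

    import numpy as np

    def tf_demodulate(signal, rate, freqs_hz):
        n = np.arange(len(signal))
        rows = [signal * np.exp(-2j * np.pi * f * n / rate) for f in freqs_hz]
        return np.array(rows)          # shape (n_freqs, n_samples), complex

    def rect_average(tf, f_index, t_index, f_halfwidth, t_halfwidth):
        f0, f1 = max(f_index - f_halfwidth, 0), f_index + f_halfwidth + 1
        t0, t1 = max(t_index - t_halfwidth, 0), t_index + t_halfwidth + 1
        # Averaging each row over time is the "window" (low-pass) step;
        # then combine the magnitudes across the rows in the rectangle.
        row_means = tf[f0:f1, t0:t1].mean(axis=1)
        return np.abs(row_means).mean()

    # Small t_halfwidth + large f_halfwidth = tall, narrow rectangle;
    # large t_halfwidth + small f_halfwidth = short, wide rectangle.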

In any case, I want to see if I can turn the output I've already got
into sound, and whether that would be a useful tool or not.

The source code for the app is here:

http://uazu.net/temp/demo-20010126.zip

However, this is a very crude hack in many places, and a lot of this
will need rewriting before use elsewhere. It's only really for anyone
who is interested in the code. It's under the GPL. Here's an example
command-line:

./filter -q 4 -j 2/5 jim_meissner_data/downey.dat 120

This all might seem like a huge waste of time for those people who
just want a waterfall FFT display, but I'm learning a huge amount. I
don't know how much of this is going to be of use -- maybe the best
solution is going to be something quite different in the end.

Jim

-- 
Jim Peters              (_)/=\~/_(_)                        Uazú
                      (_)  /=\  ~/_  (_)
jim@                (_)    /=\    ~/_    (_)                www.
uazu.net          (_) ____ /=\ ____ ~/_ ____ (_)            uazu.net




