From: Doug Sutherland (wearable_at_earthlink.net)
Date: 2001-12-09 02:42:12
Jim,
> For instance, I'm interested in the possibility of computer
> control with something more efficient than a keyboard --
> however I'm not confident that EEG is the right way to go
> with this (like -- "Is there enough bandwidth ?", "Could
> response-time be any faster than me reaching for my
> keyboard ?" -- three or four characters a minute (from a
> recent report linked from here) isn't fast enough for me.)
I've been studying and playing with many forms of alternative
and "multimodal" input/output methods for several years
(for wearable/mobile computing). Practically speaking there
is still no good replacement for keyboards for truly generic
input like composing email or using a telnet session. I use
single-handed chording keyboards (Twiddler), which allow me
much better mobility and ergonomics.
Having said that, there is a wide range of application
functions that require much less complex interaction; this
is where alternative I/O becomes interesting. I am working
towards a sort of layered set of multiple HCIs that can
be used in different contexts (all with wearables). For
complex input I stick with chording keyboards. For pure
dictation I also use speech recognition. For simple
"command and control" I use different modallities like
a few buttons and/or IR remote control driving a menu.
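To make that concrete, here is a minimal sketch of the kind
of button-driven menu loop I mean. This is illustrative only,
not my actual wearable code; the button names and menu
entries are invented:

    # Sketch of a button-driven "command and control" menu, as it
    # might run on a wearable with a small display and a few
    # buttons (an IR remote can be mapped to the same events).
    MENU = [
        ("check mail", lambda: print("fetching mail...")),
        ("read notes", lambda: print("opening notes...")),
        ("play/pause", lambda: print("toggling audio...")),
    ]

    def run_menu(get_button):
        """get_button() blocks and returns 'up', 'down', or 'select'."""
        pos = 0
        while True:
            print("> " + MENU[pos][0])    # would be drawn on the LCD
            button = get_button()
            if button == "up":
                pos = (pos - 1) % len(MENU)
            elif button == "down":
                pos = (pos + 1) % len(MENU)
            elif button == "select":
                MENU[pos][1]()            # run the "fixed" function

The point is that a handful of events and a fixed menu cover
a surprising amount of day-to-day interaction.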
I don't think EEG will replace a keyboard any time soon.
But it can certainly be used to do interesting things.
And the more research the better for future applications
for quadriplegics and people with poor motor skills.
Check these out to see some of the progress that
is already being made:
http://home.earthlink.net/~wearable/biopsy/#symbiosis-multi
http://home.earthlink.net/~wearable/biopsy/#symbiosis-eeg
Regarding EEG and HCI, there are some interesting simple
control applications, but the software modality needs to
be radically changed; this is one of my key interest areas.
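As a toy example of the kind of simple control I mean (a
sketch only; the sampling rate and threshold are invented
and would need per-user calibration): compute alpha-band
power over a short window and treat a sustained rise, as
when the eyes close, as a one-bit switch.

    import numpy as np

    def alpha_power(window, fs=256):
        """Rough alpha-band (8-12 Hz) power of one EEG window."""
        spectrum = np.abs(np.fft.rfft(window)) ** 2
        freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
        band = (freqs >= 8.0) & (freqs <= 12.0)
        return spectrum[band].sum()

    def eeg_switch(window, threshold=1e4):
        """One-bit 'switch': True while alpha power is high
        (e.g. eyes closed). The threshold is made up; real use
        would need calibration and artifact rejection."""
        return alpha_power(window) > threshold

One reliable bit like that is nowhere near a keyboard, but
it is enough to drive a scanning menu.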
Beyond direct control, there are also applications in
affective (emotive) and context-aware functionality; this
is where I really want to do some research using
not just biofeedback (EEG/ECG/EMG/GSR/temperature) but also
gesture tracking and similar ideas. Check these out for
some idea of what has been done at MIT, Georgia Tech, etc.:
http://home.earthlink.net/~wearable/biopsy/#affective-comp
http://home.earthlink.net/~wearable/biopsy/#contextual-comp
> necessary for me to dynamically alter my level of
> consciousness, which is what the brain-waves seem to
> represent, just in order to do I/O?
You won't be using biofeedback to type, but ...
> Isn't this a bit crude ?).
No, I would say the current modalities are crude. If I
am walking down the street and want to do something, the
current modality requires me to plug in a keyboard/mouse
and also a traditional display, i.e. a head-mounted VGA unit.
I've demonstrated integrating a small text menu (LCD)
and a few buttons (menu control) into clothing that
allows me to do a surprising number of things. They
are all custom-programmed, "fixed" functions, but
that's okay. Similar things can be done with speech
command-control. Regarding biosignals, it would be
interesting if the computer could sense my overall
physiological state and adapt its interface(s) based
on that. This is the target space for my research in
biofeedback and HCI (and also OpenEEG eventually).
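A minimal sketch of what such adaptation could look like
(the features, units, and thresholds here are all invented
for illustration):

    def pick_interface(heart_rate, gsr, motion):
        """Choose an I/O mode from a coarse physiological/context
        estimate: heart_rate in bpm, gsr as normalized skin
        conductance (0-1), motion as normalized accelerometer
        activity (0-1). Thresholds are illustrative only."""
        if motion > 0.6:
            return "audio + speech"       # moving: hands/eyes busy
        if gsr > 0.7 and heart_rate > 100:
            return "minimal alerts only"  # aroused: don't interrupt
        return "full menu + chording"     # calm, still: richest mode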
Rosalind Picard at MIT is a pioneer in this area:
http://vismod.www.media.mit.edu/people/picard/
http://www.media.mit.edu/affect/
For a very long list of alternative I/O modalities:
http://billbuxton.com/inputSources.html
-- Doug