Abram,

Point taken, but the question is simply: how accurate is their datamining? We discussed this here a few months ago in relation to other research, but I note this is v. recent stuff - and I'd be interested to know exactly what these "neural fingerprints" are, and how reliable.

Abram: What I find interesting about this sort of thing is the extent of
reliance upon datamining methods. In short, neuroscientists are using
artificial intelligence to try to figure out how real intelligence
works! I find the situation amusing.

The situation could eventually turn into a positive feedback loop,
with better understanding of the brain fueling better datamining
methods that yield further insight into the brain-- but right now I
think it is fair to say that is not the situation. Even if the team
used artificial neural nets as their technique of choice, the software
probably does not have a significant neurological inspiration. (I have
a friend who uses the standard feedforward neural nets to examine
brainscan data, for example.) Advances in datamining are being made
with essentially no guidance from neuroscience. So *if* things keep on
the present course, a full understanding of the brain might be made
possible only via already-created AGIs.

-Abram Demski

On Wed, Nov 12, 2008 at 7:51 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
[This seems v. important. It seems to hold out the promise of eventually
recording inner monologues, which IMO would be revolutionary for cog. sci.
Comments?]

http://www.sciencedaily.com/releases/2008/11/081110071240.htm

Neuroimaging Of Brain Shows Who Spoke To A Person And What Was Said

ScienceDaily (Nov. 10, 2008) — Scientists from Maastricht University have
developed a method to look into the brain of a person and read out who has
spoken to him or her and what was said. With the help of neuroimaging and
data mining techniques the researchers mapped the brain activity associated
with the recognition of speech sounds and voices.

In their Science article "'Who' is Saying 'What'? Brain-Based Decoding of
Human Voice and Speech," the four authors demonstrate that speech sounds and
voices can be identified by means of a unique 'neural fingerprint' in the
listener's brain. In the future this new knowledge could be used to improve
computer systems for automatic speech and speaker recognition.

Seven study subjects listened to three different speech sounds (the vowels
/a/, /i/ and /u/), spoken by three different people, while their brain
activity was mapped using neuroimaging techniques (fMRI). With the help of
data mining methods the researchers developed an algorithm to translate this
brain activity into unique patterns that determine the identity of a speech
sound or a voice. The various acoustic characteristics of the vocal cord
vibrations were found to determine these neural patterns of brain activity.
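The press release does not specify the authors' actual decoding algorithm, so purely as an illustration, here is a minimal sketch of the general technique (classifying multivoxel activity patterns by their class "fingerprints") on simulated data. The voxel and trial counts, the noise levels, and the nearest-centroid classifier are all assumptions for the sketch, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated fMRI data: 3 vowel classes, each trial a vector of voxel activations.
n_voxels, n_trials = 50, 20

# Each class gets a distinct mean activation pattern (its "neural fingerprint"),
# and individual trials are that fingerprint plus noise.
fingerprints = rng.normal(0.0, 1.0, size=(3, n_voxels))
X = np.vstack([fp + rng.normal(0.0, 0.5, size=(n_trials, n_voxels))
               for fp in fingerprints])
y = np.repeat([0, 1, 2], n_trials)

# Split trials into a training half and a held-out test half.
train = np.arange(len(y)) % 2 == 0

# Estimate each class fingerprint as the mean training pattern for that class.
centroids = np.stack([X[train & (y == c)].mean(axis=0) for c in range(3)])

# Decode held-out trials by assigning each to the nearest centroid.
test_X, test_y = X[~train], y[~train]
dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == test_y).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

With well-separated simulated fingerprints the decoder is near-perfect; the interesting empirical question, as raised above, is how reliable such fingerprints are in real, noisy fMRI data.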

Just like real fingerprints, these neural patterns are both unique and
specific: the neural fingerprint of a speech sound does not change if
uttered by somebody else and a speaker's fingerprint remains the same, even
if this person says something different.

Moreover, this study revealed that part of the complex sound-decoding
process takes place in areas of the brain previously associated only with
the early stages of sound processing. Existing neurocognitive models assume
that processing sounds actively involves different regions of the brain
according to a certain hierarchy: after simple processing in the auditory
cortex, the more complex analysis (speech sounds into words) takes place in
specialised regions of the brain. However, the findings from this study
imply a less hierarchical processing of speech that is spread out more
across the brain.

The research was partly funded by the Netherlands Organisation for
Scientific Research (NWO): two of the four authors, Elia Formisano and
Milene Bonte, carried out their research with NWO grants (Vidi and Veni).
The data mining methods were developed during the PhD research of Federico
De Martino (doctoral thesis defended at Maastricht University on 24 October
2008).

Journal reference:

1. Elia Formisano, Federico De Martino, Milene Bonte, Rainer Goebel. "Who"
is Saying "What"? Brain-Based Decoding of Human Voice and Speech. Science,
November 2008

Adapted from materials provided by NWO (Netherlands Organization for
Scientific Research).



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
https://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



