> Brad Wyble wrote, replying to Alan Grimes:
> > I'm just trying to give you a taste of the sophistications that
> > are relevant to brain function and cannot be glossed over.
> >
> 
> I know you were replying to Alan not me, but I'll make a comment anyway ;)
> 
> The unstable nature of neuroscience knowledge is why I decided not to try to
> emulate the brain in my AGI designs.

Excellent decision.  The brain is currently a big grey box from system to synapse.

I undertook the study of neuroscience (leaving the field of academic AI) to get a feel 
for the type of solutions that the brain employs, a frame of thinking so to speak.  
Having toiled away for years, I think I've gotten at least a glimpse of the class of 
solutions the brain uses, enough to guide my approach to AGI problems without copying 
the brain system by system.

I'm satisfied with this approach, as it has positioned me in a rarely occupied 
niche in thought space (halfway between computer science and neuroscience) from 
which to make contributions.  


> I believe that the precision with which digital computers can do things
> will allow intelligence to be implemented more simply on them than in the
> brain.  This precision allows entirely different structures and dynamics to
> be utilized in digital AGI systems, as opposed to brains.  For example, it
> allows correct probabilistic inference calculations (which humans, at least
> on the conscious level, are miserable at making); it allows compact
> expression of complex procedures as higher-order functions (a representation
> that is really profoundly unbrainlike); etc.
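
A concrete illustration of the probabilistic inference point is the classic medical 
base-rate problem, where people's conscious intuitions reliably go wrong but a 
machine applying Bayes' rule does not. The numbers below are the standard textbook 
illustration, not anything from a particular AGI system:

```python
# A test for a disease with 1% prevalence: 99% sensitivity, 95% specificity.
# Most people intuit P(disease | positive test) is near 99%; Bayes' rule
# gives a much smaller answer because healthy people vastly outnumber sick ones.
p_disease = 0.01              # prior: prevalence of the disease
p_pos_given_disease = 0.99    # sensitivity
p_pos_given_healthy = 0.05    # false-positive rate (1 - specificity)

# Total probability of a positive result, over both hypotheses.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' rule: posterior probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))  # -> 0.167, i.e. about 1 in 6
```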


I'd be curious to hear more about what you mean by this last statement.  Are you 
referring to the nesting of complex function calls within one another?  
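
For what it's worth, "higher-order functions" usually means something more than 
nested calls: functions that take other functions as arguments or return new 
functions as results. A minimal sketch (names here are illustrative, not from any 
actual AGI codebase):

```python
def compose(f, g):
    # Higher-order: takes two functions, returns a new one computing f(g(x)).
    return lambda x: f(g(x))

def twice(f):
    # Higher-order: builds the function that applies f two times.
    return compose(f, f)

inc = lambda x: x + 1
add4 = twice(twice(inc))  # a compact expression of "add 4", built from "add 1"

print(add4(10))  # -> 14
```

The point in the quoted passage is that a complex procedure can be expressed 
compactly by combining such function-builders, a representation with no obvious 
neural analogue.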

Brad
