Re: [agi] None of you seem to be able ...

2007-12-06 Thread Scott Brown
Hi Richard,

On Dec 6, 2007 8:46 AM, Richard Loosemore [EMAIL PROTECTED] wrote:

 Try to think of some other example where we have tried to build a system
 that behaves in a certain overall way, but we started out by using
 components that interacted in a completely funky way, and we succeeded
 in getting the thing working in the way we set out to.  In all the
 history of engineering there has never been such a thing.


I would argue that, just as we don't have to fully understand the complexity
posed by the interaction of subatomic particles to make predictions about
the way molecular systems behave, we don't have to fully understand the
complexity of interactions between neurons to make predictions about how
cognitive systems behave.  Many researchers are attempting to create
cognitive models that don't necessarily map directly back to low-level
neural activity in biological organisms.  Doesn't this approach mitigate
some of the risk posed by complexity in neural systems?
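
To make the analogy concrete, here is a toy sketch (every name in it is hypothetical, not taken from any real project) of a cognitive model specified purely at the symbolic level: it makes behavioral predictions without representing a single neuron.

    # Toy sketch: a cognitive model pitched above the neural level.
    # Behavioral predictions come from symbolic rules over working
    # memory; nothing here maps back to individual neurons.
    from dataclasses import dataclass, field

    @dataclass
    class WorkingMemory:
        facts: set = field(default_factory=set)

    # Production rules: if all condition facts hold, predict the action.
    RULES = [
        ({"sees_red_light"}, "press_brake"),
        ({"sees_green_light", "goal_reach_destination"}, "press_accelerator"),
    ]

    def cognitive_cycle(wm):
        """Fire every rule whose conditions are satisfied."""
        return [action for conditions, action in RULES
                if conditions <= wm.facts]

    wm = WorkingMemory({"sees_red_light"})
    print(cognitive_cycle(wm))  # ['press_brake'] -- a prediction, no neurons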

-- Scott


Re: [agi] Numenta: article on Jeff Hawkins' AGI approach

2006-06-02 Thread Scott Brown
Hi all,

The way I've read Hawkins and company's work so far is that they view HTM as a cognitive engine that, while perceptually based, would essentially drive other cognitive functions, including behavior. I think you're right that they would agree that these additional cognitive functions would likely need extensions to the architecture.
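
As a rough illustration of that reading (a sketch of my own; these are not Numenta's actual interfaces), a perceptually based engine could drive a downstream behavioral function like so:

    # Toy sketch of the "perception drives cognition" reading of HTM.
    # All names are hypothetical; this is not Numenta's API.

    class PerceptualEngine:
        """Stand-in for an HTM-like module: raw input -> belief."""
        def perceive(self, sensory_input):
            # A real HTM would infer causes hierarchically over time;
            # here we just wrap the input with a confidence score.
            return {"cause": sensory_input, "confidence": 0.9}

    class BehaviorModule:
        """A downstream cognitive function driven by perceptual beliefs."""
        def act(self, belief):
            if belief["confidence"] > 0.5:
                return "approach " + belief["cause"]
            return "wait"

    engine, behavior = PerceptualEngine(), BehaviorModule()
    print(behavior.act(engine.perceive("food")))  # approach food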
I think that perception has gotten short shrift in AI for a long time, so I'm very happy to see that they're taking this approach (I am biased, however, being a Master's student under Stan Franklin at the University of Memphis, working on -- you guessed it -- the perception module for Stan's LIDA system).
-- Scott

On 6/2/06, Mike Ross [EMAIL PROTECTED] wrote:
 The theoretical presumption here is that once you've solved the problem of recognizing moderately complex patterns in perceptual data streams, then you're essentially done with the AGI problem and the rest is just some wrappers placed around your perception code. I don't think so. I think they are building a nice perceptual pattern recognition module, and waving their hands around arguing that it actually is just an exemplar for an approach that can be more general.
Some parts of the article definitely overemphasize the potential for perceptual pattern recognition to account for a large number of cognitive processes. But I think that, ultimately, Hawkins et al. probably agree with your characterization of perception. For instance, they spend some time discussing the need to hook up an external episodic memory module in order to get more powerful behavior. So surely, from an AGI perspective, they believe that HTM would be just one (albeit important) element in a more complex system.
Mike
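
Schematically, the "external episodic memory" hookup Mike mentions might look something like the toy sketch below. This is my own hypothetical rendering; none of the names come from Numenta or the article.

    # Hypothetical sketch of wiring an external episodic memory module
    # onto a perceptual module -- one element in a larger system.
    from collections import deque

    def perceive(sensory_input):
        """Stand-in for HTM-style perception: raw input -> inferred cause."""
        return {"cause": sensory_input}

    class EpisodicMemory:
        def __init__(self, capacity=100):
            self.episodes = deque(maxlen=capacity)  # bounded trace of beliefs

        def store(self, belief):
            self.episodes.append(belief)

        def recall(self, cause):
            return [e for e in self.episodes if e["cause"] == cause]

    memory = EpisodicMemory()
    memory.store(perceive("red ball"))
    print(memory.recall("red ball"))  # past episodes can inform behavior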


Re: [agi] Numenta: article on Jeff Hawkins' AGI approach

2006-06-02 Thread Scott Brown
"... they have a good chance of getting something that's about as smart as some dumb animals"

I agree, Mike, and it seems to me that, from an AGI perspective (as opposed to an AI perspective), this is an excellent goal to have.
On 6/2/06, Mike Ross [EMAIL PROTECTED] wrote:
One of the more interesting ideas the Numenta people have is of how a perceptual system could be used in a motor-control system by hooking up expectations to actual commands. I think it's fair to say that Numenta is pushing towards AGI from the animalistic perspective. Once they hook up some memory and tie it in with a control system, it seems they have a good chance of getting something that's about as smart as some dumb animals. To imagine how animals think, I always like to imagine the part of my consciousness that is driving a car while I'm driving and having a conversation. The conversation control is the human part of me. The car control is the animal mind. I'm guessing that if Numenta makes a lot of progress, they can get that animal mind. But the work described in that paper doesn't seem to have much to do with the human aspect of mind.
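
Mike's idea of "hooking up expectations to actual commands" could be rendered, very schematically, like this. It is a toy sketch of my own, not Numenta's design: the system simply picks the command whose expected outcome matches a goal state.

    # Toy sketch of motor control via perceptual expectation. Purely
    # illustrative; all names are hypothetical.

    # (current state, expected next state) -> motor command believed
    # to make that expectation come true.
    EXPECTATION_TO_COMMAND = {
        ("lane_center", "lane_center"): "hold_wheel",
        ("drifting_left", "lane_center"): "steer_right",
        ("drifting_right", "lane_center"): "steer_left",
    }

    def motor_step(current_state, goal_state="lane_center"):
        """The 'animal mind' loop: perceive, expect, act."""
        return EXPECTATION_TO_COMMAND.get((current_state, goal_state), "brake")

    for state in ("lane_center", "drifting_left", "drifting_right"):
        print(state, "->", motor_step(state))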




Re: [agi] who's looking for AI where?

2006-05-11 Thread Scott Brown
Tagalog... interesting.
On 5/11/06, Eugen Leitl [EMAIL PROTECTED] wrote:
http://www.google.com/trends?q=artificial+intelligence&ctab=0&date=all&geo=all
--
Eugen* Leitl <leitl> http://leitl.org
ICBM: 48.07100, 11.36820 http://www.ativel.com
8B29F6BE: 099D 78BA 2FD3 B014 B08A 7779 75B0 2443 8B29 F6BE
