>> This is interesting. I strongly suspect AI has it very wrong.

Narrow AI pretty much *has* to get it wrong, because getting it right pretty 
much requires/creates a seed AI.  The AGI community has had a lot of 
conversations about immediate feedback, self-correcting loops, and how 
necessary active learning is -- but no one is 100% sure how to do it yet.  
Novamente and Texai are trying to do some of it (NARS may be as well, but 
I'm not watching it as closely).
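
To make that concrete, here's a toy sketch (Python) of the kind of 
predict-act-correct loop people mean by "immediate feedback" -- every name 
in it is made up for illustration, and it's not a claim about how Novamente, 
Texai, or NARS actually work:

import random

class Learner:
    def __init__(self):
        # Belief: estimated probability that a biased coin comes up heads.
        self.p_heads = 0.5
        self.rate = 0.1  # how strongly one observation corrects the belief

    def predict(self):
        # Commit to a prediction *before* seeing the outcome.
        return self.p_heads >= 0.5

    def correct(self, observed_heads):
        # Immediate feedback: nudge the belief toward what was observed.
        target = 1.0 if observed_heads else 0.0
        self.p_heads += self.rate * (target - self.p_heads)

def world():
    # An environment the learner can only know by probing it.
    return random.random() < 0.7  # biased coin: heads 70% of the time

learner = Learner()
for step in range(1000):
    guess = learner.predict()   # act on current knowledge
    outcome = world()           # probe the environment
    learner.correct(outcome)    # self-correct from the error signal

print("learned p(heads) ~= %.2f" % learner.p_heads)  # converges near 0.7

The point of the loop is that the learner never waits for a separate 
"testing" phase: every cycle is prediction, feedback, correction.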

>> Day-to-day individual knowledge-gathering, if this is correct, is very 
>> like the collective knowledge-gathering of science. 

I would strongly agree -- particularly if you look at the mind as a collection 
of agents.
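
And the "create a categorical/filing structure on the fly" point in the 
message below fits here too: the categories aren't handed to us in advance, 
they get invented as the data arrives.  A hedged toy sketch of that (again 
Python, and again every threshold and name is an arbitrary illustration, 
not any real system's design):

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

categories = []  # each category is just a running-mean prototype

def file_item(item, threshold=1.0):
    # Assign the item to the nearest category, or create a new one.
    if categories:
        proto = min(categories, key=lambda c: distance(c["mean"], item))
        if distance(proto["mean"], item) < threshold:
            # Close enough: refine the existing category toward the item.
            n = proto["n"]
            proto["mean"] = [(m * n + x) / (n + 1)
                             for m, x in zip(proto["mean"], item)]
            proto["n"] = n + 1
            return
    # Nothing close enough: invent a new category on the spot.
    categories.append({"mean": list(item), "n": 1})

# Unlabeled observations arrive interleaved, as in day-to-day experience.
stream = [(0.1, 0.2), (5.0, 5.1), (0.0, 0.1), (4.9, 5.2), (0.2, 0.0)]
for item in stream:
    file_item(item)

print(len(categories), "categories formed")  # expect 2

That's hypothesize-and-test in miniature: each new item either confirms an 
existing category (and sharpens it) or forces a new hypothesis.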

>> Something to do with: "if you teach someone, they'll never learn."

I sort of disagree with this.  Teaching certainly gives you data, 
information, and *some* knowledge -- but integrating and really learning it 
requires practice.  That's why teachers teach and then assign homework.

  ----- Original Message ----- 
  From: Mike Tintner 
  To: agi@v2.listbox.com 
  Sent: Thursday, April 10, 2008 8:14 AM
  Subject: Re: [agi] How Bodies of Knowledge Grow


  MW/MT:  Correct me, but I haven't seen any awareness in AI of the huge 
difficulties that result from the problem of: how do you test acquired 
knowledge? 

  MW: You're just not seeing it.  It's generally phrased as "converting data 
to knowledge" or "concept formation", and it's currently envisioned more as 
a problem of how you do it (acquire knowledge and store it) than of how you 
test that you've been successful at it (since it's tough to test something 
that you don't even know how to do yet).  The AI field is very aware of this 
problem, but it's almost a cart-before-the-horse problem.  Once we know how 
to acquire and store knowledge, we can develop metrics for testing it -- 
but, for now, it's too early to go after the problem.

  This is interesting. I strongly suspect AI has it very wrong. We're 
recognising that perception is not, as was once thought, a fairly passive 
reception of impressions, later checked and corrected by the rational brain, 
but active, intelligent exploration from the very beginning. You describe 
AI, probably correctly, as passively registering and then organising facts 
and only then -- only it hasn't yet got around to that stage -- testing 
them. Actually, I suspect, human knowledge-gathering is active and 
exploratory from the beginning -- i.e. we begin with questions about the 
knowledge we're gathering (whether in a book or in, say, conversations or 
watching a movie), make predictions as we go along, and continuously test 
them. Day-to-day individual knowledge-gathering, if this is correct, is 
very like the collective knowledge-gathering of science. The passive 
approach seems more logical and simpler, but the active approach is in fact 
much more practical and essential. I would hazard that this might be in 
part because we don't have outside helpers, as AI/AGI systems do, giving us 
a categorical/filing structure for info beforehand, but have to create one 
on the fly -- and so should a true AGI. Something to do with: "if you teach 
someone, they'll never learn."

  But these are areas I'm just starting to dip into. Thanks for the book 
recommendation.
