Ed Porter writes:
 
> How the concept of "knight" poofs into existence during a conversation 
> about chess is no great mystery for a Novamente-like system.  If a 
> Novamente has former experience with chess, it will have, within its 
> hierarchical memory, recorded patterns and experiences with chess 
> knights, and links between them and the representation for "knight."
 
To me this just reads that the concept of "knight" poofs into existence 
because it was already there.  The interesting question is not how the 
audible sound "knight" connects to an existing concept; it is where the 
concept came from -- specifically.  In the scenario where somebody verbally 
explains chess, there are no prior sensory experiences with knights to draw 
from... but that is not the central point I was trying to get at.  For me it 
is not quite enough to say that somewhere in a vaguely-described hierarchical 
memory there will be some unspecified patterns that correspond in some 
unclear way to chess knights, and that these representations will, through 
some method I don't fully understand, get clustered into something that will 
do what a "knight" concept should do (although we can't say for sure exactly 
what that is).
 
Note that the flaw here is with my understanding -- because I cannot "see" 
exactly how these things would happen and work for a specific case, I can't 
conclude for myself that they would do so, however brilliant Ben is and however 
fascinating on a general level his ideas are (and they are extremely 
interesting to me).
 
You seem to be looking for a disproof, but you won't find one: you can't 
disprove claims about a system that is not fully understood.  That, in turn, 
is why I'm questioning the forcefulness of your belief.  You seem to have 
drawn conclusions about the technical capabilities of a system given only a 
sketchy English description of it.  To me that's a leap of faith.
 
Note that I am not criticizing Novamente; I think it's the most interesting AGI 
system out there and it has a chance of succeeding.
> Regarding the sufficiency of truth values, Novamente also 
> uses importance values, which are just as important as truth values.
 
Yes, that's true; I should have written: [I] have some concerns about things 
like whether propagating truth+importance values around is really a very 
effective modeling substrate for the world of objects and ideas we live in [...]
 
The usual response to questions about Novamente's capabilities seems to be to 
say "it can do that, but the method hasn't been described yet" or "ok, but all 
we have to do is add [neural gas, or whatever] to it and hook it up, and then 
we're good to go."
 
I hope those things are true and look forward to seeing the engineering play 
out.  But you can't blame people for retaining a "we'll see" attitude at the 
present time, I think.
 

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/