For sanity's sake I greatly appreciate the indiscriminate application of the 
system to both self and world. I am not completely convinced this is how humans 
function, as we definitely have certain self-oriented instincts, but it will be 
interesting to see what kinds of behaviors emerge with this more "objective" 
structure. As for the "others" argument, I think it depends on how much we want 
to pre-teach it. If we truly want it to rely on structures gathered from 
pattern matching, then we should not discriminate between others and world; 
heck, we don't even really need to distinguish between self and world, at least 
as far as design philosophy is concerned. 
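
To make the "no built-in distinction" idea concrete, here is a minimal Python 
sketch (purely hypothetical names, not OpenCog's actual API): every entity is 
stored as a plain node with no self/others/world tag, and any notion of "self" 
has to emerge from observed regularities, e.g. entities whose state changes 
track the commands the system issues.

from collections import defaultdict

class UniformStore:
    """Stores observations about entities with no built-in self/others/world split."""
    def __init__(self):
        # entity -> list of (command_issued, state_changed) observations
        self.observations = defaultdict(list)

    def observe(self, entity, command_issued, state_changed):
        """Record one timestep: did we issue a command, and did this entity change?"""
        self.observations[entity].append((command_issued, state_changed))

    def self_likeness(self, entity):
        """Fraction of timesteps where the entity's changes tracked our commands.
        A high score is a candidate "part of self"; nothing is hard-coded."""
        obs = self.observations[entity]
        if not obs:
            return 0.0
        agree = sum(1 for cmd, changed in obs if cmd == changed)
        return agree / len(obs)

store = UniformStore()
for cmd in (True, False, True, True, False):
    store.observe("gripper", cmd, cmd)       # moves exactly when commanded
    store.observe("weather", cmd, not cmd)   # uncorrelated with commands

print(store.self_likeness("gripper"))  # 1.0 -> emerges as "self"
print(store.self_likeness("weather"))  # 0.0 -> stays "world"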

Sure, at first, with simple systems, its "self" is easy to distinguish: its 
body, its host machine, etc. But as we drive integration, the lines could 
easily blur, especially as it develops new methods of interacting with the 
world. For example, if it matched patterns in markets and learned to create 
accounts linking into market APIs it interacted with directly, one could argue 
that the integration system it developed and uses is part of its "self." If it 
could match market patterns well enough to achieve high accuracy and control, 
that could arguably be considered an extension of "self." 
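
Continuing the sketch above (hypothetical names again): the same criterion 
would pull the market integration into "self" the moment its behavior tracks 
the system's decisions as tightly as an original actuator does.

store = UniformStore()  # reusing the class from the sketch above
for decided_to_trade in (True, True, False, True):
    # the API client the system built fires orders exactly when it decides to
    store.observe("market_api_client", decided_to_trade, decided_to_trade)
    # the market price itself moves regardless of the system's decisions
    store.observe("market_price", decided_to_trade, False)

print(store.self_likeness("market_api_client"))  # 1.0 -> extension of "self"
print(store.self_likeness("market_price"))       # 0.25 -> remains "world"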

With a system built on the concepts OpenCogPrime is built on, yes, there is an 
incredible cleanliness and simplicity to it, but a lot of our human concepts 
don't apply cleanly. 
