Thanks again, Richard, for continuing to make your view on this topic clear to 
those who are curious.
 
As somebody who has tried in good faith and with limited but nonzero success to 
understand your argument, I have some comments.  They are just observations 
offered with no sarcasm or insult intended.
 
1) The presentations would be a LOT clearer if you did not always start with 
"Suppose that..." and then make up a hypothetical situation.  As a reader I 
don't care about the hypothetical situation, and it is frustrating to be forced 
into trying to figure out if it is somehow a metaphor for what I *am* 
interested in, or what exactly the reason behind it is.  In this case, if you 
are actually talking about a theory of how evolution produced a significant 
chunk of human cognition (a society of CBs), then just say so and lead us to 
the conclusions about the actual world.  If you are not theorizing that the 
evolution/CBs thing is how human minds work, then I do not see the benefit of 
walking down the path.  Note that the basic CB idea you use here strikes me as 
a good one; it resonates with things like Minsky's Society of Mind, as well as 
the intent behind things like Hall's Sigmas and Goertzel's subgraphs.
 
2) Similarly, when you say 
> if we were able to look inside a CB system and see what the CBs are
> doing [Note: we can do this, to a limited extent: it is called
> "introspection"], we would notice many aspects of CB behavior ...
 
It would be a lot better if you left out the "if" and the "would".  Say "when 
we look inside this CB system..." and "we do notice many aspects..." if that is 
what you mean.  If, again, this is some sort of strange hypothetical universe, 
then as a reader I am not very interested in speculations about it.
 
3) When you say
 
> But now, here is a little problem that we have to deal with. It turns
> out that the CB system built by evolution was functioning *because*
> of all that chaotic, organized mayhem, *not* in spite of it.
 
Assuming that you are actually talking about human minds instead of a 
hypothetical universe, this is a very strong statement.  It is a theory about 
human intelligence that needs some support.  It is not necessarily a theory 
about "intelligence-in-general"; linking it to intelligence in general would be 
another theory requiring support.  You may or may not think that "intelligence 
in general" is a coherent concept; given your recent statements that there can 
be no formal definition of intelligence, it's hard to say whether 
"intelligence" that is not isomorphic to human intelligence can exist in your 
view.
 
4) Regarding:
 
> Evolution explored the space of possible intelligent mechanisms. In the
> course of doing so, it discovered a class of systems that work, but it
> may well be that the ONLY systems in the whole universe that can
> function as well as a human intelligence involve a small percentage of
> weirdness that just balances out to make the system work. There may be
> no cleaned-up versions that work.
 
The natural response is:  sure, this "may well be", but it just as easily "may 
well not be".  This is addressed in your concluding points, which say that it 
is not definite, but is very likely.  As a reader, I do not see a reason to 
suppose that this is true.  You offer only the circumstantial evidence that AI 
has failed for 50 years, but there are many other possible reasons for this:
 
- Maybe it's just hard.  Many aspects of the universe took more than 50 years 
to understand, and many are still not understood.  I personally think that if 
this is true we are unlikely to be just a few years from the solution, but it 
does seem like a reasonable viewpoint.
 
- maybe "logic" just stinks as a tool for modeling the world.  it seemed 
natural but looking at the things and processes in the human universe logically 
seems like a pretty poor idea to me.  maybe "probabilistic logic" of one sort 
or another will help.  but the point here is that it might not be a complex 
systems issue, it might just be a knowledge representation and reasoning issue. 
 perhaps generated or evolved "program fragments" will fare better; perhaps 
things that look like "neural clusters" will work, perhaps we haven't 
discovered a good way to model the universe yet.
 
- Maybe we haven't ripped the kitchen sink out of the wall yet... maybe 
"intelligence" will turn out to be a conglomeration of 1543 different 
representation schemes and reasoning tricks, but we've only put a fraction of 
them together so far and therefore only covered a small section of what 
intelligence needs to do.
 
5) Of course, the argument would be strengthened by a somewhat detailed 
suggestion of how AI research *should* proceed.  You give some arguments for 
why certain (unspecified) approaches *might* not work, but nothing beyond the 
barest hint of what to do about it, which doesn't motivate anybody to give much 
more than a shrug to your comments.  I wonder what you expect people to do in 
response to an argument that offers only criticism, and criticism not even 
aimed at any specific approach.
 
I know that responding to long messages like this can be time-consuming, so 
don't feel that you need to.  It does seem that, for whatever reason, most of 
the readers on these mailing lists continue to "not get it"; if you care why 
that is, this message is intended only as a data point -- why *I* don't get it.

