Interesting conversation.  I wanted to suggest something about how an AGI
might be qualitatively different from a human.  One possible difference
could be an overriding thoroughness.  People generally don't put in the
effort to consider all the possibilities in the decisions they make, but
computers and computer programs find it easier to look at things
thoroughly, because they don't get tired.  Deep Blue vs. Kasparov comes to
mind, I guess, but really chess programs in general, with their relentless
thoroughness.  They don't make the kinds of simple mistakes that people,
or at least I for one, do.  Another example is that unusual proof about
equilateral triangles.  It was the kind of thing people wouldn't think of,
but if you go through all the combinations, you might find it.  So an AGI
might tend to find nifty hidden little possibilities that wouldn't
normally occur to people, who tend to focus through particular viewpoints.
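To make the contrast concrete, here is a minimal toy sketch of my own (not from any chess engine or actual proof): a greedy heuristic that "focuses with a viewpoint" misses a combination that plain exhaustive enumeration finds, precisely because the latter tirelessly checks everything.

```python
from itertools import combinations

def greedy_pick(nums, target):
    """A human-style heuristic: grab the largest values that still fit.
    Fast, but it can lock itself out of the answer."""
    total, picked = 0, []
    for n in sorted(nums, reverse=True):
        if total + n <= target:
            picked.append(n)
            total += n
    return picked if total == target else None

def exhaustive_pick(nums, target):
    """Machine-style relentless thoroughness: try every combination."""
    for r in range(1, len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

# Greedy takes 8 then 6, gets stuck at 14, and fails;
# exhaustive search finds the "hidden" answer 6 + 5 + 4 = 15.
print(greedy_pick([8, 6, 5, 4], 15))      # None
print(exhaustive_pick([8, 6, 5, 4], 15))  # [6, 5, 4]
```

The toy is tiny, but the point scales: the exhaustive searcher never gets bored of the unpromising-looking branches, and that is where the surprising answers hide.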

Of course, people get a lot out of making little mistakes and finding
unexpected bonuses.  I would expect that an AGI would need the ability to
try things that aren't clearly correct; but whereas people do this by
accident, an AGI might consider the consequences first and then decide,
for some reason, to inject some randomness "intentionally".  This too
might be a qualitative difference.
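A sketch of what "intentional randomness" could look like, borrowing the familiar epsilon-greedy rule from reinforcement learning (the function names and the epsilon parameter are my own illustration, not anything proposed in this thread):

```python
import random

def choose_action(actions, value_estimate, epsilon=0.1, rng=random):
    """Usually pick the action with the highest estimated value, but with
    probability epsilon deliberately try a random one -- a considered
    decision to make a 'mistake', rather than an accidental slip."""
    if rng.random() < epsilon:
        return rng.choice(actions)  # intentional exploration
    return max(actions, key=value_estimate)

# With epsilon=0 the choice is purely the best estimate:
print(choose_action(["a", "b", "c"], {"a": 1, "b": 3, "c": 2}.get,
                    epsilon=0.0))  # "b"
```

The difference from human trial-and-error is exactly the one suggested above: the randomness is a parameter the agent sets on purpose, not a byproduct of fatigue or inattention.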

And a tendency toward far less carelessness and variation, when desired,
seems quite different from a human type of intelligence, but this leads
to an interesting question.  Isn't imperfection and variation central to
what we mean by intelligence?  For any question, you can find good
answers, bad answers, and sometimes brilliant answers that at first look
like bad answers.

And I've said it before, but it bears repeating in this context.  Real
intelligence requires that mistakes be made.  That's at odds with regular
programming, where you are trying to write programs that don't make
mistakes, so I have to wonder how serious people would really be about
pursuing intelligence in a machine if they knew what's involved.
andi



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
