Yes, I thought we disagreed.
To be clear: I'm saying - no society and culture, no individual intelligence.
The individual is part of a complex - and, in the human case, VAST - social web.
(How ironic, Ben, that you could be asserting your position while totally
embedded in the greatest social web ever - the Net. Your whole work depends on
the Web and speaks to it.)
Tom McCabe expresses another dimension of the "isolated individual" position.
He can, apparently, sit down and work out the prime numbers from 300 to 400
with pencil and paper all by himself - only it's with a system of maths that
took thousands of years for our society to develop, and millions if not
billions of years for human/animal society to initiate and evolve; with a
pencil and paper that are also the products of millions of years of human
society; on a desk and in a room that are provided to him and continually
supported, heated, lighted etc.; and with a body that is fed and watered by an
extremely complex society. But no, he and you are truly isolated individuals.
"Get over yourselves," guys.
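(As an aside - my own illustration, not anything Tom wrote - the computation
he describes takes a few lines of Python by simple trial division:)

    # My illustration, not from the original post: listing the primes Tom
    # mentions (between 300 and 400) by trial division.
    def is_prime(n):
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return n >= 2

    print([n for n in range(300, 401) if is_prime(n)])
    # [307, 311, 313, 317, 331, 337, 347, 349, 353, 359,
    #  367, 373, 379, 383, 389, 397]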
(And of course, all our acts of intelligence, whether we are directly aware of
it or not, are acts of social communication and exchange. You, Ben, are doing
AGI because you think it will help, as well as sell to, society, and you are
only able to practice it with the aid of teams of other people.)
And Tom cues me in perfectly with his reference to Evolutionary Psychology.
That is the perfect example of totally skewed, "isolated individual" thinking.
Scientific, evolutionary thinking has run parallel to your AI/AGI bias. It
thought, and still thinks, that a self-interested individual would be selfish
and not altruistic, so animal and human altruism could only be explained by an
appeal to the interest of their genes in their own preservation and evolution.
Actually, extreme selfishness is not smart at all, precisely because all of us
individual animals depend for our survival on our relationships with our
society - reciprocity and fairness of exchange, together with cooperation, are
very sensible, rewarding and essential behaviour. And altruism is just as deep
and fundamental an instinct as egotism - as anyone other than a near-autistic
scientist should be able to see. "No man is an island."
POINT 2: Our equally fundamental disagreement is about the "nature of reality"
that any AGI or any human or any animal must deal with. Let me define it -
since it is I, rather than you, who is really asserting the opposite position
here: it isn't so much "chaotic" as "crazy and mixed up," as opposed to
"rational and consistent."
Narrow AI deals with straightforward problems - rational, consistent problems
that can be solved in rational, consistent ways, even though they may involve
degrees of uncertainty and demand cycling (algorithmically/systematically)
through different approaches.
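(A minimal sketch, in Python, of what I mean by that kind of systematic
cycling - my own illustration with made-up candidate approaches, not anyone's
actual system:)

    # Sketch only: narrow-AI-style cycling - try each known method in
    # turn until one of them handles the problem. The "approaches" below
    # are hypothetical, for solving a*x + b = 0.
    def solve(problem, approaches):
        for approach in approaches:
            result = approach(problem)
            if result is not None:   # this approach handled it
                return result
        return None                  # every known approach failed

    approaches = [
        lambda p: 0.0 if p["b"] == 0 else None,               # trivial case
        lambda p: -p["b"] / p["a"] if p["a"] != 0 else None,  # linear solve
    ]
    print(solve({"a": 2.0, "b": -6.0}, approaches))           # -> 3.0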
AGI must deal with problematic problems - crazy (i.e. non-rational), mixed-up
problems that can only be solved in crazy, mixed-up ways, where you are not
just uncertain but fundamentally confused (and should be so lucky as to have a
neat algorithm), and have to patch together solutions by "groping", often
blindly, for ideas.
(The "crazy, (non-rational), mixed up" nature of the world - the fact that
Richard can be friendly one day, & aggressive the next, & neither you nor he
know when he will be which, or quite how to deal with him - - is as deep and
fundamental an attribute as "chaos"/complexity).
You can only assert the possibility of an essentially rational AGI because, I
suggest, you are living in a virtual, structured world. The real,
ill-structured world - along with every single activity humans and animals
engage in - isn't like that.
Ben:
MT: No AGI or agent can truly survive and thrive in the real world if it is
not similarly part of a collective society and a collective science and
technology - and that is because the problems we face are so-o-o problematic.
Correct me if I'm wrong, but my impression of all the discussion here is that
it assumes some variation of the classic science-fiction scenario, pace 2001 /
The Power etc., where an individual computer takes power, if not takes off by
itself. Ain't gonna happen - no isolated individual can truly be intelligent.
Just to be clear -- I don't agree with this ... I think it's an undue
projection of the particular nature of human intelligence onto the domain of
nonhuman minds.
A superhuman AI could be in essence a "culture unto itself", not requiring a
society to maintain a culture as humans do.
This certainly doesn't require that said AI be able to predict the weather
and otherwise get around the chaotic, unpredictable nature of physical
reality...
-- Ben G