On Oct 30, 2007 7:17 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
>
>
> Yes, I thought we disagreed.
>
> To be clear: I'm saying - no society and culture, no individual
> intelligence. The individual is part of a complex - & in the human case -
> VAST social web. (How ironic, Ben, that you could be asserting your position
> while totally embedded in the greatest social web ever - the Net. Your whole
> work depends on the Web and speaks to it).

Just because something is useful does not mean that it is necessary
for general intelligence. I am, after all, typing this on a keyboard;
yet nobody argues that without a keyboard I wouldn't be intelligent.

> Tom McCabe expresses another dimension of the "isolated individual"
> position.  He can sit down and work out prime nos. from 300-400 with
> pencil/paper all by himself apparently - only it's with a system of maths
> that took thousands of years for our society to develop,

Okay, let's stick me in ancient Mesopotamia, where nobody knew what a
prime number was. I can still, independent of anyone else, solve the
various engineering problems required to build an irrigation system.
If that's not far enough back, chimpanzees can use chains of logical
reasoning to work out which closed box contains a banana.
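
For what it's worth, the computation Tom describes is entirely
mechanical. A minimal Python sketch (plain trial division, the same
procedure you'd run with pencil and paper, nothing clever):

    # Trial division: no lookup tables, no outside help required.
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:   # divisors need only be checked up to sqrt(n)
            if n % d == 0:
                return False
            d += 1
        return True

    print([n for n in range(300, 401) if is_prime(n)])
    # [307, 311, 313, 317, 331, 337, 347, 349, 353, 359,
    #  367, 373, 379, 383, 389, 397]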

> and millions if not
> billions of years for human/animal society to initiate/evolve,

Humans need this because our brains run at roughly 200 Hz and our
communication channels run at roughly 300 baud. AGIs will not. If you
want to demonstrate that "AGIs will need XYZ", you must show it for
*all* intelligences, not just evolved intelligences, and *certainly*
not just humans!
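
(To put rough numbers on that gap, a back-of-envelope sketch. The
human figures are the ones above; the machine figures - a 2 GHz
processor and a gigabit Ethernet link - are just ordinary 2007
commodity hardware, picked for illustration.)

    # Rough substrate comparison; all figures are order-of-magnitude.
    neuron_rate_hz = 200.0    # peak neuron firing rate, as above
    cpu_clock_hz   = 2e9      # commodity 2 GHz processor
    speech_baud    = 300.0    # rough human communication bandwidth
    ethernet_baud  = 1e9      # gigabit Ethernet link

    print("serial speedup:        %.0e" % (cpu_clock_hz / neuron_rate_hz))
    print("communication speedup: %.0e" % (ethernet_baud / speech_baud))
    # serial speedup:        1e+07
    # communication speedup: 3e+06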

> and a pencil
> and paper that are also the products of millions of years of human society,
> on a desk and in a room that are provided to him and continually supported
> and heated, lighted etc and with a body that is fed and watered by an
> extremely complex society.

You do realize that none of this is *necessary*, and that I could
still do the same work if you put me down naked in the middle of the
Canadian wilderness?

> But no, he, you are truly isolated, individuals.
> "Get over yourself" guys.

Have you visited North Korea recently? That's what their philosophy
is based on: the supremacy of the society over the individual
(although distorted somewhat by the Dear Leader's personality cult;
see the late Soviet Union for a purer example). It doesn't make for a
very pleasant place to live.

> (And of course, all our acts of intelligence, whether we are directly aware
> of it or not, are acts of social communication and exchange. You, Ben, are
> doing AGI because you think it will help as well as sell to society and only
> able to practice with the aid of teams of other people).

*I* talk about AGI because it will be powerful enough to make our
entire societal infrastructure obsolete. This has already happened
before: when the human species developed general intelligence, we
made all the biological infrastructure obsolete.

> And Tom cues me in perfectly with his reference to Evolutionary Psychology.
> That is the perfect example of totally skewed, "isolated individual"
> thinking.

Please go RTFM. Evolutionary psychology is a huge field, with an
enormous mountain of evidence supporting a large number of
established, textbook-level conclusions. Dismissing it is as absurd
as dismissing the Big Bang theory of the universe's origin. You'll
probably be postulating group selection next, so I'll pre-empt you:
it has already been debunked six ways from Sunday.

> Scientific, evolutionary thinking has been parallel to your AI/AGI
> bias.

To clarify, are you dismissing the entire idea of "scientific thinking"?

> It thought/thinks that a self-interested individual would be selfish
> and not altruistic.

That was already explained over thirty years ago. Go read The Selfish
Gene. Nobody in evolutionary theory predicts that animals should be
totally selfish.

> Animal and human altruism could only be explained by an
> appeal to the interest of their genes in their self-preservation and
> -evolution. Actually, extreme selfishness is not smart at all, precisely
> because all of us individual animals depend for our survival on our
> relationships with our society -   reciprocity &  fairness of exchange
> together with cooperation are very sensible, rewarding and essential
> behaviour. And altruism is just as deep and fundamental an instinct as
> egotism - as anyone other than near-autistic scientists should be able to
> see. "No man is an island.")..

It *is* true that altruism is a strong human instinct, but what does
this have to do with AGI?

> POINT 2:  Our equally fundamental disagreement is about the "nature of the
> reality" that any AGI or any human or any animal must deal with. Let me
> define it - since I rather than you am really asserting the opposite
> position here - it isn't so much "chaotic" as "crazy, and mixed up" as
> opposed to "rational and consistent."
>
> Narrow AI deals with straightforward problems - rational, consistent
> problems that can be solved in rational, consistent ways, even though they
> may involve degrees of uncertainty and demand cycling
> (algorithmically/systematically) through different approaches.
>
> AGI must deal with problematic problems - crazy, (i.e. non-rational)

Rationality is used to describe minds, not problems. Saying that a
problem is "non-rational" isn't even wrong; it's analogous to saying
that blue is circular.

> mixed
> up problems that can only be solved in crazy, mixed up ways, where you are
> not just uncertain but fundamentally confused, (and should be so lucky as to
> have a neat algorithm), and have to patch together solutions by "groping"
> often blindly for ideas..

You are mixing up difficulty and complexity with impossibility. Yes,
an AGI will have to deal with real world problems. Yes, an AGI will
have to use tricks and shortcuts rather than exact solutions because
of real world complexity. No, this does not mean super-powerful AGI is
impossible, or even exceptionally difficult.

> (The "crazy, (non-rational), mixed up" nature of the world - the fact that
> Richard can be friendly one day, & aggressive the next, & neither you nor he
> know when he will be which, or quite how to deal with him  - - is as deep
> and fundamental an attribute as "chaos"/complexity).

Richard's behavior only looks irrational on the surface if you know
nothing about the underlying complexity of his experiences. It may
well be rational if Richard has good reasons for acting the way he
does.

> You can only assert the possibility of an essentially rational AGI because,
> I suggest, you are living in a virtual, structured world. The real,
> ill-structured world - along with every single activity humans and animals
> engage in - isn't like that.

If a bathtub full of water is at 50 °C, and I believe that it is at
50 °C, my mind is operating in a rational manner. If I don't know how
it was heated to 50 °C, and I know that I don't know, my mind is
still operating in a rational manner: there are words for "I don't
know" in the language of rationality (they happen to be "maximum
entropy probability distribution").
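
(Concretely - a minimal sketch, with a made-up list of ten candidate
heating mechanisms purely for illustration - here is what "I don't
know" looks like when written down as a probability distribution:)

    import math

    # With no evidence favoring any hypothesis, the maximum-entropy
    # distribution over N candidates is uniform: honest ignorance.
    hypotheses = ["gas", "electric", "solar", "wood stove", "kettle",
                  "geothermal", "heat pump", "immersion", "steam", "other"]
    uniform = [1.0 / len(hypotheses)] * len(hypotheses)

    def entropy_bits(p):
        # Shannon entropy in bits; zero-probability terms contribute nothing.
        return -sum(q * math.log(q, 2) for q in p if q > 0)

    print(entropy_bits(uniform))          # log2(10) ~ 3.32 bits of ignorance
    print(entropy_bits([1.0] + [0.0]*9))  # 0 bits: complete certainty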

> Ben:
>
> > MT: No AGI or agent can truly survive and thrive in the real world, if it
> > is not similarly part of a collective society and a collective science and
> > technology - and that is because the problems we face are so-o-o
> > problematic. Correct me, but my impression of all the discussion here is
> > that it assumes some variation of the classic science fiction scenario, pace
> > 2001/The Power etc where an individual computer takes power, if not takes
> > off by itself. Ain't gonna happen - no isolated individual can truly be
> > intelligent.
>
> Just to be clear -- I don't agree with this ... I think it's an undue
> projection of the particular nature of human intelligence onto the domain of
> nonhuman minds.
>
> A superhuman AI could be in essence a "culture unto itself", not requiring a
> society to maintain a culture as humans do.
>
> This certainly doesn't require that said AI be able to predict the weather
> and otherwise get around the chaotic, unpredictable nature of physical
> reality...
>
> -- Ben G
