On 10/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Every speculation on this board about the nature of future AGI's has been
> pure fantasy. Even those which try to dress themselves up in some semblance
> of scientific reasoning. All this speculation, for example, about the
> friendliness and emotions of future AGI's has been nonsense - and often
> from surprisingly intelligent people.
>
> Why? Because until we have a machine that even begins to qualify as an AGI -
> that has the LEAST higher adaptivity - until IOW AGI's EXIST- we can't begin
> seriously to predict how they will evolve, let alone whether they will "take
> off." And until we've seen a machine that actually has functioning emotions
> and what purpose they serve, ditto we can't predict their future emotions.

We can't predict *exactly* what an AGI will do, but we can point out a
few obvious possibilities, like the AGI destroying the human species.

> So how can you cure yourself if you have this apparently incorrigible need
> to produce speculative fantasies with no scientific basis in reality
> whatsoever?
>
> I suggest : first speculate about the following:
>
> what will be the next stage of HUMAN evolution? What will be the next
> significant advance in the form of the human species - as significant, say,
> as the advance from apes, or - ok - some earlier form like Neanderthals?

Evolution is obsolete. A decade is not even a single clock tick of
evolutionary time. A decade is an *eternity* in software: ten years
ago, HTML was The Latest Hot New Thing that most people had just
started to use.

> Hey, if you are prepared to speculate about fabulous future AGI's,
> predicting that relatively small evolutionary advance shouldn't be too hard.
> But I suggest that if you do think about future human evolution your mind
> will start clamming up. Why? Because you will have a sense of physical/
> evolutionary constraints (unlike AGI where people seem to have zero sense of
> technological constraints), - an implicit recognition that any future human
> form will have to evolve from the present form  - and to make predictions,
> you will have to explain how.

AGIs do not follow the same constraints as evolved organisms, because
AGIs are not evolved: they are designed by us. If we want to scrap the
architecture and start over, we can. If evolution wants to scrap the
architecture, sorry, no luck: it's already in place and you're stuck
with it.

> And you will know that anything you say may
> only serve to make an ass of yourself. So any prediction you make will have
> to have SOME basis in reality and not just in science fiction. The same
> should be true here.

 - Tom

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=57199801-5d2c4e