Kaj Sotala’s [EMAIL PROTECTED] 9/29/2007 9:09 AM post was very good.

Regarding the issues discussed in it (and the thread it concerns) I have
the following questions for readers of this list:

1. Should the AGI mailing list be limited only to technical issues, or are
discussions about the social impact of AGI and how to promote AGI
appropriate?

                How should the answer to this question be determined?  By
those who created and/or paid for the AGIRI web site and this mailing
list?  Or should it be decided by a majority of the people using the list?
(If so, it seems many of them currently want to talk about AGI buzz and
promotion, and, if that is true, the proper tone and content of that
promotion would seem relevant.)

                I am somewhat of a newbie to mailing lists such as this, so
I don’t know.  I don’t want to violate the rules.  Is the Singularity off
topic?  Are issues concerning the realistically possible threats of AGI
off topic, and, if so, how does that square with the AGIRI mission
statement?

                                “Mission: Foster the creation of powerful
and _ethically positive_ Artificial General Intelligence.”  (Emphasis
added)

2.  Can there be anything close to internet-bubble-scale buzz about AGI
without the popular media fixating on the very issues some demand be
censored?

                We can honestly project initial-level AGI that would be
extremely valuable and would, itself, be quite safe.  But after 37 years
of talking to non-computer people about human-level AI, I can tell you, we
cannot mention anything close to human-level AIs without scaring people.
Average Americans may be ignorant and apathetic about many things, but
they are not stupid.  They understand that machines capable of doing most
of what they do are, at least potentially, threatening -- if not to their
lives, at least to their jobs and to their sense of mankind’s role in the
world.  So if we want to promote powerful AGI, we should have reasonable
answers to questions of how we are going to avoid, or deal with, its
potential threats.  We might try to sweep such issues under the rug, but
they won’t stay there.

                I think in this case, as in most, honesty is the best
policy.  It is not only the most moral, but also the one most likely to
help promote beneficial AGI that is as powerful as possible.

3.   How many readers of this list think it relevant to discuss the
technical challenges of keeping AGI safe for mankind, i.e., the
“ethically positive” part of the AGIRI mission statement?

                Since the current human-level intelligences on the planet,
i.e., us meat bags, pose many serious threats to each other, it seems
likely that human-level artificial intelligences could be at least as
threatening -- unless, that is, we devote some thought to ensuring they
are not.  This is particularly true because human-level AGIs will probably
be much more effective at communicating with and programming computers
than we are, and thus their ability to hack the other computers on which
our lives currently depend will be much greater than that of the best
human hacking teams.

                And vastly superhuman intelligences will be even more
threatening unless we take care to prevent them from being so.  And once
we make human-level intelligences, it should quickly become possible to
make superhuman intelligences 10, 100, or 1,000 times more powerful,
because intelligent systems, and the cross-sectional bandwidth they
demand, will scale relatively well on the type of massively parallel
hardware that can currently be designed, and because machine intelligences
can communicate with each other at vastly superhuman rates, enabling them
to combine their intelligences much more effectively than humans can.

                So it seems rather naïve not to recognize that creating
“powerful and ethically positive Artificial General Intelligence” might
involve challenging technical issues.

                But I have no objection to, and would actually welcome, a
diminution of Singularity talk on this list, such as talk about brain
uploads (of which I have been guilty), moon-sized artificial
intelligences, how natural selection will probably cause superhuman
intelligences to be much nicer to each other than we are (while causing
them to disdain us because of our much lower morality), or making the
whole universe into one big AI (by which people presumably mean something
far beyond the amazing computer of physical reality it already is).

                I am proposing that we have discussions such as the one
starting halfway through page 204 of Goertzel’s The Hidden Pattern, where
he writes:

                                “A number of contemporary AI theorists,
(e.g., Yudkowski, 2005) argue that logical inference rather than complex,
self organization should be taken as the foundational aspect of AI,
because complex self-organization is inevitably tied to uncontrollability
and unpredictability, and it’s important that the superhuman AI’s we’ll
eventually create should be able to rationally and predictably chart their
own growth and evolution.

                                “I agree that it’s important that powerful
AI’s be more rational than humans, with a greater level of
self-understanding than we humans display.  But, I don’t think the way to
achieve this is to consider logical deduction as the foundational aspect
of intelligence.  Rather, I think one needs complex, self-organizing
system of patterns on the emergent level – and then solve the problem of
how a self-organizing pattern system may learn to rationally control
itself.  I think this is a hard problem but almost surely a solvable one.”

                The portion of this quote I have underlined matches the
apparent implication in Don Detrich’s recent (9/29/2007 7:24 PM) post that
we cannot really understand the threats of powerful AGI until we get
closer to it and, thus, that we should delay thinking and talking about
such threats until we learn more about them.

                I would like to know how many other readers of this list
would be interested in discussions such as:

                                (for example, with regard to the
above-quoted text:)

                                -Which of Goertzel’s or Yudkowsky’s
approaches is more likely to help achieve the goal of creating powerful
and ethical AGI?

                                -How long, as we develop increasingly
powerful self-organizing systems, would it be safe to delay focusing on
the problem Goertzel refers to, of how to make self-organizing pattern
systems rational -- where “rational” presumably means rational for
mankind?  And how will we know, before it’s too late, how long is too
long?

                                -And what basis does Goertzel have for
saying the problem of how a self-organizing pattern system may learn to
[ethically?] control itself is almost surely a solvable one?

                It seems to me such discussions would be both technically
interesting and extremely valuable for AGIRI’s mission of fostering
“powerful and ethically positive Artificial General Intelligence.”

                I also think that having more reasoned answers to such
questions will actually make it easier to promote AGI funding.


Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: Kaj Sotala [mailto:[EMAIL PROTECTED]
Sent: Saturday, September 29, 2007 9:09 AM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content


On 9/29/07, Russell Wallace <[EMAIL PROTECTED]> wrote:
> I've been through the specific arguments at length on lists where
> they're on topic, let me know if you want me to dig up references.

I'd be curious to see these, and I suspect many others would, too. (Even
though they're probably from lists I am on, I haven't followed them nearly
as actively as I could've.)

> I will be more than happy to refrain on this list from further mention
> of my views on the matter - as I have done heretofore. I ask only that
> the other side extend similar courtesy.

I haven't brought up the topics here, myself, but I feel the need to note
that there has been talk about massive advertising campaigns for
developing AGI, campaigns which, I quote,

On 9/27/07, Don Detrich - PoolDraw <[EMAIL PROTECTED]> wrote:
> However, this organization should take a very conservative approach and
> avoid over speculation. The objective is to portray AGI as a difficult
> but imminently doable technology. AGI is a real technology and a real
> business opportunity. All talk of Singularity, life extension, the end
> of humanity as we know it and run amok sci-fi terminators should be
> portrayed as the pure speculation and fantasy that it is. Think what
> you want to yourself, what investors and the public want is a useful
> and marketable technology. AGI should be portrayed as the new
> internet, circa 1995. Our objective is to create some interest and
> excitement in the general public, and most importantly, investors.

From the point of view of those who believe that AGI is a real danger, any
campaigns to promote the development of AGI while specifically ignoring
discussion about the potential implications are dangerously irresponsible
(and, in fact, exactly the thing we're working to stop). Personally, I am
ready to stay entirely quiet about the Singularity on this list, since it
is, indeed, off-topic - but that is only for as long as I don't run
across messages which I feel are helping foster an atmosphere that could
lead to humanity's demise.

(As a sidenote - if you really are convinced that any talk about
Singularity is religious nonsense, I don't know if I'd consider it a
courtesy for you not to bring up your views. I'd feel that it would be
more appropriate to debate the matter out, until either you or the
Singularity activists would be persuaded of the other side's point of
view. After all, this is something that people are spending large amounts
of money on (my personal donations to SIAI sum to over a 1000 USD, and are
expected to only go up once I get more money) - if they're wasting their
time and money, they'd deserve to know as soon as possible so they can be
more productive with their attention.)

--
http://www.saunalahti.fi/~tspro1/ | http://xuenay.livejournal.com/

Organizations worth your time:
http://www.singinst.org/ | http://www.crnano.org/ | http://lifeboat.com/

