Re: [singularity] Defining the Singularity

2006-10-26 Thread Starglider
Matt Mahoney wrote:
 'Access to' isn't the same thing as 'augmented with' of course, but I'm
 not sure exactly what you mean by this (and I'd rather wait for you to
 explain than guess).
 
 I was referring to one possible implementation of AGI consisting of part
 neural or brainlike implementation and part conventional computer (or network)
 to combine the strengths of both.

I'm sure that a design like this is possible, and there are quite a few
people trying to build AGIs like this, either with close integration
between the connectionist and code-like parts or having them as relatively
discrete but communicating parts. Yes, it should be more powerful than
connectionism on its own, and no, it's not necessarily any more Friendly,
but if hard structural constraints (what can trigger what, what can
modify what) can be reliably enforced via the non-connectionist elements,
then it has the potential to be more Friendly than a purely connectionist
system could be.
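
To make that concrete, here is a minimal sketch of the kind of hard
constraint I mean - purely illustrative, with made-up module names, not a
real design:

# Toy sketch: a fixed, non-learned permission table gates which modules may
# trigger or modify which others, regardless of what the connectionist parts
# propose.  Module names are invented for illustration.
ALLOWED = {
    "trigger": {"perception": {"planner"}, "planner": {"motor", "memory"}},
    "modify":  {"planner": {"memory"}},
}

def request(kind, source, target):
    """Approve or veto a proposed inter-module action against the fixed table."""
    if target in ALLOWED.get(kind, {}).get(source, set()):
        return True
    raise PermissionError(f"{source} may not {kind} {target}")

# request("modify", "perception", "memory") raises, no matter how strongly
# the learned components favour it.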

What I'm not sure about is that you gain anything from 'neural' or
'brainlike' elements at all. The brain should not be put on a pedestal.
It's just what evolution on earth happened to come up with, blindly
following incremental paths and further hobbled by all kinds of cruft and
design constraints. There's no a priori reason to believe that the brain is
a /good/ way to do anything, given hardware that can execute arbitrary
Turing-equivalent code. Of course it's still pragmatic to try copying the
brain when we can't think of anything better (i.e. don't have the
theoretical basis or tools to do better than attempt crude imitations).
As with rational AGI (and FAI) in general, I don't expect people (who
haven't deeply studied it and tried to build these systems) to accept that
this is true, only that it might be true: there may be much more efficient
algorithms that effectively outperform connectionism in all cases.
Getting some confirmation (or otherwise) of that is one of the things I'm
working on at present.

 The architecture of this system would be that the neural part has the
 capability to write programs and run them on the conventional part in
 the same way that humans interact with computers.

Neural nets are a really bad fit with code design. Current ANNs aren't
generally capable of from-requirements design anyway, as opposed to pattern
recognition and completion. Writing code involves juggling lots of logical
constraints and boolean conditions, so it's actually one of the few real
world tasks that is a natural fit with predicate logic. This is why humans
currently use high-level languages and error-checking compilers. You could
of course use a connectionist system as the control mechanism to direct
inference in a logic system, in a roughly analogous manner.
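
A deliberately crude sketch of that division of labour (nothing more than an
illustration): a forward-chaining rule engine does the sound logical work,
while the connectionist part is reduced here to a stub scoring function that
merely chooses which applicable rule to try next:

# Sketch only: the symbolic engine guarantees that every step is a valid
# inference; the learned component just prioritises among valid steps.
facts = {"socrates_is_man"}
rules = [
    ({"socrates_is_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def learned_score(rule):
    # Stand-in for a trained network estimating how promising a rule looks.
    premises, conclusion = rule
    return len(premises)

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        applicable = [r for r in rules if r[0] <= facts and r[1] not in facts]
        for premises, conclusion in sorted(applicable, key=learned_score, reverse=True):
            facts.add(conclusion)   # the inference step itself stays purely symbolic
            changed = True
    return facts

print(forward_chain(facts, rules))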

 This seems to me to be the most logical way to build an AGI, and
 probably the most dangerous

I'd agree that it looks good when you first start attacking the problem.
Classic ANNs have some demonstrated competencies, classic symbolic
AI has some different demonstrated competencies, as do humans and
existing non-AI software. I was all for hybridising various forms of 
connectionism, fuzzy symbolic logic, genetic algorithms and more at one
point. It was only later that I began to realise that most if not all of
those mechanisms were neither optimal, adequate, nor even all that useful.
Most dangerous, perhaps, in that highly hybridised systems that overcome
the representational communication barrier between their subcomponents
are probably unusually prone to early takeoff. It's easy to proceed without
really understanding what you're doing if you take the 'kitchen sink'
approach of tossing in everything that looks useful (letting the AI sort
out how to actually use it). Not all integrative projects are like that,
but quite a few are, and yes they are dangerous.

 I believe that less interaction means less monitoring and control, and
 therefore greater possibility that something will go wrong.

Plus humans in the decision loop inherently slow things down greatly
compared to an autonomous intelligence running at electronic speeds.

 As long as human brains remain an essential component of a superhuman
 intelligence, it seems less likely that this combined intelligence will
 destroy itself.

Probably true, but 'destroy itself' is a minor and recoverable failure
scenario unless the intelligence takes a good chunk of the scenery with
it. It's the 'start restructuring everything in reach according to a
non-Friendly goal system' outcome that's the real problem.

 If AGI is external or independent of human existence, then there is a
 great risk.  But if you follow the work of people trying to develop AGI,
 it seems that is where we are headed, if they are successful.

It's inevitable. Someone is going to build one eventually. The only
useful argument is 'we should develop intelligence enhancement first,
so that we have a better chance of getting AGI right'. You can go and
research 

Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-26 Thread Kaj Sotala

On 9/24/06, Ben Goertzel [EMAIL PROTECTED] wrote:

Anyway, I am curious if anyone would like to share experiences they've
had trying to get Singularitarian concepts across to ordinary (but
let's assume college-educated) Joes out there.  Successful experiences
are valued, but so are unsuccessful ones.  I'm specifically interested in


Personally, I've noticed that opposition to the idea of a
Singularity falls into two main camps:

1) Sure, we might get human-equivalent hardware in the near future,
but we're still nowhere near having the software for true AI.

2) We might get a Singularity within our lifetimes, but it's just as
likely to be a rather soft takeoff and thus not really *that* big of
an issue - life-changing, sure, but not substantially different from
the development of technology so far.

The difficulty with arguing against point 1 is that, well, I don't
know all that much that'd support me in arguing against it. I've had
some limited success with quoting Kurzweil's "brain scanning
resolution is constantly getting better" graph and pointing out that
we'll become capable of doing a brute-force simulation at some point,
but as for anything more elegant, not much luck.

Moore's Law seems to work somewhat against point 2, but people often
question how long we can assume it to hold.
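
For what it's worth, the extrapolation I walk people through is nothing more
than the arithmetic below; the brain-equivalent figure and the starting point
are rough Kurzweil/Moravec-style assumptions, not established facts:

import math

brain_ops_per_sec = 1e16     # assumed order-of-magnitude brain-equivalent estimate
current_ops_per_sec = 1e12   # assumed high-end machine of today
doubling_time_years = 2.0    # assumed Moore's-law doubling period

doublings = math.log2(brain_ops_per_sec / current_ops_per_sec)
print(f"{doublings:.0f} doublings, roughly {doublings * doubling_time_years:.0f} years")
# ~13 doublings, ~27 years - if, of course, the doubling trend holds that long.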


approaches, metaphors, foci and so forth that have actually proved
successful at waking non-nerd, non-SF-maniac human beings up to the
idea that this idea of a coming Singularity is not **completely**
absurd...


Myself, I've recently taken a liking to the Venus flytrap metaphor I
stole from Robert Freitas' "Xenopsychology". To quote my in-the-works
introductory essay to the Singularity (yes, it seems to be
in-the-works indefinitely - short spurts of progress, after which I
can't be bothered to touch it for months at a time):

In his 1984 paper "Xenopsychology" [3], Robert Freitas introduces the
concept of the Sentience Quotient (SQ) as a measure of a mind's intellect.
It is based on the size of the brain's neurons and their
information-processing capability. The dumbest possible brain would
have a single neuron massing as much as the entire universe and
require a time equal to the age of the universe to process one bit,
giving it an SQ of -70. The smartest possible brain allowed by the
laws of physics, on the other hand, would have an SQ of +50. While
this only reflects pure processing capability and doesn't take into
account the software running on the brains, it's still a useful rough
guideline.

So what does this have to do with artificial intelligences? Well, Freitas
estimates Venus flytraps to have an SQ of +1, while most plants have
an SQ of around -2. The SQ for humans is estimated at +13. Freitas
estimates that electronic sentiences could be built with an SQ of +23 -
making the difference between us and advanced AIs *nearly as high as the
difference between humans and Venus flytraps*. It should be obvious that,
compared to this, even the smartest humans would stand no chance
against the AI's intellect - any more than we should be afraid of a
genius carnivorous plant suddenly developing a working plan for taking
over all of humanity.
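
Freitas defines SQ as the base-10 logarithm of information-processing rate
(bits per second) divided by processor mass (kilograms). A quick sketch for
checking the figures above - the inputs are rounded, order-of-magnitude
assumptions on my part:

import math

def sq(bits_per_second, mass_kg):
    """Sentience Quotient: log10 of information-processing rate per unit mass."""
    return math.log10(bits_per_second / mass_kg)

# Dumbest possible brain: one bit per age of the universe, massing ~10^52 kg
# (assumed order of magnitude for the ordinary matter in the universe).
print(round(sq(1 / 4.3e17, 1e52)))   # -> -70
# A human-ish data point; both inputs are rough assumptions for illustration.
print(round(sq(1e13, 1.4)))          # -> +13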

http://www.saunalahti.fi/~tspro1/Esitys/009.png has the same
compressed in a catchy presentation slide (some of the text is in
Finnish, but you ought to get the gist of it anyway).



Re: Re: [singularity] Defining the Singularity

2006-10-26 Thread Ben Goertzel

Hi,

About hybrid/integrative architectures, Michael Wilson said:

I'd agree that it looks good when you first start attacking the problem.
Classic ANNs have some demonstrated competencies, classic symbolic
AI has some different demonstrated competencies, as do humans and
existing non-AI software. I was all for hybridising various forms of
connectionism, fuzzy symbolic logic, genetic algorithms and more at one
point. It was only later that I began to realise that most if not all of
those mechanisms were neither optimal, adequate, nor even all that useful.


My own experience was along similar lines.

The Webmind AI Engine that I worked on in the late '90s was a hybrid
architecture that incorporated learning/reasoning/etc. agents based
on a variety of existing AI methods, each only lightly customized.

On the other hand, the various cognitive mechanisms in Novamente
mostly had their roots in standard AI techniques, but they have been
modified, customized and rethought so extensively that by now they are
really fundamentally different things.

So I did find that even when a standard narrow-AI technique sounds on
the surface like it should be good at playing some role within an AGI
architecture, in practice it generally doesn't work out that way.
Often there is **something vaguely like** that narrow-AI technique
that makes sense in an AGI architecture, but the path from the
narrow-AI method to the AGI-friendly relative can require years of
theoretical and experimental effort.

An example is the path from evolutionary learning to the probabilistic
evolutionary learning we've designed for Novamente (which is hinted at
in Moshe Looks' thesis work at www.metacog.org; but even that is only
halfway to the kind of probabilistic evolutionary learning needed for
Novamente's AGI purposes - it hits some of the key points but leaves
some important things out too). A key point is that using probabilistic
methods effectively opens the door for deep integration of evolutionary
learning and probabilistic reasoning, which is not really possible with
standard evolutionary techniques...
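
To give a flavour of what I mean by probabilistic evolutionary learning in
the generic sense - this is just a textbook-style estimation-of-distribution
toy on a trivial problem, not the Novamente/MOSES machinery:

import random

# PBIL-style toy: instead of crossover and mutation, maintain an explicit
# probability model over solutions and resample from it.  Having an explicit
# model is what makes integration with probabilistic reasoning conceivable.
def onemax(bits):
    return sum(bits)

def pbil(n_bits=20, pop_size=50, generations=60, lr=0.1):
    probs = [0.5] * n_bits                       # probability of each bit being 1
    for _ in range(generations):
        population = [[int(random.random() < p) for p in probs]
                      for _ in range(pop_size)]
        best = max(population, key=onemax)       # select the fittest sample
        probs = [(1 - lr) * p + lr * b           # shift the model toward it
                 for p, b in zip(probs, best)]
    return probs

print([round(p, 2) for p in pbil()])             # probabilities drift toward 1.0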

-- Ben G



Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-26 Thread Matt Mahoney
I found more on Freitas' SQ
http://en.wikipedia.org/wiki/Sentience_Quotient

The ratio of the highest and lowest values, 10^120, depends only on Planck's 
constant h, the speed of light c, the gravitational constant G, and the age of 
the universe, T (which is related to the size and mass of the universe by c and 
G).  This number is also the quantum mechanical limit on the entropy of the 
universe, or the largest memory you could build, about 10^120 bits.  Let me 
call this number H.  A more precise calculation shows

h = 1.054e-34 kg m^2/s  (actually h-bar)
c = 3.00e8 m/s
G = 6.673e-11 m^3/(kg s^2)
T = 4.32e17 s (13.7 billion years)
H = c^5 T^2 / (h G) = 6.4e121 (unitless)

although I am probably neglecting some small but important constants due to my 
crude attempt at physics.  I derived H by nothing more than cancelling out 
units.
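
In code, for anyone who wants to redo the unit-cancelling (same inputs as 
above, and still no real physics):

hbar = 1.054e-34    # kg m^2 / s
c = 3.00e8          # m / s
G = 6.673e-11       # m^3 / (kg s^2)
T = 4.32e17         # s

H = c**5 * T**2 / (hbar * G)           # dimensionless after the units cancel
print(f"H        ~ {H:.1e}")           # ~6.4e121, i.e. roughly 10^122
print(f"H^(2/3)  ~ {H ** (2/3):.1e}")  # ~1.6e81, within an order of magnitude
                                       # of the ~10^80 baryon count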

If this memory filled the universe (and it would have to), then each bit would 
occupy about the space of a proton or neutron.  This is quite a coincidence, 
since h, G, c, and T do not depend on the physical properties of any 
particles.  The actual number of baryons (protons and neutrons and possibly 
their antiparticles) in the universe is about 10^80, roughly H^(2/3).  If the 
universe were mashed flat, it would form a sheet of neutrons one particle thick.

Another possible coincidence is that H could be related to the fine structure 
constant alpha = 1/137.0359997... by H ~ e^(2/alpha) ~ 10^119.  If this could be 
confirmed, it would be significant because alpha is known to about 9 
significant digits.  Alpha is unitless and depends on h, c, and the elementary 
electric charge.
http://en.wikipedia.org/wiki/Fine_structure_constant
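
The coincidence is easy to check numerically (this is numerology until someone 
derives it, but the magnitude does come out right):

import math

alpha = 1 / 137.0359997                     # fine structure constant
exponent = (2 / alpha) / math.log(10)       # convert e^(2/alpha) to a power of 10
print(f"e^(2/alpha) ~ 1e{exponent:.1f}")    # ~1e119.0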
 
-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: Kaj Sotala [EMAIL PROTECTED]
To: singularity@v2.listbox.com
Sent: Thursday, October 26, 2006 9:46:55 AM
Subject: Re: [singularity] Convincing non-techie skeptics that the Singularity 
isn't total bunk


Re: [singularity] Defining the Singularity

2006-10-26 Thread Richard Loosemore

Matt Mahoney wrote:

- Original Message 
From: Starglider [EMAIL PROTECTED]
To: singularity@v2.listbox.com
Sent: Thursday, October 26, 2006 4:21:45 AM
Subject: Re: [singularity] Defining the Singularity


What I'm not sure about is that you gain anything from 'neural' or
'brainlike' elements at all. The brain should not be put on a pedestal.


I think you're right.  A good example is natural language.  Neural networks are 
poor at symbolic processing.  Humans process about 10^9 bits of information 
from language during a lifetime, which means the language areas of the brain 
must use thousands of synapses per bit.


Neural networks are *not* poor at symbolic processing:  you just used 
the one inside your head to do some symbolic processing.


And perhaps brains are so incredibly well designed that they have 
enough synapses for thousands of times the number of bits that a 
language user typically sees in a lifetime, because they are using some 
of those other synapses to actually process the language, maybe?


Like, you know, rather than just using up all the available processing 
hardware to store language information and then realizing that there was 
nothing left over to actually use the stored information - which is 
presumably what a novice AI programmer would do.
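
For reference, the arithmetic behind the "thousands of synapses per bit" 
figure is just a ratio, and the synapse count for the language areas below is 
an assumed order of magnitude - which is exactly where the storage-versus-
processing interpretation gets contentious:

bits_of_language_per_lifetime = 1e9    # Matt's figure
synapses_in_language_areas = 1e13      # assumed order of magnitude, not a measurement
print(synapses_in_language_areas / bits_of_language_per_lifetime)  # ~1e4 per bit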



Richard Loosemore



Re: Re: [singularity] Defining the Singularity

2006-10-26 Thread Lúcio de Souza Coelho

On 10/26/06, deering [EMAIL PROTECTED] wrote:
(...)

The only rational thing to do is to build an SAI without any preconceived
ideas of right and wrong, and let it figure it out for itself.  What makes
you think that protecting humanity is the greatest good in the universe?

(...)

Hundreds of thousands of years of evolution selecting humans that like
humans (or at least part of them). And before that, billions of years
of similar selective pressures on various evolutionary ancestors.



Re: [singularity] Defining the Singularity

2006-10-26 Thread Matt Mahoney
I have raised the possibility that a SAI (including a provably friendly one, if 
that's possible) might destroy all life on earth.  By friendly, I mean doing 
what we tell it to do.  Let's assume a best case scenario where all humans 
cooperate, so we don't ask, for example, for the SAI to kill or harm others.  
So under this scenario the SAI figures out how to end disease and suffering, 
make us immortal, make us smarter, give us a richer environment with more 
senses and more control, and give us anything we ask for.  These are good 
things, right?  So we achieve this by uploading our minds into super-powerful 
computers, part of a vast network with millions of sensors and effectors 
around the world.  The SAI does pre- and postprocessing on this I/O, so it can 
effectively simulate any environment if we want it to.  If you don't like the 
world as it is, you can have it simulate a better one.

And by the way, there's no more need for living organisms to make all this 
run, is there?  Brain scanning is easier if you don't have to keep the patient 
alive.  Don't worry, no data is lost.  At least no important data.  You don't 
really need all those low level sensory processing and motor skills you 
learned over a lifetime.  That was only useful when you still had your body.  
And while we're at it, we can alter your memories if you like.  Had a troubled 
childhood?  How about a new one?

Of course there are the other scenarios, where the SAI is not proven friendly, 
or humans don't cooperate...

Vinge describes the singularity as the end of the human era.  I think your 
nervousness is justified.

-- Matt Mahoney, [EMAIL PROTECTED]

- Original Message 
From: deering [EMAIL PROTECTED]
To: singularity@v2.listbox.com
Sent: Thursday, October 26, 2006 7:56:06 PM
Subject: Re: [singularity] Defining the Singularity

All this talk about trying to make a SAI Friendly makes me very nervous.  
You're giving a superhumanly powerful being a set of motivations without an 
underlying rationale.  That's a religion.

The only rational thing to do is to build an SAI without any preconceived 
ideas of right and wrong, and let it figure it out for itself.  What makes 
you think that protecting humanity is the greatest good in the universe?




Re: [singularity] Defining the Singularity

2006-10-26 Thread Kaj Sotala

On 10/27/06, deering [EMAIL PROTECTED] wrote:

All this talk about trying to make a SAI Friendly makes me very nervous.
You're giving a superhumanly powerful being a set of motivations without an
underlying rationale.  That's a religion.

The only rational thing to do is to build an SAI without any preconceived
ideas of right and wrong, and let it figure it out for itself.  What makes
you think that protecting humanity is the greatest good in the universe?


The fact that we happen to be part of humanity, I'd presume.

As there's no such thing as an objectively greatest good in the
universe (Hume's Guillotine and all that), it's up to us to determine
some basic starting points. If we don't provide a mind *any*
preconceived ideas of right and wrong, then it can't develop any on
its own, either. All ethical systems need at least one axiom to build
upon, and responsible FAI developers will pick the axioms so that
we'll end up in a Nice Place To Live.

(Why? Because humanity ending up in a Nice Place To Live is a Nice
Thing To All The People Living In The Nice Place In Question, d'uh.
;))
