On 10/10/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
Hank,

On 10/10/06, Hank Conn <[EMAIL PROTECTED] > wrote:
> The all-encompassing definition of the Singularity is the point at which an
> intelligence gains the ability to recursively self-improve the underlying
> computational processes of its intelligence.

I already have that ability -- I'm just very slow at exercising it ;-)

Seriously: From a **marketing** perspective, I think it may be
sensible to boil the Singularity down to simplified definitions...

But from the perspective of deeper understanding, I don't see why it's
critical to agree on a single definition, or that there be a compact
and crisp definition.  It's a complex world and these are complex
phenomena we're talking about, as yet dimly understood.
 
This is all true. If we are talking about 'all-encompassing definitions', though, I would still have to stick with the one proposed above (actually, I'm going to tweak it a bit; see below). Like Michael said, it is kind of mundane in a way, if you aren't aware of developments in neuroscience, nanotechnology, and particularly AGI. The whole premise and essence of the Singularity is really this so-called 'recursive self-improvement', or 'intelligence explosion': the idea that you can improve and optimize the underlying computational processing of your intelligence, which thereby enables you to improve and optimize better and faster, and so on.
 
(2) The all-encompassing definition of the Singularity is the process by which an intelligence recursively self-improves the underlying computational processes of its intelligence.
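As a concrete (and deliberately simplistic) picture of that feedback loop, here is a toy sketch; the growth rule, step count, and 'efficiency' constant are invented purely for illustration, not taken from anything actually proposed in this thread:

# Toy illustration of recursive self-improvement: the agent's current
# "intelligence" determines how large an improvement it can make to the
# very process that produces that intelligence. All numbers are arbitrary.

def improvement_step(intelligence, efficiency=0.1):
    """One round of self-modification: the gain found is proportional
    to the intelligence doing the searching."""
    return intelligence + efficiency * intelligence

level = 1.0  # stipulated human baseline
for generation in range(1, 11):
    level = improvement_step(level)
    print(f"generation {generation}: intelligence = {level:.2f}")

The point is only the shape of the curve: each improvement makes the next improvement easier to find.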
 
My problem with Michael's original definition was the statement about producing a genetically engineered child that was smarter-than-human, and allowing that to be defined as the Singularity. I think that in order for a point in this recursively self-improving process to count as a Singularity, the process must involve a particular "self", so to speak.
 
In other words, if he had said that a researcher genetically modifies *himself* to be smarter-than-human, as a particular point in this researcher's continuing recursive self-improvement process, that would definitely qualify as a Singularity.
 
What Michael is saying is that we should pick out a specific point in this process, specifically the point at which the process produces an intelligence greater than human, and label that point the Singularity.
 
I think there is some inherent ambiguity in the concept of the Singularity as to whether it refers to a process or to a specific point within that process (i.e., the point of becoming smarter than human).
 
Far more problematic is a fundamental ambiguity about the definition of "intelligence" itself, which, after consideration, makes me want to scrap my entire definition and agree completely with Michael after all, under a specific understanding of what "intelligence" would mean (which I'm going to take here in terms of 'optimization power', where optimization power is defined relative to one or more 'optimization targets').
 
That is, the smartest human is the one with the most optimization power over this Universe, and, by that definition, the one who is most effectively improving their optimization power over the Universe. In that sense, the first entity to gain smarter-than-human intelligence would by definition have more optimization power than any human, be improving its optimization power more effectively than any human (and, of course, not BE human), AND not be acting in service of any human's optimization target (otherwise, if it were acting in service of some human's optimization target, its optimization would count as much for that human as for itself, and hence it would not be smarter than human).
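For what it's worth, one toy way to make 'optimization power relative to a target' concrete (this is my own illustrative formalization, not anything Michael proposed): ask how improbable it would be for a randomly chosen outcome to satisfy the target at least as well as the outcome the agent actually achieved. A rough sketch:

# Toy, illustrative measure of "optimization power" relative to a target:
# how unlikely is it that a random outcome would score at least as well
# as the outcome the agent actually achieved? (Assumed formalization.)

import math
import random

def optimization_power(achieved_score, sample_outcomes):
    """Return -log2 of the fraction of sampled outcomes scoring at least
    as well as the achieved outcome. Higher = more optimization."""
    at_least_as_good = sum(1 for s in sample_outcomes if s >= achieved_score)
    fraction = max(at_least_as_good, 1) / len(sample_outcomes)
    return -math.log2(fraction)

# Hypothetical example: the target is "make x close to 42"; outcomes are
# scored by closeness, and random outcomes are sampled uniformly.
def score(x):
    return -abs(x - 42)

random_outcomes = [score(random.uniform(0, 100)) for _ in range(100_000)]
print(optimization_power(score(41.9), random_outcomes))  # strong optimizer
print(optimization_power(score(70.0), random_outcomes))  # weak optimizer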
 
After effectively annihilating any sensibility in this conversation and confusing the heck out of myself and others, I'll let someone else have the opportunity to try to pick up the pieces ... while I ponder this over :)
 
 
-hank
 
 
-- Ben G


> On 10/10/06, Michael Anissimov <[EMAIL PROTECTED] > wrote:
> > The Singularity definitions being presented here are incredibly
> > confusing and contradictory.  If I were a newcomer to the community
> > and saw this thread, I'd say that this word "Singularity" is so poorly
> > defined, it's useless.  Everyone is talking past each other.  As Nick
> > Hay has pointed out, the Singularity was originally defined as
> > smarter-than-human intelligence, and I think that this definition
> > remains the most relevant, concise, and resistant to
> > misinterpretation.
> >
> > It's not about technological progress.  It's not about experiencing an
> > artificial universe by being plugged into a computer. It's not about
> > human intelligence merging with computing technology.  It's not about
> > things changing so fast that we can't keep up, or the accretion of
> > some threshold level of knowledge.  All of these things *might* indeed
> > follow from a Singularity, but might not, making it important to
> > distinguish between the likely *effects* of a Singularity and *what
> > the Singularity actually is*.  The Singularity *actually is* the
> > creation of smarter-than-human intelligence, but there are as many
> > speculative scenarios about what would happen thereafter as there are
> > people who have heard about the idea.
> >
> > The number of completely incompatible Singularity definitions being
> > tossed around on this list underscores the need for a return to the
> > original, simple, and concise definition, which, in that it doesn't
> > make a million and one side claims, is also the easiest to explain to
> > those being exposed to the idea for the first time.  We have to define
> > our terms to have a productive discussion, and the easiest way to
> > define a contentious term is to make the definition as simple as
> > possible.  The reason that so many in the intellectual community see
> > Singularity discussion as garbage is because there is so little
> > definitional consensus that it's close to impossible to determine
> > what's actually being discussed.
> >
> > Smarter-than-human intelligence.  That's all.  Whether it's created
> > through Artificial Intelligence, Brain-Computer Interfacing,
> > neurosurgery, genetic engineering, or the fundamental particles making
> > up my neurons quantum-tunneling into a smarter-than-human
> > configuration - the Singularity is the point at which our ability to
> > predict the future breaks down because a new character is introduced
> > that is different from all prior characters in the human story.
> >
> > The creation of smarter-than-human intelligence is called "the
> > Singularity" by analogy to a gravitational singularity, not a
> > mathematical singularity.  Nothing actually goes to infinity.  In
> > physics, our models of black hole spacetimes spit out infinities
> > because they're fundamentally flawed, not because nature itself is
> > actually producing infinities.  Any relationship between the term
> > Singularity and the definition of singularity that means "the quality
> > of being one of a kind" is coincidental.
> >
> > The analogy of our inability to predict the physics past the event
> > horizon of a black hole with the creation of superintelligence is apt,
> > because we know for a fact that our minds are conditioned, both
> > genetically and experientially, to predict the actions of other human
> > minds, not smarter-than-human minds.  We can't predict what a
> > smarter-than-human mind would think or do, specifically.  But we can
> > predict it in broad outlines - we can confidently say that a
> > smarter-than-human intelligence will 1) be smarter-than-human (by
> > definition), 2) have all the essential properties of an intelligence,
> > including the ability to model the world, make predictions, synthesize
> > data, formulate beliefs, etc., 3) have starting characteristics
> > dictated by the method of its creation, 4) have initial motivations
> > dictated by its prior, pre-superintelligent form, 5) not necessarily
> > display characteristics similar to its human predecessors, and so on.
> > We can predict that a superintelligence would likely be capable of
> > putting a lot of optimization pressure behind its goals.
> >
> > The basic Singularity concept is incredibly mundane.  In the midst of
> > all this futuristic excitement, we sometimes forget this.  A single
> > genetically engineered child born with a substantially
> > smarter-than-human IQ would constitute a Singularity, because we would
> > have no ability to predict the specifics of what it would do, whereas
> > we have a much greater ability to predict the actions of typical
> > humans.  It's also worth pointing out that the Singularity is an
> > event, like the first nuclear test, not a thing, like the first nuke
> > itself.  It heralds an irreversible transition to a new era, but our
> > guesses at the specifics of that era are inextricably tied to the real
> > future conditions under which we make that transition.
> >
> > The fact that it is sometimes difficult to predict the actions of
> > everyday humans does not doom this definition of the Singularity.  The
> > fact that "smarter-than-human" is a greyscale rather than
> > black-and-white does not condemn it either.  The Singularity is one of
> > those things that we'd probably recognize if we saw it, but because it
> > hasn't happened yet it's very difficult to talk about coherently.
> >
> > The Singularity is frequently associated with technology simply
> > because technology is the means by which agents that can't mold their
> > environments directly are able to get things done in a limited time.
> > So by default, we assume that a superintelligence would use technology
> > to get things done, and use a lot of it.  But there are possible
> > beings that need no technology to accomplish significant goals.  For
> > example, in the future there might be a being that can build a nuclear
> > reactor simply by swallowing uranium and internally processing it into
> > the right configuration. No "technology" required.
> >
> > The Singularity would still be possible if technological progress were
> > slowed down or halted.  It would still be possible (albeit difficult)
> > if every computer on the planet were smashed to pieces.  It would be
> > possible even if it turned out that intelligence can't exist inside a
> > computer.
> >
> > A Singularity this century could easily be stopped, for example if a
> > disease wiped out half of humanity, or a global authoritarian regime
> > forbade research in that direction, or if a nuclear war ejected
> > sufficient dust into the air to shut down photosynthesis.  The
> > Singularity is far from inevitable.
> >
> > The Singularity can be a bad thing, resulting in the death of all
> > human beings, or a good thing, such that every single human being on
> > earth can explicitly say that they are glad that it happened.  There
> > are also different shades of good: for example, a Singularity that
> > results in the universal availability of "genie machines" could
> > eliminate all journeys of value, by taking us right to the destination
> > whether we want it or not.
> >
> > As we can see, this definition of the Singularity I'm presenting
> > encompasses a lot of possibilities.  That's part of the elegance of
> > it.  By making a minimal number of assumptions, it requires the least
> > amount of evidence to back it up.  All it requires is that humans
> > aren't the smartest physically possible beings in the universe, and
> > that we will some day have the ability to either upgrade our brains,
> > or create new brains that are smarter than us by design.
> >
> > --
> > Michael Anissimov
> > Lifeboat Foundation      http://lifeboat.com
> > http://acceleratingfuture.com/michael/blog
> >