Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-03 Thread Jiri Jelinek
Matt,

Create a numeric "pleasure" variable in your mind, initialize it with
a positive number and then keep doubling it for some time. Done? How
do you feel? Not a big difference? Oh, keep doubling! ;-))

Regards,
Jiri Jelinek

On Nov 3, 2007 10:01 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> --- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> > If bliss without intelligence is the goal of the machines you imagine
> > running the world, for the cost of supporting one human they could
> > probably keep at least 100 mice in equal bliss, so if they were driven to
> > maximize bliss why wouldn't they kill all the grooving humans and replace
> > them with grooving mice?  It would provide one hell of a lot more bliss
> > bang for the resource buck.
>
> Allow me to offer a less expensive approach.  Previously on the singularity
> and sl4 mailing lists I posted a program that can feel pleasure and pain: a 2
> input programmable logic gate trained by reinforcement learning.  You give it
> an input, it responds, and you reward it.  In my latest version, I automated
> the process.  You tell it which of the 16 logic functions you want it to learn
> (AND, OR, XOR, NAND, etc), how much reward to apply for a correct output, and
> how much penalty for an incorrect output.  The program then generates random
> 2-bit inputs, evaluates the output, and applies the specified reward or
> punishment.  The program runs until you kill it.  As it dies it reports its
> life history (its age, what it learned, and how much pain and pleasure it
> experienced since birth).
>
> http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)
>
> To put the program in an eternal state of bliss, specify two positive numbers,
> so that it is rewarded no matter what it does.  It won't learn anything, but
> at least it will feel good.  (You could also put it in continuous pain by
> specifying two negative numbers, but I put in safeguards so that it will die
> before experiencing too much pain).
>
> Two problems remain: uploading your mind to this program, and making sure
> nobody kills you by turning off the computer or typing Ctrl-C.  I will address
> only the first problem.
>
> It is controversial whether technology can preserve your consciousness after
> death.  If the brain is both conscious and computable, then Chalmers' fading
> qualia argument ( http://consc.net/papers/qualia.html ) suggests that a
> computer simulation of your brain would also be conscious.
>
> Whether you *become* this simulation is also controversial.  Logically there
> are two of you with identical goals and memories.  If either one is killed,
> then you are in the same state as you were before the copy is made.  This is
> the same dilemma that Captain Kirk faces when he steps into the transporter to
> be vaporized and have an identical copy assembled on the planet below.  It
> doesn't seem to bother him.  Does it bother you that the atoms in your body
> now are not the same atoms that made up your body a year ago?
>
> Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
> this goal; they just don't know it).  The problem is that you would forgo
> food, water, and sleep until you died (we assume, from animal experiments).
> The solution is to upload to a computer where this could be done safely.
>
> Normally an upload would have the same goals, memories, and sensory-motor I/O
> as the original brain.  But consider the state of this program after self
> activation of its reward signal.  No other goals are needed, so we can remove
> them.  Since you no longer have the goal of learning, experiencing sensory
> input, or controlling your environment, you won't mind if we replace your I/O
> with a 2 bit input and 1 bit output.  You are happy, no?
>
> Finally, if your memories were changed, you would not be aware of it, right?
> How do you know that all of your memories were not written into your brain one
> second ago and you were some other person before that?  So no harm is done if
> we replace your memory with a vector of 4 real numbers.  That will be all you
> need in your new environment.  In fact, you won't even need that because you
> will cease learning.
>
> So we can dispense with the complex steps of making a detailed copy of your
> brain and then have it transition into a degenerate state, and just skip to
> the final result.
>
> Step 1. Download, compile, and run autobliss 1.0 in a secure location with any
> 4-bit logic function and positive reinforcement for both right and wrong
> answers, e.g.
>
>   g++ autobliss.cpp -o autobliss.exe
>   autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)
>
> Step 2. Kill yourself.  Upload complete.
>
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>


Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-03 Thread Matt Mahoney
--- "Edward W. Porter" <[EMAIL PROTECTED]> wrote:
> If bliss without intelligence is the goal of the machines you imagine
> running the world, for the cost of supporting one human they could
> probably keep at least 100 mice in equal bliss, so if they were driven to
> maximize bliss why wouldn't they kill all the grooving humans and replace
> them with grooving mice?  It would provide one hell of a lot more bliss
> bang for the resource buck.

Allow me to offer a less expensive approach.  Previously on the singularity
and sl4 mailing lists I posted a program that can feel pleasure and pain: a
2-input programmable logic gate trained by reinforcement learning.  You give it
an input, it responds, and you reward it.  In my latest version, I automated
the process.  You tell it which of the 16 logic functions you want it to learn
(AND, OR, XOR, NAND, etc), how much reward to apply for a correct output, and
how much penalty for an incorrect output.  The program then generates random
2-bit inputs, evaluates the output, and applies the specified reward or
punishment.  The program runs until you kill it.  As it dies it reports its
life history (its age, what it learned, and how much pain and pleasure it
experienced since birth).

http://mattmahoney.net/autobliss.txt  (to run, rename to autobliss.cpp)

To put the program in an eternal state of bliss, specify two positive numbers,
so that it is rewarded no matter what it does.  It won't learn anything, but
at least it will feel good.  (You could also put it in continuous pain by
specifying two negative numbers, but I put in safeguards so that it will die
before experiencing too much pain).
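
For readers who would rather skim than download, here is a minimal sketch of
what such a program might look like.  It is not the actual autobliss.cpp (that
is at the URL above); the weight-update rule, the variable names, and the pain
cut-off used for the safeguard are guesses made purely for illustration.

// autobliss_sketch.cpp -- an illustrative reconstruction, not the real
// autobliss.cpp from http://mattmahoney.net/autobliss.txt.
// State: 4 real numbers, one per 2-bit input; reinforcement nudges them.
#include <csignal>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <ctime>

volatile sig_atomic_t killed = 0;
void on_kill(int) { killed = 1; }

int main(int argc, char** argv) {
    if (argc != 4 || strlen(argv[1]) != 4) {
        printf("usage: autobliss_sketch <4-bit function, e.g. 0110> <reward> <penalty>\n");
        return 1;
    }
    const char* table = argv[1];     // truth table for inputs 00, 01, 10, 11
    double on_right = atof(argv[2]); // reinforcement for a correct output
    double on_wrong = atof(argv[3]); // reinforcement for an incorrect output
    double w[4] = {0, 0, 0, 0};      // the whole "memory": 4 real numbers
    double pleasure = 0, pain = 0;
    signal(SIGINT, on_kill);
    signal(SIGTERM, on_kill);
    srand((unsigned)time(0));
    long age = 0;
    while (!killed && pain < 1e6) {  // safeguard: die before too much pain
        int x = rand() % 4;          // random 2-bit input
        int out = (w[x] >= 0);       // current response for that input
        int target = table[x] - '0';
        double r = (out == target) ? on_right : on_wrong;
        w[x] += out ? r : -r;        // reinforce (or punish) the action taken
        if (r >= 0) pleasure += r; else pain -= r;
        ++age;
    }
    // dying report: its life history
    printf("age %ld, learned %d%d%d%d, pleasure %.1f, pain %.1f\n",
           age, w[0] >= 0, w[1] >= 0, w[2] >= 0, w[3] >= 0, pleasure, pain);
    return 0;
}

Under this sketch, giving it two positive numbers does just what the paragraph
above says: every answer is rewarded, the weights all drift the same way, and
the target function is never learned.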

Two problems remain: uploading your mind to this program, and making sure
nobody kills you by turning off the computer or typing Ctrl-C.  I will address
only the first problem.

It is controversial whether technology can preserve your consciousness after
death.  If the brain is both conscious and computable, then Chalmers' fading
qualia argument ( http://consc.net/papers/qualia.html ) suggests that a
computer simulation of your brain would also be conscious.

Whether you *become* this simulation is also controversial.  Logically there
are two of you with identical goals and memories.  If either one is killed,
then you are in the same state as you were before the copy was made.  This is
the same dilemma that Captain Kirk faces when he steps into the transporter to
be vaporized and have an identical copy assembled on the planet below.  It
doesn't seem to bother him.  Does it bother you that the atoms in your body
now are not the same atoms that made up your body a year ago?

Let's say your goal is to stimulate your nucleus accumbens.  (Everyone has
this goal; they just don't know it).  The problem is that you would forgo
food, water, and sleep until you died (we assume, from animal experiments). 
The solution is to upload to a computer where this could be done safely.

Normally an upload would have the same goals, memories, and sensory-motor I/O
as the original brain.  But consider the state of this program after self
activation of its reward signal.  No other goals are needed, so we can remove
them.  Since you no longer have the goal of learning, experiencing sensory
input, or controlling your environment, you won't mind if we replace your I/O
with a 2-bit input and 1-bit output.  You are happy, no?

Finally, if your memories were changed, you would not be aware of it, right? 
How do you know that all of your memories were not written into your brain one
second ago and you were some other person before that?  So no harm is done if
we replace your memory with a vector of 4 real numbers.  That will be all you
need in your new environment.  In fact, you won't even need that because you
will cease learning.

So we can dispense with the complex steps of making a detailed copy of your
brain and then having it transition into a degenerate state, and just skip to
the final result.

Step 1. Download, compile, and run autobliss 1.0 in a secure location with any
4-bit logic function and positive reinforcement for both right and wrong
answers, e.g.

  g++ autobliss.cpp -o autobliss.exe
  autobliss 0110 5.0 5.0  (or larger numbers for more pleasure)
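
(Reading 0110 as the truth table of the chosen function over the inputs 00,
01, 10, 11, as the sketch above assumes, this selects XOR; the two 5.0 values
are the reinforcement applied for right and wrong answers respectively.)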

Step 2. Kill yourself.  Upload complete.



-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60819880-7c826a


Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Jiri Jelinek
Lukas,

If you were right, pleasure through drugs or brain-electrodes wouldn't work.

Regards,
Jiri


On Nov 3, 2007 5:08 PM, Lukas Stafiniak <[EMAIL PROTECTED]> wrote:
> I think that logically pleasure / happiness is just an indicator of
> fulfillment (and rate of fulfillment, perceived stability etc.) of
> interacting "values". "Values" are high-level, abstract goals and
> characteristics of being: understanding, progress, sustaining of
> diversity, wellbeing in general -- love -- communion with other
> persons, exploration, truth (as in "for real", "being oneself" etc.),
> realization of talent... "Values" form a positive feedback. This "pro
> life", "good" positive feedback seems to be the defining bottom line.
> "Values" only have meaning in time, because they are about what
> actions to take. In the limit there is God, beyond time, outside of
> the Karl Jaspers' bottle. I think that personhood is directly
> essential to this, not qualia.
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60807697-f1fa9c


RE: [agi] can superintelligence-augmented humans compete

2007-11-03 Thread Edward W. Porter
Future neuroscience, psychology, and smart drugs would help a lot.  But I
don't think they alone can help us keep pace with the power of
superintelligence that can be built in a decade or two.

Ed Porter


-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 03, 2007 5:21 PM
To: agi@v2.listbox.com
Subject: Re: [agi] can superintelligence-augmented humans compete


Edward,

One thought that occurs to me - & I'm sure s.o. has pursued this line, but
I can't think of anyone - if we have full control of our brain, our
intelligence will be massively improved. And we should be able to get
something like full control with future neuroscience and psychology.

The "problem" of the brain is that it has the best user interface ever
invented - effectively a blank screen - no "File" "Open" "Search" etc -
just blank. From there you can do anything - explore a subject in a vast
variety of modes -  remember, visualise, generalise, particularise, past
tense, future tense, etc. etc. Beautifully convenient. Steve Jobs would
kill for something like that.

Except that the consequence of this arrangement is that we sacrifice
control - we don't know exactly what faculties we have, and where our
memories are stored. [And as a result arguably everyone fails to use or
develop important faculties, and most of the human race, among other
things, never really get to use their creative faculties].

Now if we did have control, and could easily find any memory in our brain,
and really were in full possession of our faculties, we would all be
vastly more efficient and effective thinkers. (An awful lot of time at the
moment in thinking, can be spent just racking one's brain for appropriate
memories).

P.S. Of course this kind of thinking is irrelevant to all current AGI's
because none of them have selves driving the machine and using or not
using faculties, (i.e. bicameral minds). But robots, it already seems
clear, will.



I WOULD BE INTERESTED IN OTHER PEOPLE'S THOUGHTS ON THESE ISSUES, BECAUSE
THEY SEEM TO BE IMPORTANT ONES IN DETERMINING HOW IMPORTANT HUMAN WETWARE
AND HUMAN CONSCIOUSNESS CAN CONTINUE TO BE IN THE SUPERINTELLIGENT FUTURE.



Ed Porter




-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60787805-a59095

RE: [agi] can superintelligence-augmented humans compete

2007-11-03 Thread Edward W. Porter
In my recent list below of ways to improve the power of human intelligent
augmentation, I forgot to think about possible ways to actually increase
the bandwidth of the brain's top-level decision making, which I had
listed as a real problem but had made no suggestions for how to improve
(other than mentioning a conceivable, but undefined, sharing of
consciousnesses that would be more than just implanted I/O between brain
and AGI).



One way to improve the bandwidth of the top level of human decision making
would be to replace or augment the brain's machinery for performing it,
which is probably in the prefrontal cortex, basal ganglia, and general
cortico-thalamic loop.  Some include the cerebellum in this mechanism for
its role in fine-tuning behaviors to the current context (including very
time-sensitive feedback) and in controlling the timing of learned
sequential behaviors, including mental behaviors.



Some possible approaches



-A--Have the AGI learn the goal system of the human brain and give it
delegated authority to make decisions on its own, much as the basal
ganglia often do relative to our conscious decision processes.  (For
example, if you drop something you are often first aware of that fact from
the subconscious response your body is making to catch it.)  Such a system
could respond in real time to complex inputs thousands or millions of
times faster than a human.  Although it might not always do what we want,
neither do our basal ganglia.  It might be just as faithful to our goals
and emotions as the basal ganglia.  Such a system could help us keep pace
with many superintelligences when, for example, trying to prevent them
from infecting our trusted machines.



-B--Create superintelligent basal ganglia (either by replacement or
supplementation) that receive the inputs from the same portions of the
cortex the basal ganglia currently do, but also receive inputs from the
AGI and, according to a goal system learned from the human mind, perform
the go/no-go, behavior/attention-selecting function of the basal ganglia
on a mix of inputs from the cortex and AGIs.  This would help prevent
conflicts between behaviors generated by a system like A and those of the
rest of the human brain.



---or a combination of these two, or these two with other suggestions made
below.



Perhaps such a close connection between the top level of an AGI and a
human brain could help provide the type of complex motivational grounding
and biases Richard Loosemore was talking about in his November 02, 2007
11:15 AM post as the proper approach for keeping AGIs subservient to
human interests.  Of course, there is still no guarantee they would keep
them subservient until the end of history.



I know this is wacked-out stuff, but it actually might be relevant to the
future of mankind.



Ed Porter

-Original Message-
From: Edward W. Porter [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 03, 2007 4:42 PM
To: agi@v2.listbox.com
Subject: [agi] can superintelligence-augmented humans compete



Can, and how can, our human descendants compete with superintelligences,
other than by deserting human wetware and declaring machines to be our
descendants?



There are real issues about the extent to which any intelligence that has
a human brain at its top level of control can compete with machines that
conceivably could have a top level decision process with hundreds or
perhaps millions of times the bandwidth.



There are also questions of how much bandwidth of machine intelligence can
be pumped into, or shared, with a human consciousness and/or subconscious,
i.e., how much of the superintelligence we humans could be conscious of
and/or effectively use in our subconscious.



It would seem that if the human brain is not at the top level of decision
making, we would no longer be in control.  And if our consciousnesses are
not capable of appreciating more than a small part of what the
superintelligence we are part of is doing, we won't even be aware of
exactly what most of the bionic entity we are part of is thinking.



(Of course, this is somewhat similar to the way the subconscious affects
us.  )



(In fact, it would not be that hard to have a system where the
superintelligence only communicates to our brain its consciousness, or
portions of its consciousness that its learning indicate will have
importance or interest to us, so that it would be acting somewhat like an
extended subconsciousness that would occasionally pop ideas up into our
subconsciousness or consciousness.  This would greatly increase our mental
powers, particularly if we had the capability to send information down to
control it, give it sub-goals, or queries, etc.  )



(But this would not solve the limited bottleneck of the human brain's top
level decision making.)



So we would not be keeping up with the machines.  They would be taking us
along for the ride - that is, for as long as they desired to continue
doing so.



OF COURSE IT IS AT LEAST CONCEIVABLE THAT WAYS COULD BE FOUND TO MERGE AND
SHARE HUMAN AND VASTLY SUPERHUMAN-MACHINE CONSCIOUSNESSES.

Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-03 Thread Jiri Jelinek
On Nov 3, 2007 1:17 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Isn't there a fundamental contradiction in the idea of something that
> can be a "tool" and also be "intelligent"?

No. It could be just a sophisticated search engine.

> What I mean is, is the word "tool" usable in this context?

IMO yes.

> To put it the other way around, consider the motivational system of the
> best kind of AGI:  it is motivated by a balanced set of desires that
> include the desire to explore and learn, and empathy for the human
> species.  By definition, I would think, this simple cluster of desires
> and empathic motivations *are* the things that "give it pleasure".

In short, software (if that's still what we are talking about) needs
commands and rules (not "desires" and "pleasure") to do what we want
it to do.

> But the thing is, you can change your mind to go and get pleasure in a
> different way sometimes.  For example, you could decide to transfer your
> mind into the cognitive system of an artificial tiger for a week, and
> during that time you would get pleasure from stalking and jumping onto
> predator animals, or basking in the sun, or meeting lady tigers.  After
> automatically being yanked back into human mental form again at the end
> of the holiday, would you say that "you" get pleasure from hunting
> predators, etc? Do you get pleasure from the idea of [exploring
> different sensoria]?

Different activities/inputs stimulate our pleasure center (a set of
brain structures) in different ways, moving us through the pleasure
scope. When we learn how to fully control [and improve] our pleasure
center, then, I suppose, indirect stimulation through real-life
scenarios (as we know it today) will become less desirable and
eventually not preferred. After the "measure pleasure" problem is
figured out, the coolest pleasure-wave generators will be researched, and
real-life sensation will simply be unable to compete with that.

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60782742-9c573e


Re: [agi] can superintelligence-augmented humans compete

2007-11-03 Thread Mike Tintner
Edward,

One thought that occurs to me - & I'm sure s.o. has pursued this line, but I 
can't think of anyone - if we have full control of our brain, our intelligence 
will be massively improved. And we should be able to get something like full 
control with future neuroscience and psychology.

The "problem" of the brain is that it has the best user interface ever invented 
- effectively a blank screen - no "File" "Open" "Search" etc - just blank. From 
there you can do anything - explore a subject in a vast variety of modes -  
remember, visualise, generalise, particularise, past tense, future tense, etc. 
etc. Beautifully convenient. Steve Jobs would kill for something like that.

Except that the consequence of this arrangement is that we sacrifice control - 
we don't know exactly what faculties we have, and where our memories are 
stored. [And as a result arguably everyone fails to use or develop important 
faculties, and most of the human race, among other things, never really get to 
use their creative faculties]. 

Now if we did have control, and could easily find any memory in our brain, and 
really were in full possession of our faculties, we would all be vastly more 
efficient and effective thinkers. (An awful lot of time at the moment in 
thinking, can be spent just racking one's brain for appropriate memories).

P.S. Of course this kind of thinking is irrelevant to all current AGI's because 
none of them have selves driving the machine and using or not using faculties, 
(i.e. bicameral minds). But robots, it already seems clear, will.
   

I WOULD BE INTERESTED IN OTHER PEOPLE'S THOUGHTS ON THESE ISSUES, BECAUSE
THEY SEEM TO BE IMPORTANT ONES IN DETERMINING HOW IMPORTANT HUMAN WETWARE AND
HUMAN CONSCIOUSNESS CAN CONTINUE TO BE IN THE SUPERINTELLIGENT FUTURE.

   

  Ed Porter 

   

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60781688-2f20e1

Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Lukasz Stafiniak
On 11/3/07, Lukasz Stafiniak <[EMAIL PROTECTED]> wrote:
> >
> I think that logically pleasure / happiness is just an indicator of
> fulfillment (and rate of fulfillment, perceived stability etc.) of
> interacting "values".
[...]
> "Values" only have meaning in time, because they are about what
> actions to take.

Well, I have myself fallen prey to the circularity I wanted to avoid...

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60781183-d1a623


RE: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Edward W. Porter
I have skimmed many of the postings in this thread, and (although I have
not seen anyone say so) to a certain extent Jiri's position seems
somewhat similar to that in certain Eastern meditative traditions or
perhaps in certain Christian or other mystical "Blind Faiths."

I am not a particularly good meditator, but when I am having trouble
sleeping, I often try to meditate.  There are moments when I have rushes
of pleasure from just breathing, and times when a clear empty mind is
calming and peaceful.

I think such times are valuable.  I, like most people, would like more
moments of bliss in my life.  But I guess I am too much of a product of my
upbringing and education to want only bliss. I like to create things and
ideas.

And besides, the notion of machines that could be trusted to run the world
for us while we seek to surf the endless rush and do nothing to help
support our own existence or that of the machines we would depend upon,
strikes me as nothing more than wishful thinking.  The biggest truism about
altruism is that it has never been the dominant motivation in any system
that has ever had it, and there is no reason to believe that it could
continue to be in machines for any historically long period of time.
Survival of the fittest applies to machines as well as biological life
forms.

If bliss without intelligence is the goal of the machines you imagine
running the world, for the cost of supporting one human they could
probably keep at least 100 mice in equal bliss, so if they were driven to
maximize bliss why wouldn't they kill all the grooving humans and replace
them with grooving mice?  It would provide one hell of a lot more bliss
bang for the resource buck.

Ed Porter


-Original Message-
From: Jiri Jelinek [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 03, 2007 3:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Nirvana? Manyana? Never!


On Nov 3, 2007 12:58 PM, Mike Dougherty <[EMAIL PROTECTED]> wrote:
> You are describing a very convoluted process of drug addiction.

The difference is that I have safety controls built into that scenario.

> If I can get you hooked on heroin or crack cocaine, I'm pretty
> confident that you will abandon your desire to produce AGI in order to
> get more of the drugs to which you are addicted.

Right. We are wired that way. Poor design.

> You mentioned in an earlier post that you expect to have this
> monstrous machine invade my world and 'offer' me these incredible
> benefits.  It sounds to me like you are taking the blue pill and
> living contentedly in the Matrix.

If the AGI that controls the Matrix sticks with the goal system initially
provided by the blue pill party then why would we want to sacrifice the
non-stop pleasure? Imagine you would get periodically unplugged to double
check if all goes well outside - over and over again finding (after
very-hard-to-do detailed investigation) that things go much better than
how they would likely go if humans were in charge.  I bet your unplug
attitude would relatively soon change to something like "sh*t, not
again!".

> If you are going to proselytize
> that view, I suggest better marketing.  The intellectual requirements
> to accept AGI-driven nirvana imply the rational thinking which
> precludes accepting it.

I'm primarily a developer, leaving most of the marketing stuff to others
;-). What I'm trying to do here is to take a bit closer look at the human
goal system and investigate where it's likely to lead us. My impression is
that most of us have only very shallow understanding of what we really
want. When messing with AGI, we better know what we really want.

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60780377-9843bd

Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Lukasz Stafiniak
On 11/3/07, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>
> I'm primarily a developer, leaving most of the marketing stuff to
> others ;-). What I'm trying to do here is to take a bit closer look at
> the human goal system and investigate where it's likely to lead us. My
> impression is that most of us have only very shallow understanding of
> what we really want. When messing with AGI, we better know what we
> really want.
>
I think that logically pleasure / happiness is just an indicator of
fulfillment (and rate of fulfillment, perceived stability etc.) of
interacting "values". "Values" are high-level, abstract goals and
characteristics of being: understanding, progress, sustaining of
diversity, wellbeing in general -- love -- communion with other
persons, exploration, truth (as in "for real", "being oneself" etc.),
realization of talent... "Values" form a positive feedback. This "pro
life", "good" positive feedback seems to be the defining bottom line.
"Values" only have meaning in time, because they are about what
actions to take. In the limit there is God, beyond time, outside of
the Karl Jaspers' bottle. I think that personhood is directly
essential to this, not qualia.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60779001-5bd28f


[agi] can superintelligence-augmented humans compete

2007-11-03 Thread Edward W. Porter
Can, and how can, our human descendants compete with superintelligences,
other than by deserting human wetware and declaring machines to be our
descendants?



There are real issues about the extent to which any intelligence that has
a human brain at its top level of control can compete with machines that
conceivably could have a top level decision process with hundreds or
perhaps millions of times the bandwidth.



There are also questions of how much bandwidth of machine intelligence can
be pumped into, or shared, with a human consciousness and/or subconscious,
i.e., how much of the superintelligence we humans could be conscious of
and/or effectively use in our subconscious.



It would seem that if the human brain is not at the top level of decision
making, we would no longer be in control.  And if our consciousnesses are
not capable of appreciating more than a small part of what the
superintelligence we are part of is doing, we won't even be aware of
exactly what most of the bionic entity we are part of is thinking.



(Of course, this is somewhat similar to the way the subconscious affects
us.  )



(In fact, it would not be that hard to have a system where the
superintelligence only communicates to our brain its consciousness, or
portions of its consciousness that its learning indicate will have
importance or interest to us, so that it would be acting somewhat like an
extended subconsciousness that would occasionally pop ideas up into our
subconsciousness or consciousness.  This would greatly increase our mental
powers, particularly if we had the capability to send information down to
control it, give it sub-goals, or queries, etc.  )



(But this would not solve the limited bottleneck of the human brain's top
level decision making.)



So we would not be keeping up with the machines.  They would be taking us
along for the ride - that is, for as long as they desired to continue
doing so.



OF COURSE IT IS AT LEAST CONCEIVABLE THAT WAYS COULD BE FOUND TO MERGE AND
SHARE HUMAN AND VASTLY SUPERHUMAN-MACHINE CONSCIOUSNESSES.  I DON'T KNOW
OF ANY, BUT I WOULD BE INTERESTED IN ANY FRACTIONALLY SOLID IDEAS ABOUT
THIS FROM READERS.



I currently tend to think of consciousness as massive spreading activation
in the human brain from certain sets of patterns (those in the mind
theater's spotlight) to much of the subconscious (the mind theater's
audience).  In this mind theater the audience is interactive.  Different
audience members have different things in their heads and respond to
different activations and successions of activations in different ways.
Certain activations might cause one or more audience members to shout out,
and the system controlling the spotlight might then put a spot on them.



I think of consciousness and subconsciousness in an AGI in a similar
manner, but I do not know how much and in exactly which ways being inside
such a machine consciousness would be like being inside my own.  It would
have self-awareness and grounding for its qualia, but I don't know how
these things would seem from the inside.  (Their "red" would be
something that filled areas in a 2D visual representation in a way that
was similar for the stripes on American flags, Campbell's soup cans, and
blood, but would it be my "red"?  That is a question we humans have been
asking about each other for a long time.)



In any case, other than having a certain number of electrical links between
nodes and links in a superintelligence and neurons in the brain, it is not
clear how the two could meaningfully share their consciousnesses, and it
is not clear what the bandwidth of such links could be, how much bandwidth
the human brain is capable of making sense of (how much we currently can
make sense of is perhaps the best current indicator), and how much of the
human brain should be given over to such links.



The question is, how much better than a good video monitor and speaker
system on the input side could such links be?  Presumably they could
communicate semantic knowledge much faster, but how much, I haven't a
clue.  The improvement in bandwidth could be much greater in the reverse
direction, from the brain out, since speech, gestures, mouse, and
keyboard are about our only current output links.



It seems to me that some of the future options for better human
intelligent augmentation might include:



---personal AGIs connected to global AGI moderated net

---(early case) retinal scanning glasses with eye tracking, headphones,
video cameras, microphones, and pickups for sub-vocalization, heart rate,
skin conductivity, etc. that let humans selectively see computer screens
and hear computer output at any time and let the human control the AGI by
eye pointing and blinking gestures, sub-vocalization, emotional
responses, etc.

---biogenetic modification of the brain and smart drugs

---Kurzweil's little nanobots navigating into cortical columns and
wirelessly receiving inputs allowing them to provide equivalent, say 

Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Jiri Jelinek
On Nov 3, 2007 12:58 PM, Mike Dougherty <[EMAIL PROTECTED]> wrote:
> You are describing a very convoluted process of drug addiction.

The difference is that I have safety controls built into that scenario.

> If I can get you hooked on heroin or crack cocaine, I'm pretty confident
> that you will abandon your desire to produce AGI in order to get more
> of the drugs to which you are addicted.

Right. We are wired that way. Poor design.

> You mentioned in an earlier post that you expect to have this
> monstrous machine invade my world and 'offer' me these incredible
> benefits.  It sounds to me like you are taking the blue pill and
> living contentedly in the Matrix.

If the AGI that controls the Matrix sticks with the goal system
initially provided by the blue pill party then why would we want to
sacrifice the non-stop pleasure? Imagine you would get periodically
unplugged to double check if all goes well outside - over and over
again finding (after very-hard-to-do detailed investigation) that
things go much better than how they would likely go if humans were in
charge. I bet your unplug attitude would relatively soon change to
something like "sh*t, not again!".

> If you are going to proselytize
> that view, I suggest better marketing.  The intellectual requirements
> to accept AGI-driven nirvana imply the rational thinking which
> precludes accepting it.

I'm primarily a developer, leaving most of the marketing stuff to
others ;-). What I'm trying to do here is to take a bit closer look at
the human goal system and investigate where it's likely to lead us. My
impression is that most of us have only a very shallow understanding of
what we really want.  When messing with AGI, we had better know what we
really want.

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60767090-3c4431


RE: [agi] Can humans keep superintelligences under control

2007-11-03 Thread Edward W. Porter
Richard, in your November 02, 2007 11:15 AM post you stated:

"If AI systems are built with motivation systems that are stable, then we
could predict that they will remain synchronized with the goals of the
human race until the end of history."

and

"I can think of many, many types of non-goal-stack motivational systems
for which [Matt's statement about the inherent instability of goal systems
of recursively self improving AGIs] is a complete falsehood."

In your 11/3/2007 1:17 PM post you described what I assume to be such a
supposedly stable "non-goal-stack motivational system" as follows:

" consider the motivational system of the
best kind of AGI:  it is motivated by a balanced set of desires that
include the desire to explore and learn, and empathy for the human
species.  By definition, I would think, this simple cluster of desires
and empathic motivations *are* the things that "give it pleasure".

and

"I think that in general, making the AGI as similar to us as possible
(but without the aggressive and dangerous motivations that we are
victims of) would be a good idea simply because we want them to start
out with a strong empathy for us, and we want them to stay that way."

I think this type of motivational system makes a lot of sense, but for all
the reasons stated in my Fri 11/2/2007 2:07 PM post (arguments you have
not responded to) as well as many other reasons, it does not appear at all
certain that such a motivational system would reliably remain stable and
"synchronized with the goals of the human race until the end of history,"
as you claim.

For example, humans might, for short-sighted personal gain (such as when
using them in weapon systems) or accidentally, alter such a motivational
system.  Or over time the inherent biases that were designed to make AGIs
have empathy for humans might cause them to have empathy for some humans
more than others, or might cause them to make decisions that they think
are in our best interest but are not.  Or perhaps AGI robots would begin
to embody the "human features" that they have been taught to be empathetic
toward better than people do.  Etc.

The world is too complicated and is going to change too rapidly in the
next one hundred, one thousand, or ten thousand years for any goal system
designed circa 2015 to remain appropriate until the end of history -
unless history ends pretty soon.

If I am wrong I would appreciate the enlightenment and increased hope that
would come with being shown how I am wrong.

Ed Porter

-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED]
Sent: Saturday, November 03, 2007 1:17 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Can humans keep superintelligences under control -- can
superintelligence-augmented humans compete


Jiri Jelinek wrote:
>> People will want to enjoy life:  yes.  And they should, of course.
>> But so, of course, will the AGIs.
>
> Giving AGI the ability to enjoy = potentially asking for serious
> trouble. Why shouldn't AGI just work for us like other tools we
> currently have (no joy involved)?

Isn't there a fundamental contradiction in the idea of something that
can be a "tool" and also be "intelligent"?  What I mean is, is the word
"tool" usable in this context?

To put it the other way around, consider the motivational system of the
best kind of AGI:  it is motivated by a balanced set of desires that
include the desire to explore and learn, and empathy for the human
species.  By definition, I would think, this simple cluster of desires
and empathic motivations *are* the things that "give it pleasure".

But the thing is, you can change your mind to go and get pleasure in a
different way sometimes.  For example, you could decide to transfer your
mind into the cognitive system of an artificial tiger for a week, and
during that time you would get pleasure from stalking and jumping onto
predator animals, or basking in the sun, or meeting lady tigers.  After
automatically being yanked back into human mental form again at the end
of the holiday, would you say that "you" get pleasure from hunting
predators, etc?  Do you get pleasure from the idea of [exploring
different sensoria]?  I think the latter would be true, and in the same
way an AGI, being quite close to us in design, could get pleasure from
[exploring different sensoria] without it changing the goals or
motivations of the AGI when it was being its native self.

I think that in general, making the AGI as similar to us as possible
(but without the aggressive and dangerous motivations that we are
victims of) would be a good idea simply because we want them to start
out with a strong empathy for us, and we want them to stay that way.

Does this make sense?

I agree that this is a complicated area, little explored before now.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;


Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-03 Thread Richard Loosemore

Jiri Jelinek wrote:

>> People will want to enjoy life:  yes.  And they should, of course.
>> But so, of course, will the AGIs.
>
> Giving AGI the ability to enjoy = potentially asking for serious
> trouble. Why shouldn't AGI just work for us like other tools we
> currently have (no joy involved)?


Isn't there a fundamental contradiction in the idea of something that 
can be a "tool" and also be "intelligent"?  What I mean is, is the word 
"tool" usable in this context?


To put it the other way around, consider the motivational system of the 
best kind of AGI:  it is motivated by a balanced set of desires that 
include the desire to explore and learn, and empathy for the human 
species.  By definition, I would think, this simple cluster of desires 
and empathic motivations *are* the things that "give it pleasure".


But the thing is, you can change your mind to go and get pleasure in a 
different way sometimes.  For example, you could decide to transfer your 
mind into the cognitive system of an artificial tiger for a week, and 
during that time you would get pleasure from stalking and jumping onto 
predator animals, or basking in the sun, or meeting lady tigers.  After 
automatically being yanked back into human mental form again at the end 
of the holiday, would you say that "you" get pleasure from hunting 
predators, etc?  Do you get pleasure from the idea of [exploring 
different sensoria]?  I think the latter would be true, and in the same 
way an AGI, being quite close to us in design, could get pleasure from 
[exploring different sensoria] without it changing the goals or 
motivations of the AGI when it was being its native self.


I think that in general, making the AGI as similar to us as possible 
(but without the aggressive and dangerous motivations that we are 
victims of) would be a good idea simply because we want them to start 
out with a strong empathy for us, and we want them to stay that way.


Does this make sense?

I agree that this is a complicated area, little explored before now.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60744301-905c0f


Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Mike Dougherty
On 11/3/07, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> Ok, shaping reality gives you pleasure. A machine would read it and
> offer you a many-orders-of-magnitude stronger, never-ending pleasure of
> the same type. And you would say "no, thanks"? There is a certain
> pleasure threshold after which the "I want it" gets irresistible no
> matter what risks are involved.

You are describing a very convoluted process of drug addiction.  If I
can get you hooked on heroin or crack cocaine, I'm pretty confident
that you will abandon your desire to produce AGI in order to get more
of the drugs to which you are addicted.

You mentioned in an earlier post that you expect to have this
monstrous machine invade my world and 'offer' me these incredible
benefits.  It sounds to me like you are taking the blue pill and
living contentedly in the Matrix.  If you are going to proselytize
that view, I suggest better marketing.  The intellectual requirements
to accept AGI-driven nirvana imply the rational thinking which
precludes accepting it.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60741497-8715c9


Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-03 Thread Jiri Jelinek
>People will want to enjoy life:  yes.  And they should, of course.
>But so, of course, will the AGIs.

Giving AGI the ability to enjoy = potentially asking for serious
trouble. Why shouldn't AGI just work for us like other tools we
currently have (no joy involved)?

Regards,
Jiri Jelinek

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60740372-3556d1


Re: [agi] NLP + reasoning + conversational state?

2007-11-03 Thread Mike Dougherty
On 11/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Google uses a cluster of 10^6 CPUs, enough to keep a copy of the searchable
> part of the Internet in RAM.

And a list of millions of hits is the ideal way to represent the
results, right?  Ask.com is publicly mocking this fact in an effort to
make themselves look better.  Kartoo.com does a good job of presenting
the relationship of search results to each other.

Suppose you get a tip about some cog sci research that might be
relevant to AGI.  You ask one of your undergraduate assistants to dig
up everything they can find about it.  Sure, they use Google.  They
use Lexisnexis.  They use a dozen primary data gathering tools.
Knowing you don't want 4Gb of text, they summarize all the information
into what they believe you are actually asking for - based on earlier
requests you have made, their own understanding of what you are
looking for and whatever they learn during the data collection
process.  A good research assistant gets recruited for graduate work,
a bad research assistant probably gets a pat on the back at the end of
the semester.

My question was about the feasibility of a narrow-AI research agent as
a useful step towards AGI.  Even if it's not fully adaptable for
general tasks, the commercial viability of moderate success would be
profitable.  Or is commercial viability too mundane a consideration
for ivory tower AGI research?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60737904-74aafd


Re: [agi] Connecting Compatible Mindsets

2007-11-03 Thread Jiri Jelinek
YKY,

> If you don't mind, we could use it for this purpose:

Please go ahead. Pick it up if you think it's a good idea. I'm too
busy with other development.

Regards,
Jiri Jelinek

On Nov 3, 2007 4:36 AM, YKY (Yan King Yin)
<[EMAIL PROTECTED]> wrote:
>
>
> On 11/3/07, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> > A problem with AGI developers is that they too often disagree on what
> > the best AGI approach is - which prevents or complicates
> > collaboration. I was once planning to develop an online app where
> > users could add a record about their "AGI under
> > architecture/development" and choose from (+ add to) a list of
> > project-characteristics broken to categories, so other developers
> > could visit and quickly get a good sense who is doing what and how
> > similar are those approaches to their best-AGI-approach-opinion.
> > Unfortunately, with all the projects I'm working on, I'm unlikely to
> > get to it in the foreseeable future. If something like that already
> > exists, I would like to know. I saw a few hardcoded lists, but it would
> > be great to have something that users could update. Maybe we would
> > then be surprised how many people "work on AGI". Most of it would
> > probably be junk projects or just some way-too-general ideas in the
> > air, but the main purpose would be to connect people with similar AGI
> > mindsets. Many AGI developers seem to be on their own. Knowing about
> > one more developer with highly compatible AGI mindset can help to keep
> > ideas in motion and make a difference.
>
> Yes, that's a great idea.  I actually have an empty wiki waiting to be used.
> If you don't mind, we could use it for this purpose:
> http://www.mogoo.com/~aiwiki/aiwiki/index.php/Main_Page
>
> Or if AGIRI can provide such a place, that would be nice too.
>
> YKY 
>  This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60736274-6e5bfb


Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread BillK
On 11/3/07, Richard Hollerith wrote:
> BillK writes:
> >>   Forget, for the moment, what you think is
> >> possible - if you could have anything you wanted, is this the end you
> >> would wish for yourself, more than anything else?
> >
> >Absolute Power over others and being worshipped as a God would be neat as 
> >well.
> >
> >Getting a dog is probably the nearest most humans can get to this.
>
> Is this a joke?
>
> Sometimes certain people can moralize too much or be too serious
> all the time.  Could it be that this is a reaction against
> someone being too serious or too moralistic in the past and your
> wanting to tweak their nose a bit and maybe get them to loosen up
> a little?
>


On some levels.
The best jokes have a lot of truth in them.
Humor is a way of saying what it is forbidden to mention.

Ever seen Bill Hicks in full flow?

BillK

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60729119-674d0f


Re: [agi] NLP + reasoning?

2007-11-03 Thread Russell Wallace
On 11/3/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> These are the result of very very direct reasoning, very low cpu usage
> (under 2 seconds, except for Lincoln, which had to weed out 20 things
> named "Lincoln County") and yet, its vaguely comparable to something
> that a 6-7-8-9-year-old might produce.
>
> Where is the developmental jump? At the pre-teen level?

I think this is the perfect answer to your question about why natural
language is the wrong place to start.

This isn't intended as personal criticism, but: look at what you just
said. You've started talking about IQ and implying a program is
vaguely comparable in intelligence to a 9 year old human...

Based on a program that Google outperforms by several orders of magnitude.

The problem with natural language is that the bandwidth is so tiny, it
necessarily relies primarily on the reader's imagination. We are
explicitly programmed, in other words, to assume intelligence on the
part of any entity that talks to us in semi-coherent English, and to
fill in all the gaps ourselves. There was intelligence at work in the
exchanges you quoted, yes, but the intelligence was in your brain, not
in the computer.

Before natural language is worth doing, you need to have a program
that does some nontrivial computation in the first place. My
suggestion is visual/spatial modeling of some form (such as the
virtual worlds stuff Novamente is doing), but _something_. Otherwise
you're just setting a trap for yourself.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60726582-557b6e


Re: [agi] Can humans keep superintelligences under control -- can superintelligence-augmented humans compete

2007-11-03 Thread Richard Loosemore

Jiri Jelinek wrote:

If "humans" take
advantage of the ability to enhance their own intelligence up to the
same level as the AGI systems, the amount of "dependence" between the
two groups will stay exactly the same, for the simple reason that there
will not be a sensible distinction between the two groups.


In order to keep up with AGIs, AGI improvements would likely have to
be significantly delayed - which is IMO unlikely to happen.
We will probably:
a) stay much less flexible for long enough to get used to of all kinds
of convenient AGI services and
b) decide not to pursue the never ending race and enjoy the life instead.
People look for simple ways how to enjoy life. Most don't want to
think hard. They want to have fun.
Theoretically, we could keep up. Practically, IMO no way.


I was speaking broadly, so I half agree a half disagree.

I mean "keep up with AGIs" in the broad sense of the human species as a 
whole having the option to choose an intelligence level equal to the 
AGIs.  If ten people are always early adopters, getting the latest, 
highest level within minutes of it being available, this counts as the 
species "keeping up" because anyone *could* have done the same thing, 
even if only ten actually did it on the first day it was available.


People will want to enjoy life:  yes.  And they should, of course.  But 
so, of course, will the AGIs.  With the right design, they would be 
coming down to our level often too.  There is nothing magically more 
"enjoyable" about being superintelligent:  it's just one of the options 
to explore in a full life.


Personally, I want to spend just a small amount of time as a 
Neanderthal, so I can find out what it is like to be inside Eliezer's 
brain.  ;-)


[:-)  Heck, you have to remember that I have special status: for me, it 
is always open season as far as EY is concerned].





Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60722534-be6224


Re: [agi] popularizing & injecting sense of urge

2007-11-03 Thread Richard Loosemore

Matt Mahoney wrote:

> --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
>> Matt,
>>
>>> You contribute to AGI every time you use gmail and add to Google's
>>> knowledge base.
>>
>> Then one would think Google would already be a great AGI.
>> Different KB is IMO needed to learn concepts before getting ready for NL.
>
> Google does not yet have enough computing power for AGI.


Unjustified assertion:  it depends on what type of AGI you are talking 
about and what its resource requirements are.   Nobody knows what the 
resource requirements are for AGI designs that have not yet been specified.




Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60720648-5eaa40


Re: [agi] popularizing & injecting sense of urgenc

2007-11-03 Thread Richard Loosemore

Matt Mahoney wrote:

> --- Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Example 4:  "Each successive generation gets smarter, faster, and less
>> dependent on human cooperation."  Absolutely not true.  If "humans" take
>> advantage of the ability to enhance their own intelligence up to the
>> same level as the AGI systems, the amount of "dependence" between the
>> two groups will stay exactly the same, for the simple reason that there
>> will not be a sensible distinction between the two groups.
>
> So your answer to my question "do you become the godlike intelligence that
> replaces the human race?" is "yes"?


Not correct: the answer is "no" because you used the inappropriate word 
"replace" in the above sentence.



Richard Loosemore

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60719564-921472


Re: [agi] Connecting Compatible Mindsets

2007-11-03 Thread Bryan Bishop
To get such a database going, I would recommend that it 
be "self-organizing" so that there are no hardcoded constraints as to 
categorization. This will let the territory map itself.

- Bryan

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60717866-f78e25


Re: [agi] Connecting Compatible Mindsets

2007-11-03 Thread Russell Wallace
Or you could try http://test.canonizer.com/, which is still in beta, but
was pretty much designed for this sort of thing.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60717765-86d5f6


Re: [agi] Nirvana? Manyana? Never!

2007-11-03 Thread Richard Hollerith
BillK writes:
>>   Forget, for the moment, what you think is
>> possible - if you could have anything you wanted, is this the end you
>> would wish for yourself, more than anything else?
>
>Absolute Power over others and being worshipped as a God would be neat as well.
>
>Getting a dog is probably the nearest most humans can get to this.

Is this a joke?

Sometimes certain people can moralize too much or be too serious
all the time.  Could it be that this is a reaction against
someone being too serious or too moralistic in the past and your
wanting to tweak their nose a bit and maybe get them to loosen up
a little?

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60716271-cf57cf


Re: [agi] NLP + reasoning?

2007-11-03 Thread William Pearson
On 02/11/2007, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 02, 2007 at 12:56:14PM -0700, Matt Mahoney wrote:
> > --- Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> > > On Oct 31, 2007 8:53 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > > > Natural language is a fundamental part of the knowledge
> > > base, not something you can add on later.
> > >
> > > I disagree. You can start with a KB that contains concepts retrieved
> > > from a well structured non-NL input format only, get the thinking
> > > algorithms working and then (possibly much later) let the system to
> > > focus on NL analysis/understanding or build some
> > > NL-to-the_structured_format translation tools.
> >
> > Well, good luck with that.  Are you aware of how many thousands of times 
> > this
> > approach has been tried?  You are wading into a swamp.  Progress will be 
> > rapid
> > at first.
>
> Yes, and in the first email I wrote, that started this thread, I stated,
> more or less: "yes, I am aware that many have tried, and that it's a
> swamp, and can anyone elucidate why?"  And, so far, no one has been able
> to answer that question, even as they firmly assert that surely it is a
> swamp. Nor has anyone attempted to posit any mechanisms that avoid that
> swamp, other than thought bubbles that state things like "starting from
> a clean slate, my system will be magic".
>

Here is my take on why I think it is a swamp.

I hypothesize that natural language has the same expressiveness as the
recursively enumerable languages [1], which means you need a machine
from the space of Turing machines to recognise all possible strings.
Further on from this, natural language also evolves in time, which
means you need to move through the space of Turing machines in order
to find programs that correctly parse it.

Moving through the space of Turing machines is fundamentally
experimental (you can move through subspaces of it, such as deciders [2],
with proofs, but that limits you to not being able to recognise some
strings). Experimenting in the space of Turing machines can lead to
programs deleterious to the system being created. So creating a system
that has a stable(ish) goal whilst experimenting is a necessary
precursor to trying to solve the NL problem.

All these statements assume memory bounded versions of these things,
and are tentative until I can find theories that cope with this.
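
As a toy illustration of the "experimental search through machine space"
point, here is a sketch in C++: it brute-forces a small, decidable subspace
(DFAs with up to three states over {a,b}) looking for a machine consistent
with some labeled example strings. The example data, the even-number-of-a's
target, and the size bound are invented for illustration; the argument above
is precisely that real natural language would push this search out of such
decidable subspaces and into the full space of Turing machines.

// dfa_search.cpp -- toy "search the machine space against observed data".
#include <cstdio>
#include <string>
#include <utility>
#include <vector>

struct DFA {
    int n;                       // number of states; state 0 is the start state
    int delta[3][2];             // delta[state][symbol], with 'a'=0, 'b'=1
    bool accept[3];
    bool run(const std::string& s) const {
        int q = 0;
        for (char c : s) q = delta[q][c == 'b'];
        return accept[q];
    }
};

int main() {
    // Observed "language data": strings labeled as in or out of the target
    // language (here, strings with an even number of a's).
    std::vector<std::pair<std::string, bool> > data = {
        {"", true}, {"a", false}, {"b", true}, {"aa", true},
        {"ab", false}, {"ba", false}, {"aab", true}, {"aba", true}
    };
    for (int n = 1; n <= 3; ++n) {                    // grow the machine space
        long tables = 1;
        for (int i = 0; i < 2 * n; ++i) tables *= n;  // n^(2n) transition tables
        for (long t = 0; t < tables; ++t) {
            for (int acc = 0; acc < (1 << n); ++acc) {
                DFA m;
                m.n = n;
                long code = t;
                for (int q = 0; q < n; ++q)
                    for (int s = 0; s < 2; ++s) { m.delta[q][s] = code % n; code /= n; }
                for (int q = 0; q < n; ++q) m.accept[q] = (acc >> q) & 1;
                bool fits = true;                     // the "experiment": test on the data
                for (const auto& d : data)
                    if (m.run(d.first) != d.second) { fits = false; break; }
                if (fits) {
                    std::printf("found a %d-state machine consistent with the data\n", n);
                    return 0;
                }
            }
        }
    }
    std::printf("no machine in this subspace fits the data\n");
    return 0;
}

Compiled with g++ -std=c++11, it reports a 2-state machine for this data;
change the labels and the search has to be run again, which is the sense in
which the process is experimental rather than derived.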

  Will Pearson

[1] http://en.wikipedia.org/wiki/Recursively_enumerable_language
[2] http://en.wikipedia.org/wiki/Decider

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60712161-687415


Re: [agi] Connecting Compatible Mindsets

2007-11-03 Thread YKY (Yan King Yin)
On 11/3/07, Jiri Jelinek <[EMAIL PROTECTED]> wrote:
> A problem with AGI developers is that they too often disagree on what
> the best AGI approach is - which prevents or complicates
> collaboration. I was once planning to develop an online app where
> users could add a record about their "AGI under
> architecture/development" and choose from (+ add to) a list of
> project-characteristics broken to categories, so other developers
> could visit and quickly get a good sense who is doing what and how
> similar are those approaches to their best-AGI-approach-opinion.
> Unfortunately, with all the projects I'm working on, I'm unlikely to
> get to it in the foreseeable future. If something like that already
> exists, I would like to know. I saw a few hardcoded lists, but it would
> be great to have something that users could update. Maybe we would
> then be surprised how many people "work on AGI". Most of it would
> probably be junk projects or just some way-too-general ideas in the
> air, but the main purpose would be to connect people with similar AGI
> mindsets. Many AGI developers seem to be on their own. Knowing about
> one more developer with highly compatible AGI mindset can help to keep
> ideas in motion and make a difference.
Yes, that's a great idea.  I actually have an empty wiki waiting to be
used.  If you don't mind, we could use it for this purpose:
http://www.mogoo.com/~aiwiki/aiwiki/index.php/Main_Page

Or if AGIRI can provide such a place, that would be nice too.

YKY

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=60692866-432c23