--- Jiri Jelinek <[EMAIL PROTECTED]> wrote:

> > AGI does not need promoting.
> 
> Considering
> a) how important AGI is
> b) how many dev teams seriously work on AGI
> c) how many investors are willing to spend good money on AGI R&D
> I believe AGI does need promoting. And it's IMO similar to the
> immortality research some of the Novamente folks are involved in. It's
> just unbelievable how much money (and other resources) is being spent
> on all kinds of nonsense/insignificant projects worldwide. I wish
> every American gave just $1 for AGI and $1 for immortality research.
> Imagine what this money could do for all of us (if used wisely).
> Unfortunately, people would rather spend the money on popcorn at the
> cinema.

You contribute to AGI every time you use Gmail and add to Google's knowledge
base.

> > We should be more concerned about the risks of AGI.  When humans can make
> > machines smarter than themselves, then so can those machines.  The result
> > will be an intelligence explosion.  http://mindstalk.net/vinge/vinge-sing.html
> 
> I'll check your links later, but generally, we can avoid many risks by
> controlling AGI's goal system - which should not be that difficult if
> the AGI is well designed.

It should not seem difficult to write a program that tests whether other
programs halt -- until you study the problem.  It is not you who will be
designing the AGI.  It is another AGI.  And it is not designing -- it is
experimenting with an existing design.
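
To make the halting-problem point concrete, here is the classic diagonal
argument as a rough Python sketch (halts() and confound() are illustrative
names for the argument, not anything real):

    # Suppose someone claims this function exists and always answers correctly:
    def halts(program_source, input_data):
        """Return True if program_source halts on input_data, else False."""
        ...  # assumed to exist

    def confound(program_source):
        # Ask the claimed oracle about a program run on its own source,
        # then do the opposite of whatever it predicts.
        if halts(program_source, program_source):
            while True:
                pass  # predicted to halt, so loop forever
        else:
            return  # predicted to loop, so halt immediately

    # Feeding confound() its own source is a contradiction: if halts() says
    # it halts, it loops; if halts() says it loops, it halts.  No such
    # halts() can exist.  The problem only looks easy before you study it.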

> 
> > The problem is that humans cannot predict -- and therefore cannot
> > control -- machines that are vastly smarter.
> 
> "cannot predict" - I agree.
> "cannot control" - I disagree. Controlling goals, subgoals, and the
> real world impact (possibly using independent narrow AI tools) will do
> the trick.

Prediction, control, and modeling are equivalent.  You cannot manage a team if
you don't know what they are doing, or if you can't trust them.

> > Could your consciousness exist in a machine with different goals or
> > different memories?
> 
> IMO no.

Could it exist in a machine with the same goals and memories?  What if they
were only a little different?  When you start asking questions like this, you
introduce a conflict between your evolutionarily programmed, immutable belief
in consciousness and free will, and your logic, which says they don't exist.

> > Do you become the godlike intelligence that replaces the human
> > race?
> 
> Godlike intelligence? :) Ok, here is how I see it: If we survive, I
> believe we will eventually get plugged into some sort of pleasure
> machine and we will not care about intelligence at all. Intelligence
> is a useless tool when there are no problems and no goals to think
> about. We don't really want any goals/problems in our minds.
> Basically, the goal is to not have goals and to safely experience
> pleasure as intense as the available design allows for as long as
> possible. AGI could eventually be tasked with taking care of whatever
> that requires + searching for system improvements and for things that
> an altered human mind might consider even better than feelings as we
> know them now. Many might think that they love someone so much that
> they would not tell him/her "bye" and get plugged into a pleasure
> machine, but I'm pretty sure they would change their mind after the
> first trial of a well-designed device of that kind. That's how I
> currently see the best possible future. Some people, when talking
> about advanced aliens, ask "Where are they?" Possibly, they are in
> such a pleasure machine and don't really care about anything, feeling
> like true gods in a world where concepts like intelligence are totally
> meaningless.

"Godlike intelligence" is the best way I can describe the future of AGI.  The
human brain has 10^15 synapses, or 10^9 more entropy than the E. Coli genome. 
The quantum state of the universe has 10^122 bits of entropy.  You can no more
comprehend the limits of computation than the bacteria in your gut could
comprehend human life.
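
Rough back-of-the-envelope arithmetic behind that comparison, with every
figure an order-of-magnitude assumption rather than a measurement:

    # All numbers are rough assumptions for illustration only.
    synapses = 1e15              # approximate synapse count in a human brain
    bits_per_synapse = 5         # assume a few bits of state per synapse
    brain_bits = synapses * bits_per_synapse        # ~5e15 bits

    ecoli_base_pairs = 4.6e6     # approximate E. coli genome length
    bits_per_base = 2            # 4 possible bases = 2 bits each
    genome_bits = ecoli_base_pairs * bits_per_base  # ~1e7 bits

    universe_bits = 1e122        # rough bound on the universe's entropy in bits

    print(f"brain / genome   ~ {brain_bits / genome_bits:.0e}")    # ~5e+08
    print(f"universe / brain ~ {universe_bits / brain_bits:.0e}")  # ~2e+106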

Your goals are selected by evolution.  There is a good reason why you fear
death and then die: you want what is good for the species.  We could
circumvent our goals through technology, for example, by uploading our brains
into computers and reprogramming them.  When a rat can stimulate its nucleus
accumbens by pressing a lever, it will forgo food, water, and sleep until it
dies.  We worry about AGI destroying the world by launching 10,000 nuclear
bombs.  We should be more worried that it will give us what we want.


-- Matt Mahoney, [EMAIL PROTECTED]
