I am not wedded to any idea, but some things make a kind of sense. There may 
have been laws put in place, by random chance or by intelligent design, for 
this section of reality we know as the Hubble volume. Some design schema, or 
maybe just a multiverse that we evolved from. Either way, it is far above my 
intellectual pay grade, and above my ability to influence, even in a 
psychological way. We have to define superintelligence, because it is too 
vague a concept. We all have a rough notion of what it might look like: 
something always much faster at processing, like the computers of his day 
that the sci-fi writer Arthur C. Clarke called high-speed morons. The 
VR-addict scenario will come when we turn inwards, because getting into space 
is too taxing on our energy. It is also possible to be virch-heads while 
traveling through space, because why not?


A merger of the high-speed morons with ourselves also seems plausible. I 
liken this not to Star Trek's Borg, but to something closer in concept to 
ancient biology, where a free-living bacterium got repeatedly swallowed by 
another form of life as a meal, and then gave its host more than it bargained 
for. Voilà! A new form of life: the swallowed bacterium became the 
mitochondrion, which gave us the eukaryotic cell and, eventually, 
multicellular life. Call it synergy, serendipity, or God, but this could be 
occurring now, with the robots.


-----Original Message-----
From: Telmo Menezes <te...@telmomenezes.com>
To: everything-list <everything-list@googlegroups.com>
Sent: Fri, Sep 16, 2016 9:38 am
Subject: Re: Non-Evolutionary Superintelligences Do Nothing, Eventually

On Wed, Sep 14, 2016 at 12:20 AM, John Clark <johnkcl...@gmail.com> wrote:
> On Tue, Sep 13, 2016  Telmo Menezes <te...@telmomenezes.com> wrote:
>
>> In my "designed superintelligence" scenario, the entity is confronted
>> with a protection mechanism that was conceived by a lesser
>> intelligence.
>
>
> Yes, the most recent iteration of the Jupiter Brain was designed by
> something that was less intelligent than itself, but at least it had some
> intelligence. We were produced by Evolution, which had no intelligence at
> all;

Hmm... I would say that evolution as a whole has no intelligence, in
the sense that it has no goal. It is a complexification process,
inherent to reality.

Locally (in the sense of evolution applied to an existing species
like humans), it is not so clear to me that it has no intelligence. It
leads to better and better designs, in the context of a certain
environment and previous constraints.

> and yet even we don't all decide to become drug addicts.

Precisely because of the above "local" consideration.

> But I
> understand where the concern comes from: could drug addiction be the first
> sign of a very dangerous positive feedback loop?
>
> During most of human existence addiction was a non-issue, but then about
> 8000 BC alcoholic beverages were invented, but they were so dilute you'd
> really have to work at it to get into trouble. Then about 500 years ago
> distilled alcoholic beverages were invented and it became much easier to
> become an alcoholic. Today we have many drugs that are far more powerful
> than alcohol. Could the answer to the Fermi Paradox be that this trend will
> continue exponentially? Could the universe be full of ETs but they are all
> lotus eaters experiencing a billion year long orgasm and accomplishing
> nothing? Maybe. But maybe not, that scenario assumes absolutely nobody can
> resist taking the drug (or rather its electronic counterpart), not even
> those that fully understand what taking the drug will lead to. We're not as
> smart as a Jupiter Brain, but most of us are smart enough to know that
> taking crack would be a bad idea.

Right, but my point is that what makes this a "bad idea" is our own
evolutionary context. Why is it a bad idea exactly? It becomes
tautological. It is a bad idea because it hurts your ability to pursue
goals dictated by evolution. Outside of evolution, why would an entity
not choose the easy way out?

>> if we want the designed AI to follow
>> certain rules, we are the ones setting the rules and we are the ones
>> trying to prevent it from changing them.
>
>
> If you're successful in making an AI that cannot change its basic goal
> structure then you've made an insane AI that will be of no use to us or to
> itself or to anything else. Asimov's three laws of robotics make for some
> very entertaining stories but they could never work in practice.

I agree.

> When people talk about making a friendly AI (aka a slave AI) they are
> talking nonsense. It's nuts to think an AI will always defer to humans and
> obey their every command no matter how much more intelligent it becomes than
> any member of the human species, and will continue to obey even when the AI
> becomes more intelligent than the entire species put together. It just isn't
> going to happen.

I agree completely.
What I am trying to do with my paper is to provide a formal argument
for this, using an angle that I have not quite found elsewhere. Of
course, I might be ignorant of related work. Or there could be a flaw
in my argument. These last points were my motivation to discuss this
as a working paper with interested people.

Telmo.

>   John K Clark
>
>

