On 28 July 2010 19:56, Jan Klauck <jkla...@uni-osnabrueck.de> wrote:

> Ian Parker wrote
>
> > What we would want
> > in a "*friendly"* system would be a set of utilitarian axioms.
>
> "If we program a machine for winning a war, we must think well what
> we mean by winning."
>

I wasn't thinking about winning a war; I was thinking much more about sexual
morality and men kissing.

"Winning" a war is achieving your political objectives in the war. Simple
definition.

>
> (Norbert Wiener, Cybernetics, 1948)
>
> > It is also important that AGI is fully axiomatic
> > and proves that 1+1=2 by set theory, as Russell did.
>
> Quoting the two important statements from
>
>
> http://en.wikipedia.org/wiki/Principia_Mathematica#Consistency_and_criticisms
>
> "Gödel's first incompleteness theorem showed that Principia could not
> be both consistent and complete."
>
> and
>
> "Gödel's second incompleteness theorem shows that no formal system
> extending basic arithmetic can be used to prove its own consistency."
>
> So in effect your AGI is either crippled but safe, or powerful but
> potentially behaving differently from your axiomatic intentions.
>

You have to state what your axioms are. That is in effect what Gödel's
theorem tells us: some of your statements must remain unprovable. What I was
in fact thinking of was something like Mizar. Mathematics starts from simple
ideas. The axioms we cannot prove should be listed explicitly. We can't
prove them, so let's list them, along with all the other assumptions.

In a Mizar proof we assume things and argue the case for a theorem from what
we have assumed. What you should be able to do is get from the foundations
of Russell and Bourbaki to something really meaty like Fermat's Last Theorem
or the Riemann Hypothesis.
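
As a minimal sketch of what "listing the axioms" means in a modern checker
(Lean here rather than Mizar; all the names are hypothetical), an unprovable
assumption is declared as an explicit axiom, and the kernel can then report
exactly which axioms a theorem rests on:

-- Lean 4 sketch (hypothetical names): unprovable assumptions are
-- declared as explicit axioms, never smuggled in silently.
axiom RiemannHypothesis : Prop
axiom rh : RiemannHypothesis                     -- assumed, not proved
axiom PrimeDensityBound : Prop
axiom rh_implies_density : RiemannHypothesis → PrimeDensityBound

-- A result that is conditional on the assumptions above.
theorem density_bound : PrimeDensityBound :=
  rh_implies_density rh

-- Ask the checker which unproven assumptions the proof rests on.
#print axioms density_bound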

The organization of Mizar (and Alcor, its front end) is very much a part of
AGI. Alcor in fact has to do a job similar to Google's: searching for
theorems. Mizar, though, differs from Google in that we have lemmas; you
prove something by linking the lemmas up.

Suppose I were to search for "Riemann Hypothesis". Alcor should give me all
the theorems that depend on it. It should tell me about the density of
primes. It should tell me about the weak Goldbach conjecture, which Hardy
and Littlewood showed holds for all sufficiently large odd numbers if the
generalized Riemann hypothesis is true.
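
To make that kind of query concrete, here is a toy sketch in Python (the
dependency data is invented for illustration): theorems are nodes, "depends
on" is an edge, and the search simply walks the graph to find everything
resting on the hypothesis.

from collections import deque

# Toy dependency graph (invented entries): an edge A -> B means
# "B is known to depend on A".
DEPENDS = {
    "Riemann Hypothesis": ["density of primes",
                           "weak Goldbach for large odd numbers"],
    "density of primes": ["a further lemma"],
}

def consequences(hypothesis):
    """Breadth-first walk returning every known result that rests,
    directly or through lemmas, on `hypothesis`."""
    seen, queue, found = {hypothesis}, deque([hypothesis]), []
    while queue:
        for dep in DEPENDS.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                found.append(dep)
                queue.append(dep)
    return found

print(consequences("Riemann Hypothesis"))
# -> ['density of primes', 'weak Goldbach for large odd numbers',
#     'a further lemma']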

Google is a step towards AGI. An Alcor that could produce chains
of argument and find lemmas would be a big step towards it.

Could Mizar contain knowledge that is non-mathematical? In a sense it
already can. Mizar will contain Riemannian differential geometry, which is
simply a piece of pure maths. I am allowed to make a conjecture, an axiom if
you like, that Riemannian geometry, in the form of general relativity, is
the way the Universe works. I have stated this as an unproven assertion: one
that has been repeatedly verified experimentally, but unproven in the
mathematical universe.

>
> > We will need morality to be axiomatically defined.
>
> As constraints, possibly. But we can only check the AGI in runtime for
> certain behaviors (i.e., while it's active), but we can't prove in
> advance whether it will break the constraints or not.
>
> Get me right: We can do a lot with such formal specifications and we
> should do them where necessary or appropriate, but we have to understand
> that our set of guaranteed behavior is a proper subset of the set of
> all possible behaviors the AGI can execute. It's heuristics in the end.
>

The heuristics could be tested in an offline system.

>
> > Unselfishness going wrong is in fact a frightening thought. It would in
> > AGI be a symptom of incompatible axioms.
>
> Which can happen in a complex system.
>

Only if the definitions are vague. The definition of happiness is vague.
Better to have a system based on "*democracy*" in some form or other. The
beauty of Matt's system is that we would remain ultimately in charge. We
make rules such as no imprisonment without trial, a minimum of laws
restricting personal freedom (men kissing), separation of powers between the
judiciary and the executive, and the resolution of disputes without
violence. These are, I repeat, *not* fundamental philosophical principles
but rules which our civilization has devised and which have been found to
work.

I have mentioned before that we could have more than one AGI system. All the
"*derived*" principles would be tested offline on another AGI system.

>
> > Suppose system A is monitoring system B. If system B's
> > resources are being used up, A can shut down processes in B. I talked
> > about computer gobbledygook. I also have the feeling that with AGI we
> > should be able to get intelligible advice (in NL) about what was going
> > wrong. For this reason it would not be possible to overload AGI.
>
> This isn't going to guarantee that system A, B, etc. behave in all
> ways as intended, unless they are all special-purpose systems (here:
> narrow AI). If A, B, etc. are AGIs, then this checking is just a
> heuristic, not a guarantee or proof.
>
> > In a resource limited society freeloading is the biggest issue.
>
> All societies are and will be constrained by limited resources.
>
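
To make the A-monitors-B arrangement quoted above concrete, here is a
minimal Python sketch (the threshold is an assumption, and it uses the
third-party psutil library): A watches resource use, stops a runaway
process, and says in plain language what it did and why.

import psutil  # third-party: pip install psutil

CPU_LIMIT = 90.0  # percent; an assumed threshold

def monitor_once():
    """One pass of system A watching system B's processes: stop
    anything exceeding the CPU limit and explain the action in
    natural language rather than computer gobbledygook."""
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            usage = proc.cpu_percent(interval=0.1)
            if usage > CPU_LIMIT:
                print(f"Stopping {proc.info['name']} "
                      f"(pid {proc.info['pid']}): "
                      f"it was using {usage:.0f}% of a CPU.")
                proc.terminate()  # A shuts down a process in B
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue

monitor_once()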

I can perhaps change the topic a bit here. A post-singularity society will
not be limited in terms of labour; it will be limited in terms of natural
resources. The true AGI singularity is some distance away. There is also a
robotic singularity, which simply involves human-level manual dexterity. It
does *not necessarily* involve AGI, and it is (probably) a bare 10 years
away. Obama has cancelled NASA's plans for going to the Moon in 2020. We
should in fact be thinking about building an O'Neill space colony in 2020
using von Neumann technology - which *only* requires manual dexterity. Or
rather, what we should be thinking of is producing a robot from asteroid
material and sunlight, one capable of building enormous structures.

>
> > The fundamental fact about Western crime is that very little of it is
> > to do with personal gain or greed.
>
> Not that sure whether this statement is correct. It feels wrong from
> what I know about human behavior.
>

It is contrary to what we have assumed. Just over 50% of crime is directly
due to substance abuse.

>
> >> Unselfishness gone wrong is a symptom, not a cause. The causes for
> >> failed states are different.
> >
> > Axiomatic contradiction. Cannot occur in a mathematical system.
>
> See above...
>

See my arguments above.


  - Ian Parker
