On 9/14/06, Nick Hay <[EMAIL PROTECTED]> wrote:
> How is this weight defined, or is it informal?
In the paper by Chaitin that I quoted, it is informal. To save people finding
it, the quote reads, "...if one has ten pounds of axioms and a twenty-pound
theorem, then that theorem cannot be derived from those axioms."
This is also called Chaitin's heuristic principle, which is formalised as
the \delta_g function in the paper by Calude that you cite. You are right
though, to pursue this discussion we will need to be more precise about
definitions. Chaitin's informal heuristic about "pounds" of theorems and
axiom systems isn't enough.
> In what precise sense does
> "3242356630320032482384029350=3242356630330032482384029350"
> lack a large amount of algorithmic information?
This specific statement does contain a significant amount of information,
but it is "light" in the sense that it belongs to a simple class of provable
statements of the form (for all x) x=x. Indeed, x=x may well be an axiom.
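To illustrate what "light" means here (my own sketch, not from the original thread): a statement of the form x=x can be recognised and verified by a constant-size program, no matter how many digits x has, so the class as a whole adds almost nothing beyond the digits of x itself.

```python
# Sketch: membership in the class of statements "x=x" is decided by a
# tiny, fixed checker, however long the numeral happens to be.

def is_trivial_identity(statement: str) -> bool:
    """Check whether a statement has the syntactic form 'x=x'."""
    left, sep, right = statement.partition("=")
    return sep == "=" and left == right

# A very long but trivially provable statement:
long_true = "3242356630320032482384029350=3242356630320032482384029350"
print(is_trivial_identity(long_true))      # True
print(is_trivial_identity("12345=12346"))  # False: the two sides differ
```

The checker's size does not grow with the statement, which is the sense in which such statements carry little *algorithmic* information beyond the numeral itself.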
> The key question is, what would a friendliness theorem be about?
I don't know what kinds of theorems you would prove about these systems.
Without a formal definition of friendliness, it's hard to know exactly what it is
that we should even be trying to prove.
> Would it be about a function where the data that goes into and out of
> it aren't important? For example, "x=x". Or would it be a theorem about
> the complete system of the AGI *and* universe that it interacts with?
A theorem about the complete system + the universe can be "light" if
it's general, e.g. proving that a multiplication program works in any
universe where the computer stays intact. In this case you don't have
to have a simple theory describing this particular location in our
universe. Unfortunately I can't think of a less trivial example just yet.
Something like multiplication in a sense ignores the state of the universe:
A number comes in, stuff happens, a number gets spat out the other side.
You can verify that the relationship between these two numbers obeys
some property. You don't have to take into account the world, it's a purely
"internal" problem.
I don't see how you could reduce the concept of friendliness in this way.
Even something as simple as taking your input and outputting it again
could be unfriendly if it means that you're passing on the secret codes
to start a nuclear war. Friendliness depends on what things mean in the
context of what's going on in the outside world.
In my view, worrying about whether one can prove that a system is
friendly is getting a bit ahead of ourselves. What we need first is
a formal definition of what "friendly" means. Then we can try to figure out
whether or not we can prove anything. I think we should focus on the
problem of definition first.
Shane
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]