RE: [agi] AI and computation (was: The Next Wave)

2003-01-12 Thread Ben Goertzel
 Once again, the interesting question is not "Is NARS a TM?", but "Is NARS
 a TM with respect to problem P?"  If the problem is "To answer Ben's email
 on `AI and computation'", then the system is not a TM (though it may be a
 TM in many other senses).  For this reason, discussing the computability
 and computational complexity of P becomes meaningless for such a system.

 Pei

I'm sorry, but I still don't understand exactly what you mean by "Is
computer program-instance X a TM with respect to problem P?"

Is this different from "Is computer program-instance X, at time t, a TM
with respect to problem P presented to it at time t?"

Because it seems like any computer program-instance X, at a given time t,
is modelable as a TM with respect to a problem P presented to it at time
t.

To get more concrete, suppose we carry out the following experiment:

Take a specific NARS instance in a particular state.  Call it X.  Give it
a query in a specific format, which asks it to sort a list of n numbers.
Then measure the amount of time X takes before it delivers a correct
answer to the question (a correct sorting of the list) in a specified
format.

We can replicate the NARS instance X on many different machines, hence we
can make a statistical study of this.  With insanely many machines, we
could try it for every possible list of n numbers.

By carrying out this experiment, we could calculate the worst case and
average case time complexity of this particular NARS instance as a sorting
algorithm.
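
To make this fully concrete, here is a minimal Python sketch of the
experiment.  nars_sort_query is a hypothetical stand-in for posing the
query to a live NARS instance and waiting for its answer -- not real NARS
code:

import random
import time

def nars_sort_query(instance, numbers):
    # Hypothetical stand-in: pose the sort query to the NARS instance
    # and block until it returns a correctly sorted list.  A real run
    # would talk to the actual system here.
    return sorted(numbers)

def measure(instance, n, trials=1000):
    # Estimate worst-case and average-case time for sorting lists of
    # length n by sampling random inputs; replicating X on many
    # machines just means running this loop in parallel.
    times = []
    for _ in range(trials):
        xs = [random.randint(0, 10**6) for _ in range(n)]
        t0 = time.perf_counter()
        nars_sort_query(instance, xs)
        times.append(time.perf_counter() - t0)
    return max(times), sum(times) / len(times)

worst, avg = measure(instance=None, n=100)
print(f"empirical worst case: {worst:.6f}s, average case: {avg:.6f}s")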

This seems to me to be discussing the computational complexity of a problem
for NARS, and it doesn't seem to be meaningless...

The conceptual problem here is the "particular state" assumption, right?
You can't get rid of it by averaging over all particular states, all
particular environments in the history of the system, or whatever --
because you wouldn't know what realistic pdf to use in doing the
weighted averaging...

-- Ben G



Re: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Shane Legg
Pei Wang wrote:

In my opinion, one of the most common mistakes made by people is to think AI
in terms of computability and computational complexity, using concepts like
Turing machine, algorithm, and so on.  For a long argument, see
http://www.cis.temple.edu/~pwang/551-PT/Lecture/Computation.pdf. Comments
are welcome.


It's difficult for me to attack a specific point after reading
through your paper because I find myself at odds with your views
in many places.  My views seem to be a lot more orthodox, I suppose.

Perhaps where our difference is best highlighted is in the
following quote that you use:

   “something can be computational at one level,
but not at another level” [Hofstadter, 1985]

To this I would say: something can LOOK like computation
at one level, but not LOOK like computation at another level.
Nevertheless it still is computation, and any limits due to
the fundamental properties of computation theory still apply.

Or to use an example from another field: A great painting
involves a lot more than just knowledge of the physical
properties of paint.  Nevertheless, a good painter will know
the physical properties of his paints well because he knows
that the product of his work is ultimately constrained by these.

That's one half of the story anyway; the other part is that I
believe that intelligence is definable at a pretty fundamental
level (i.e. not much higher than the concept of universal Turing
computation) but I'll leave that part for now and focus on this
first issue.

Shane



Re: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Ben Goertzel

Shane Legg wrote, responding to Pei Wang:
 Perhaps where our difference is best highlighted is in the
 following quote that you use:
 
 “something can be computational at one level,
  but not at another level” [Hofstadter, 1985]
 
 To this I would say: something can LOOK like computation
 at one level, but not LOOK like computation at another level.
 Nevertheless it still is computation, and any limits due to
 the fundamental properties of computation theory still apply.

Shane, I think you and Pei are using different language to say very
similar things...

It seems to me that NARS, Novamente, and any other programs that run on
Turing machine hardware (like contemporary computers) CAN be analyzed in
terms of computation theory.  The question is to what extent this is a
USEFUL point of view.  There may, for some programs, be noncomputational
perspectives that are more useful.

For example, suppose we have a program that simulates a stochastic or
quantum process.  It may be more convenient to view this program in
terms of randomness or quantum dynamics than in terms of strict Turing
computation.  This view may explain more about the high level abstract
behavior of the program.  But still at the low level there is an
explanation for the program in terms of computing theory.
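
A trivial Python sketch of the point (assuming nothing beyond a textbook
linear congruential generator): at the high level the output stream is
most usefully described as random, while at the low level it is a fixed
arithmetic recurrence.

class LCG:
    # A linear congruential generator: strictly deterministic at the
    # low level, but its output stream is most conveniently described
    # in the language of randomness at the high level.
    def __init__(self, seed):
        self.state = seed

    def next(self):
        # Numerical Recipes constants; any full-period choice works.
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

gen = LCG(seed=42)
print([gen.next() for _ in range(5)])  # "random-looking", yet fully
                                       # determined by the seed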

This is a special case of the general observation that, often, in a
complex system, the patterns observable at a coarse level of observation
are not useful patterns at a fine level of observation...

It may be more convenient to think about and study an AGI program in a
noncomputational way ... if one is looking at the overall behaviors and
structures of the program ... but if one wants to look at the EXACT
actions taken by the system and understand them, one has got to take the
computational point of view and look at the code and its effect on
memory and processor...

 That's one half of the story anyway; the other part is that I
 believe that intelligence is definable at a pretty fundamental
 level (i.e. not much higher than the concept of universal Turing
 computation) but I'll leave that part for now and focus on this
 first issue.

Intelligence may be *definable* at that level -- and I'd argue that
Pei's definition of intelligence (roughly: doing complex
goal-achievement with limited knowledge and resources) could even be
formulated at that level.  

But the structures and dynamics needed to make intelligence happen under
reasonable space and time resource constraints -- THESE, I believe,
necessarily involve primary theoretical constructs VERY DIFFERENT FROM
computation theory, which is a theory of generic computational processes,
not a theory that is very useful for the specific study of computational
processes that give rise to intelligence on an emergent level...

ben




Re: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Pei Wang
Shane,

One issue that makes that version of the paper controversial is the term
"computation", which actually has two senses: (1) whatever a computer
does, and (2) what is defined as `computation' in computability theory.
In the paper I'm using the second sense of the term.  (I'm revising the
paper to make this clearer.)

My argument, briefly speaking, is that it is quite possible, on current
computers, to solve problems in a way that is non-deterministic (i.e.,
context-sensitive) and open-ended (as in anytime algorithms).  Such a
process doesn't satisfy the definition of computation, doesn't follow a
predetermined algorithm, and has no fixed complexity.
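
For illustration, here is a generic anytime-algorithm sketch in Python
(this is not NARS's actual control mechanism, just the flavor of the
idea): the process can be interrupted at any moment, always has a current
best answer, and the resources it consumes depend on the time budget
rather than on the input alone.

import time

def anytime_best(candidates, score, budget_s):
    # Anytime search: keep a current best answer at all times; stop
    # whenever the budget runs out and report the best found so far.
    best, best_score = None, float("-inf")
    deadline = time.monotonic() + budget_s
    for c in candidates:
        if time.monotonic() >= deadline:
            break  # interrupted mid-search
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best

# The same question under different budgets may yield different answers.
goal = lambda x: -abs(x - 5_000_000)
print(anytime_best(range(10**7), goal, budget_s=0.001))
print(anytime_best(range(10**7), goal, budget_s=1.0))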

To implement such a process requires no magic --- actually many existing
systems already go beyond computability theory, though few people have
realized it.  A concrete example is my NARS --- there is a demo at
http://www.cogsci.indiana.edu/farg/peiwang/NARS/ReadMe.html (you know that,
but some others don't).  The system's capacity at the surface level cannot
be specified by computability theory, and the resources it spends on a
question are not fixed.

For that "level" issue, one way to see it is through the concept of a
virtual machine.  We all know that at a low level a computer only has
procedural languages and binary data, but at a high level it has
non-procedural languages (such as functional or logical languages) and
decimal data.  Therefore, if virtual machine M1 is implemented by virtual
machine M2, the two may still have quite different properties.  What I'm
trying to do is to implement a non-computing system on a computing one.

If you are still unconvinced, think about this problem: say the problem
you are trying to solve is to reply to my current email.  Is this problem
computable?  Do you follow an algorithm in solving it?  What is the
computational complexity of this process?

Pei

- Original Message -
From: Shane Legg [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, January 11, 2003 5:12 PM
Subject: Re: [agi] AI and computation (was: The Next Wave)


 Pei Wang wrote:
  In my opinion, one of the most common mistakes made by people is to
  think AI in terms of computability and computational complexity, using
  concepts like Turing machine, algorithm, and so on.  For a long
  argument, see
  http://www.cis.temple.edu/~pwang/551-PT/Lecture/Computation.pdf.
  Comments are welcome.

 It's difficult for me to attack a specific point after reading
 through your paper because I find myself at odds with your views
 in many places.  My views seem to be a lot more orthodox I suppose.

 Perhaps where our difference is best highlighted is in the
 following quote that you use:

 “something can be computational at one level,
  but not at another level” [Hofstadter, 1985]

 To this I would say: something can LOOK like computation
 at one level, but not LOOK like computation at another level.
 Nevertheless it still is computation, and any limits due to
 the fundamental properties of computation theory still apply.

 Or to use an example from another field: A great painting
 involves a lot more than just knowledge of the physical
 properties of paint.  Nevertheless, a good painter will know
 the physical properties of his paints well because he knows
 that the product of his work is ultimately constrained by these.

 That's one half of the story anyway; the other part is that I
 believe that intelligence is definable at a pretty fundamental
 level (i.e. not much higher than the concept of universal Turing
 computation) but I'll leave that part for now and focus on this
 first issue.

 Shane







RE: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Ben Goertzel
Pei:

 For that "level" issue, one way to see it is through the concept of a
 virtual machine.  We all know that at a low level a computer only has
 procedural languages and binary data, but at a high level it has
 non-procedural languages (such as functional or logical languages) and
 decimal data.  Therefore, if virtual machine M1 is implemented by virtual
 machine M2, the two may still have quite different properties.  What I'm
 trying to do is to implement a non-computing system on a computing one.

Interestingly though, even if M1 and M2 are very different, bisimulation may
hold.

For example, NARS can simulate any Turing machine -- it has universal
computation power -- but this will often be a very inefficient simulation
(you need to use HOI with maximal confidence and boolean strength)...

The problem is that bisimulation, without taking efficiency into account,
is a pretty weak idea.  This is a key part of my critique of Wolfram's
thinking...

ben




Re: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Pei Wang

- Original Message -
From: Shane Legg [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, January 11, 2003 9:42 PM
Subject: Re: [agi] AI and computation (was: The Next Wave)


 Hi Pei,

  One issue that makes that version of the paper controversial is the
  term "computation", which actually has two senses: (1) whatever a
  computer does, and (2) what is defined as `computation' in computability
  theory.  In the paper I'm using the second sense of the term.  (I'm
  revising the paper to make this clearer.)

 Ok, so just to be perfectly clear about this.  You maintain that a
 real computer (say my laptop here that I'm using) is able to do
 things that are beyond what is possible with a theoretical computer
 (say a Turing machine).  Is that correct?

Yes. See below.

 If so, then this would seem to be the key difference of opinion
 between us.

Right. Again, let's use NARS as a concrete example.  It can answer
questions, but if you ask the system the same question twice at different
times, you may get different answers.  In this sense, there is no
algorithm that takes the question as input and produces a unique answer
as output.  You may say that there is still an algorithm (or many
algorithms) in the system, which takes many other factors into account in
producing answers, and I agree (simply because that is how NARS is
coded); but still, there is no single algorithm that is solely
responsible for the question-answering process, and that is the point.
The cooperation of many algorithms, under the influence of many factors
besides the current input, is not necessarily equivalent to an algorithm,
or a Turing machine, as defined in the theory of computation.  The main
idea in Turing computation is that the machine serves as a function that
maps each input uniquely into an output.  Intelligence, with its
adaptivity and flexibility, should not be seen as such a fixed mapping.
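
(For concreteness, a toy Python illustration of what I mean -- this is
hypothetical code, not NARS itself, just the flavor: the answer to a
fixed question depends on all the experience the system has absorbed so
far.)

class ToyAnswerer:
    # Toy illustration (not NARS itself): answers are produced from
    # the whole body of accumulated experience, not from the current
    # question alone.
    def __init__(self):
        self.evidence = []

    def absorb(self, observation):
        self.evidence.append(observation)

    def ask(self, question):
        # Answer with the most recent relevant experience, if any.
        relevant = [o for o in self.evidence if question in o]
        return relevant[-1] if relevant else "no opinion yet"

s = ToyAnswerer()
print(s.ask("swan"))        # -> no opinion yet
s.absorb("swan is white")
print(s.ask("swan"))        # -> swan is white
s.absorb("swan is black")
print(s.ask("swan"))        # same question, different answer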

  If you are still unconvinced, think about this problem: say the problem
  you are trying to solve is to reply to my current email.  Is this
  problem computable?  Do you follow an algorithm in solving it?  What is
  the computational complexity of this process?

 I have no reason that I can think of to believe that a response to
 your email could not be generated by an algorithm.  Perhaps a big
 fancy one with high computational complexity, but I don't see any
 reason why not.

I'm not asking whether it could --- of course I can imagine an
algorithm that does nothing but take my email as input, and produce the
above reply of yours as output.  I just don't believe that is how your
mind works.  For one thing, to use an algorithm to solve a problem means,
by definition, that if I repeat the question, you'll repeat the answer.
Since I know you in person, I'm sure you are more adaptive than that. ;-)

Cheers,

Pei

 Cheers
 Shane








RE: [agi] AI and computation (was: The Next Wave)

2003-01-11 Thread Ben Goertzel


Pei wrote:
 Right. Again, let's use NARS as a concrete example.  It can answer
 questions, but if you ask the system the same question twice at
 different times, you may get different answers.  In this sense, there is
 no algorithm that takes the question as input and produces a unique
 answer as output.  You may say that there is still an algorithm (or many
 algorithms) in the system, which takes many other factors into account
 in producing answers, and I agree (simply because that is how NARS is
 coded); but still, there is no single algorithm that is solely
 responsible for the question-answering process, and that is the point.

Pei!!  I get the feeling you are using a very nonstandard definition of
"algorithm"!!

 The cooperation of many algorithms, under the influence of many factors
 besides the current input, is not necessarily equivalent to an
 algorithm, or a Turing machine, as defined in the theory of computation.
 The main idea in Turing computation is that the machine serves as a
 function that maps each input uniquely into an output.  Intelligence,
 with its adaptivity and flexibility, should not be seen as such a fixed
 mapping.

No!!!

Consider a Turing machine with three tapes:

* input
* output
* internal state

Then the correct statement is that the Turing machine maps each (input,
internal state) pair into a unique output.

This is just like NARS.  If you know its input and its internal state, you
can predict its output.  (Remember, even if there is a quasi-random number
generator in there, this generator is actually a deterministic algorithm
whose output can be predicted based on its current state).
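
A toy Python sketch of this view (hypothetical code, not NARS): make the
internal state an explicit argument, and the apparent non-determinism
disappears.

def step(question, state):
    # Deterministic transition: the pair (input, internal state)
    # uniquely determines (output, next internal state).
    count = state.get(question, 0)
    output = f"answer #{count} to {question!r}"
    next_state = {**state, question: count + 1}
    return output, next_state

state = {}
o1, state = step("sort [3,1,2]", state)
o2, state = step("sort [3,1,2]", state)
print(o1)  # answer #0 to 'sort [3,1,2]'
print(o2)  # answer #1 to 'sort [3,1,2]' -- same input, different output,
           # yet replaying from the same (input, state) pair always
           # reproduces the same output: the mapping is a fixed function.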

The mapping from NARS into a 3-tape Turing machine is more natural than the
mapping from NARS into a standard 1-tape Turing machine.

BUT, it is well-known that there is bisimulation between 3-tape and 1-tape
Turing machines.  This bisimulation result shows that NARS can be mapped
into a 1-tape Turing machine...

The bisimulation between 3-tape and 1-tape Turing machines is expensive, but
that only shows that the interpretation of NARS as a 1-tape Turing machine
is *awkward and unnatural*, not that it is impossible.


 I'm not asking whether it could --- of course I can imagine an
 algorithm that does nothing but take my email as input, and produce the
 above reply of yours as output.  I just don't believe that is how your
 mind works.  For one thing, to use an algorithm to solve a problem
 means, by definition, that if I repeat the question, you'll repeat the
 answer.  Since I know you in person, I'm sure you are more adaptive than
 that.  ;-)

You should broaden your definition of algorithms to include algorithms with
memory.

This is standard in CS too -- e.g. the theory of stack automata...

You should say:

"to use a deterministic algorithm to solve a problem means, by
definition, that if I repeat the question and you have the same internal
state as the first time I asked the question, you'll repeat the answer."

Shane has a memory.  So does a simple stack automaton.

But they're both basically deterministic robots ;-)

[though Shane has more capability to amplify chance (i.e. too complex for
the observer to understand) fluctuations into big patterns than a simple
stack automaton, which is related to his greater degree of
consciousness...]


-- ben
