Re: [agi] Re: Superseding the TM [Was: How valuable is Solmononoff...]

2007-11-09 Thread William Pearson
On 09/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:
 On 11/8/07, William Pearson [EMAIL PROTECTED] wrote:
  On 08/11/2007, Jef Allbright [EMAIL PROTECTED] wrote:

   This discussion reminds me of hot rod enthusiasts arguing passionately
   about how to build the best racing car, while denigrating any
   discussion of entropy as outside the practical.
  
 
  You are considerably overstating the case.

 Yes, if I had claimed they were analogous.  I said merely that it
 reminds me of, in the simple sense that those who don't grok theory
 tend to denigrate it.

Fair enough. I've seen some of that trying to find a PhD supervisor.
Not that I fully grok the theory myself, but I feel it is important to
try.


  UAI so far has yet to prove its usefulness. It is just a
  mathematical formalism that is incomplete in a number of ways.
 
  1) Doesn't treat computation as outputting to the environment, thus
  can have no concept of saving energy or avoiding interference with
  other systems by avoiding computation. The lack of energy saving means
  it is not a valid model for solving the problem of being a
  non-reversible intelligence in an energy-poor environment (which
  humans are and most mobile robots will be).

 This is intriguing, but unclear to me.  Does it entail anything
 consequential beyond the statement that Solomonoff induction is
 incomputable?  Any references?


Not that I have found; I tried to write something about it a while
back, but it is not very appropriate for this audience.

I'll try to write something more coherent, with an example, in the
morning. The following paragraph is just food for thought.

Consider the system's energy usage as an additional hidden output on
the output tape. AIXI doesn't know to vary this output, and can't vary
it unless it changes the amount of computation it does and knows that
it is doing so; consequently it can't update its prior over which
strategies it should pursue. My thinking is that it would not be able
to find the optimum strategy, even given infinite resources, because
the problem is not a lack of computational resources, but the effect
its computation has on the environment.
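A toy sketch of what I mean (all names here are made up for
illustration, not part of any AIXI formalism): two policies with the
identical input-to-output mapping are indistinguishable on the
percept/action channel, yet dump very different amounts of computation
(and hence energy) into the environment, a hidden output the agent
never observes.

```python
# Hypothetical illustration: two policies compute the same mapping,
# but one burns far more computation doing it. An agent judged only
# on its I/O channel cannot tell them apart, even though their
# energy cost to the environment differs enormously.

def cheap_policy(percept):
    # constant-time answer
    return percept % 2

def wasteful_policy(percept):
    # same input -> output mapping, but with pointless extra work
    acc = 0
    for _ in range(10_000):
        acc = (acc + percept) % 2
    return percept % 2

def energy_used(policy):
    # crude stand-in for physical energy: count loop iterations
    return 1 if policy is cheap_policy else 10_000

percepts = [3, 8, 13]
same_behaviour = all(cheap_policy(p) == wasteful_policy(p)
                     for p in percepts)
total_cheap = sum(energy_used(cheap_policy) for _ in percepts)
total_wasteful = sum(energy_used(wasteful_policy) for _ in percepts)

print(same_behaviour)               # True: identical on the I/O channel
print(total_cheap, total_wasteful)  # 3 vs 30000: very different hidden output
```

The point is just that a formalism which only scores the visible
output tape has no handle on the difference between these two.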



  2) It is based on Sequential Interaction Machines, rather than
  Multi-Stream Interaction Machines, which means it might lose out on
  expressiveness as talked about here.
 
  http://www.cs.brown.edu/people/pw/papers/bcj1.pdf

 I began to read the paper, but it was mostly incomprehensible to me.

I'll grant you that; I've only brushed the surface of that paper
myself, but others I have found have been more readable and cite it as
the discussion of MIM systems, so I thought I would include it for
completeness.

Some decent slides are here

http://www.cs.brown.edu/people/dqg/Papers/wurzburg.ps

I actually came across this relatively recently, whilst reading about it here.

http://lambda-the-ultimate.org/node/203

Which you may find interesting.

 The terms and grammar were legitimate, but I couldn't translate the
 claims to any workable model of reality -- I actually expected to see
 Sokal mentioned as co-author -- but upon googling I found the
 following paper in response, which in comparison was quite
 comprehensible.

 http://www.macs.hw.ac.uk/~greg/publications/cm.cj07.pdf

 I'd be interested in knowing whether (and how) you think the response
 paper gets it wrong.


Regarding the first part of the response, section 5.2, with all of the
teletype conversation being recorded: I would agree that a PTM isn't
calculating anything different, but it is doing a different real-world
job. It depends on whether you want to define computation as (a)
calculation or (b) a real-world job computers can do. I'm going with
what real jobs computers do, as that is more important for practical
AI.

The second part is a bit odd; their definition of the TM with three
tapes seems very similar to the standard definition of the SIM PTM.
Anyway, I shall not try to delve into the exact differences. My
problem with this section is that while it may be possible to have
some TM that can do the same thing as a PTM, that goes above and
beyond what we currently call equivalent to a Turing Machine, namely
everything that can be made to recognise the recursively enumerable
languages.

http://en.wikipedia.org/wiki/Recursively_enumerable_language

These only require functions (fixed mappings between input and
output), and PTMs go beyond this into dealing with mappings that
change, e.g. through learning. So while a TM might be able to handle
streams efficiently, the lambda calculus is not able to do the same
thing, despite being supposedly computationally equivalent, because it
is purely functional. So something needs to change in our theory of
universal computability, somewhere.
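To make the distinction concrete, here is a small sketch (the names
are my own, chosen for illustration): a classical function realises
one fixed input-to-output mapping, while an interaction-machine-style
object carries state across interactions, so the mapping it realises
changes with its input history.

```python
# Hypothetical sketch: a fixed function vs a persistent stream machine.
# The function's output depends only on the current input; the machine's
# output depends on the whole history of inputs so far, i.e. the mapping
# it implements drifts over time (a crude stand-in for learning).

def fixed_mapping(x):
    # classical function: same input, same output, always
    return x * 2

class PersistentMachine:
    """Output depends on the entire interaction history, not just x."""
    def __init__(self):
        self.history_sum = 0  # state survives between interactions

    def step(self, x):
        self.history_sum += x
        return x * 2 + self.history_sum

m = PersistentMachine()
print([fixed_mapping(x) for x in (1, 1, 1)])  # [2, 2, 2]: mapping never changes
print([m.step(x) for x in (1, 1, 1)])         # [3, 4, 5]: mapping drifts with history
```

A purely functional formalism captures the first directly; the second
needs state that persists between interactions, which is the crux of
the PTM argument as I read it.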

I would be satisfied to say that PTMs can do things that the lambda
calculus can't, and that the theory of how to do those things is
important for AI. If you don't want to alter the concept of
computability as it

Re: [agi] Re: Superseding the TM [Was: How valuable is Solmononoff...]

2007-11-09 Thread Bryan Bishop
On Friday 09 November 2007 20:01, John G. Rose wrote:
 I already have some basics of merging OOP and Group/Category Theory.
 Am working on some ideas on jamming, or I should say intertwining
 automata in that. The complexity integration still trying to figure
 out... trying to stay as far from uncomputable as possible :)

But it sounds like you have only vague ideas of how to add complexity
and so on. How do you actually program if you are operating at such an
abstract level?*

* It would be interesting to do abstract programming and claim that
others in the future will be able to apply it; let them implement it.

- Bryan

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=63788500-031ff7


RE: [agi] Re: Superseding the TM [Was: How valuable is Solmononoff...]

2007-11-09 Thread John G. Rose
 From: Bryan Bishop [mailto:[EMAIL PROTECTED]
  I already have some basics of merging OOP and Group/Category Theory.
  Am working on some ideas on jamming, or I should say intertwining
  automata in that. The complexity integration still trying to figure
  out... trying to stay as far from uncomputable as possible :)
 
 But it sounds like you have only vague ideas of how to add complexity
 and so on. How do you actually program if you are operating at such an
 abstract level?*
 
 * It would be interesting to do abstract programming and claim that
 others in the future will be able to apply it; let them implement it.
 

Just a centre-out engineering approach, simple to complex, with abstract
representation. Automata integration will probably be the key to
complexity programming; those darn CAs. But there is something bad or
wrong with CA representation, and I don't know what it is...

John
