Loosemore said:
"But now ... suppose, ... that there do not exist ANY 3-sex cellular 
automata in which there are emergent patterns equivalent to the glider 
and glider gun.  ...Conway ... can search through the entire space of 
3-sex automata..., and he will never build a  system that satisfies his 
requirement.

This is the boxed-in corner that I am talking about.  We decide that 
intelligence must be built with some choice of logical formalism, plus 
heuristics, and we assume that we can always keep jiggling the 
heuristics until the system as a whole shows a significant degree of 
intelligence.  But there is nothing in the world that says that this is 
possible.

...mathematics cannot possibly tell you that this part of the space does 
not contain any solutions.  That is the whole point of complex systems, 
n'est-ce pas?  No analysis will let you know what the global properties are 
without doing a brute force exploration of (simulations of) the system."

------------------------------------------------------------------------
But we can invent a 'mathematics' - a program - that can.  By understanding 
that a model is not perfect, and by recognizing that references may not mesh 
perfectly, a program can imagine other possibilities, and those possibilities 
can be based on complex interrelations built between feasible strands.  
Approximations do not need to be limited to weighted expressions, general 
vagueness, or the like.  From that point it is just a matter of devising a 
'mathematical' - that is, a programmed - system to discover actual 
feasibilities.  The Game of Life did not solve the contemporary problem of AI 
because it was biased toward producing a single chain of progression: it threw 
away the results that did not yield an immediate payoff but might have fit 
into other developments, and it did not explore the relative-reduction space.  
Reconciling the study of possible splices of previously seen chains of 
products with empirical feasibility may be an open-ended process, but it could 
still be governed by a program.  It may be AI-complete, but the subtasks 
needed to run a search from imaginative feasibility to empirical feasibility 
can be governed by logic (even though it would be an open-ended, AI-complete 
search).
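The "brute force exploration of (simulations of) the system" that Richard 
describes below can be made concrete.  Here is a minimal sketch (plain 
two-state Game of Life, my own illustration, not the search proposed above): 
it verifies purely by simulation that the glider recurs, shifted diagonally, 
after four generations - a global fact that the local rules do not announce.

```python
from collections import Counter

def step(live):
    """One Game of Life generation over a set of (row, col) live cells."""
    # Count how many live neighbours every cell has.
    counts = Counter((r + dr, c + dc)
                     for (r, c) in live
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The standard glider.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

state = glider
for _ in range(4):
    state = step(state)

# After 4 generations the glider reappears translated by (1, 1):
# the only way to learn this is to run the system.
assert state == {(r + 1, c + 1) for (r, c) in glider}
```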

I agree with what you are saying in the broader sense, but I do believe that 
the research problem could be governed by a logical system, although it would 
require a great many resources to search the Cantorian diagonal-infinities 
space of possible arrangements of relative reductions.  By 'relative 
reduction' I mean that, in order to discover the nature of certain 
mathematical problems, we usually have to use reductionism to discover all of 
the salient features needed to create a mathematical algorithm that produces 
the desired range of outputs.  But the system of reductionist methods has to 
be relative to the features of the system: a set of elements cannot be taken 
for granted; you have to discover the pseudo-elements (or relative elements) 
of the system relative to the features of the problem.
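To illustrate what an exhaustive search over a rule space looks like when the 
space is small enough to finish, here is a sketch using elementary 
one-dimensional cellular automata - my stand-in for the '3-sex' space, not 
anything actually proposed in this thread.  It enumerates all 256 two-state, 
radius-1 rules and tests each one, by brute-force simulation, for a 
'glider-like' global property: a single live cell that recurs shifted.

```python
def step(cells, rule):
    """Apply an elementary CA rule once on a cyclic row of 0/1 cells."""
    n = len(cells)
    # Wolfram coding: the bit at position (left*4 + centre*2 + right)
    # of the rule number gives the cell's next state.
    return tuple(
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    )

def has_travelling_pattern(rule, width=16, horizon=64):
    """Does a single live cell ever recur shifted (glider-like motion)?"""
    start = tuple(1 if i == width // 2 else 0 for i in range(width))
    state = start
    for _ in range(horizon):
        state = step(state, rule)
        for shift in range(1, width):   # shift 0 (standing still) excluded
            if state == start[-shift:] + start[:-shift]:
                return True
    return False

# Brute-force exploration of the entire 256-rule space.
movers = [r for r in range(256) if has_travelling_pattern(r)]

# Pure shift rules such as 170 and 240 qualify; rule 0 (everything
# dies) and the identity rule 204 do not.
```

Nothing short of running the simulations tells you which rules have the 
property; that is the point - and the 3-state analogue of this space is 
already astronomically larger.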

Jim Bromer





----- Original Message ----
From: Richard Loosemore <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, June 24, 2008 9:02:31 PM
Subject: Re: [agi] Approximations of Knowledge

Abram Demski wrote:
>>> I'm still not really satisfied, though, because I would personally
>>> stop at the stage when the heuristic started to get messy, and say,
>>> "The problem is starting to become AI-complete, so at this point I
>>> should include a meta-level search to find a good heuristic for me,
>>> rather than trying to hard-code one..."
>> And at that point, your lab and my lab are essentially starting to do
>> the same thing.  You need to start searching the space of possible
>> heuristics in a systematic way, rather than just pick a hunch and go
>> with it.
>>
>> The problem, though, is that you might already have gotten yourself into
>> a You Can't Get There By Starting From Here situation.  Suppose your
>> choice of basic logical formalism, and knowledge representation format
>> (and the knowledge acquisition methods that MUST come along with that
>> formalism) has boxed you into a corner in which there does not exist any
>> choice of heuristic control mechanism that will get your system up into
>> human-level intelligence territory?
> 
> If the underlying search space was sufficiently general, we are OK,
> there is no way to get boxed in except by the heuristic.

Wait:  we are not talking about the same thing here.

Analogous situation.  Imagine that John Horton Conway is trying to 
invent a cellular automaton with particular characteristics - say, he 
has already decided that the basic rules MUST show the global 
characteristic of having a thing like a glider and a thing like a glider 
gun.  (This is equivalent to us saying that we want to build a system 
that has the particular characteristics that we colloquially call 
'intelligence', and we will do it with a system that is complex).

But now Conway boxes himself into a corner:  he decides, a priori, that 
the cellular automaton MUST have three sexes, instead of the two sexes 
that we are familiar with in Game of Life.  So three states for every 
cell.  But now (we will suppose, for the sake of the argument), it just 
happens to be the case that there do not exist ANY 3-sex cellular 
automata in which there are emergent patterns equivalent to the glider 
and glider gun.  Now, alas, Conway is up poop creek without an 
instrument of propulsion - he can search through the entire space of 
3-sex automata until the end of the universe, and he will never build a 
system that satisfies his requirement.

This is the boxed-in corner that I am talking about.  We decide that 
intelligence must be built with some choice of logical formalism, plus 
heuristics, and we assume that we can always keep jiggling the 
heuristics until the system as a whole shows a significant degree of 
intelligence.  But there is nothing in the world that says that this is 
possible.  We could be in exactly the same system as our hypothetical 
Conway, trying to find a solution in a part of the space of all possible 
systems in which there do not exist any solutions.

The real killer is that, unlike the example you mention below, 
mathematics cannot possibly tell you that this part of the space does 
not contain any solutions.  That is the whole point of complex systems, 
n'est-ce pas?  No analysis will let you know what the global properties are 
without doing a brute force exploration of (simulations of) the system.


Richard Loosemore



> This is what the mathematics is good for. An experiment, I think, will
> not tell you this, since a formalism can cover almost everything but
> not everything. For example, is a given notation for functions
> Turing-complete, or merely primitive recursive? Primitive recursion is
> amazingly expressive, so I think it would be easy to be fooled. But a
> proof of Turing-completeness will suffice.





-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: http://www.listbox.com/member/?&;
Powered by Listbox: http://www.listbox.com



      

