Re: [agi] Approximations of Knowledge

2008-06-25 Thread Jim Bromer
Loosemore said:
"But now ... suppose, ... that there do not exist ANY 3-sex cellular 
automata in which there are emergent patterns equivalent to the glider 
and glider gun.  ...Conway ... can search through the entire space of 
3-sex automata..., and he will never build a  system that satisfies his 
requirement.

This is the boxed-in corner that I am talking about.  We decide that 
intelligence must be built with some choice of logical formalism, plus 
heuristics, and we assume that we can always keep jiggling the 
heuristics until the system as a whole shows a significant degree of 
intelligence.  But there is nothing in the world that says that this is 
possible.

...mathematics cannot possibly tell you that this part of the space does 
not contain any solutions.  That is the whole point of complex systems, 
n'est-ce pas?  No analysis will let you know what the global properties are 
without doing a brute force exploration of (simulations of) the system."


But we can invent a 'mathematics' or a program that can.  By understanding that 
a model is not perfect, and by recognizing that references may not mesh 
perfectly, a program can imagine other possibilities, and these possibilities 
can be based on complex interrelations built between feasible strands.  
Approximations do not need to be limited to weighted expressions, general 
vagueness, or something like that.  From this point it is just a matter of 
devising a 'mathematical' - that is, a programmed - system to discover actual 
feasibilities.  The Game of Life did not solve the contemporary problem of AI 
because it was biased toward creating a chain of progression, and it wasted the 
memory of those results that did not produce an immediate payoff but might have 
fit into other developments.  And it did not explore the relative reduction 
space.  The reconciliation between the study of possible splices of previously 
seen chains of products and empirical feasibility may be an open-ended process, 
but it could be governed by a program.  It may be AI-complete, but the subtasks 
needed to run a search from imaginative feasibility to empirical feasibility 
can be governed by logic (even though it would be an open-ended, AI-complete 
search).

I agree with what you are saying in the broader sense, but I do believe that 
the research problem could be governed by a logical system, although it would 
require a great many resources to search the Cantorian-diagonal-infinities 
space of possible arrangements of relative reductions.  Relative reduction 
means that in order to discover the nature of certain mathematical problems we 
may (usually) have to use reductionism to discover all of the salient features 
that would be necessary to create a mathematical algorithm to produce the range 
of desired outputs.  But the system of reductionist methods has to be relative 
to the features of the system: a set of elements cannot be taken for granted; 
you have to discover the pseudo-elements (or relative elements) of the system 
relative to the features of the problem.

Jim Bromer





- Original Message 
From: Richard Loosemore <[EMAIL PROTECTED]>
To: agi@v2.listbox.com
Sent: Tuesday, June 24, 2008 9:02:31 PM
Subject: Re: [agi] Approximations of Knowledge

Abram Demski wrote:
>>> I'm still not really satisfied, though, because I would personally
>>> stop at the stage when the heuristic started to get messy, and say,
>>> "The problem is starting to become AI-complete, so at this point I
>>> should include a meta-level search to find a good heuristic for me,
>>> rather than trying to hard-code one..."
>> And at that point, your lab and my lab are essentially starting to do
>> the same thing.  You need to start searching the space of possible
>> heuristics in a systematic way, rather than just pick a hunch and go
>> with it.
>>
>> The problem, though, is that you might already have gotten yourself into
>> a You Can't Get There By Starting From Here situation.  Suppose your
>> choice of basic logical formalism, and knowledge representation format
>> (and the knowledge acquisition methods that MUST come along with that
>> formalism) has boxed you into a corner in which there does not exist any
>> choice of heuristic control mechanism that will get your system up into
>> human-level intelligence territory?
> 
> If the underlying search space was sufficiently general, we are OK,
> there is no way to get boxed in except by the heuristic.

Wait:  we are not talking about the same thing here.

Analogous situation.  Imagine that John Horton Conway is trying to 
invent a cellular automaton with particular characteristics - say, he 
has already decided that the basic rules MUST show the global 
characteristic of having a thing like a glider and a thing like a glider 
gun.  (This is equivalent to us saying that we want to build a system 
that has the particular characteristics that we colloquially call 
'intelligence', and we will do it with a system that is complex).

Re: [agi] Approximations of Knowledge

2008-06-25 Thread Richard Loosemore

Jim Bromer wrote:

Loosemore said: "But now ... suppose, ... that there do not exist ANY
3-sex cellular automata in which there are emergent patterns
equivalent to the glider and glider gun.  ...Conway ... can search
through the entire space of 3-sex automata..., and he will never
build a  system that satisfies his requirement.

This is the boxed-in corner that I am talking about.  We decide that
 intelligence must be built with some choice of logical formalism,
plus heuristics, and we assume that we can always keep jiggling the 
heuristics until the system as a whole shows a significant degree of

 intelligence.  But there is nothing in the world that says that this
is possible.

...mathematics cannot possibly tell you that this part of the space
does not contain any solutions.  That is the whole point of complex
systems, n'est-ce pas?  No analysis will let you know what the global
properties are without doing a brute force exploration of
(simulations of) the system."



>

But we can invent a 'mathematics' or a program that can.  By understanding 
that a model is not perfect, and by recognizing that references may not mesh 
perfectly, a program can imagine other possibilities, and these possibilities 
can be based on complex interrelations built between feasible strands.  
Approximations do not need to be limited to weighted expressions, general 
vagueness, or something like that.  From this point it is just a matter of 
devising a 'mathematical' - that is, a programmed - system to discover actual 
feasibilities.  The Game of Life did not solve the contemporary problem of AI 
because it was biased toward creating a chain of progression, and it wasted 
the memory of those results that did not produce an immediate payoff but might 
have fit into other developments.  And it did not explore the relative 
reduction space.  The reconciliation between the study of possible splices of 
previously seen chains of products and empirical feasibility may be an 
open-ended process, but it could be governed by a program.  It may be 
AI-complete, but the subtasks needed to run a search from imaginative 
feasibility to empirical feasibility can be governed by logic (even though it 
would be an open-ended, AI-complete search).

>

I agree with what you are saying in the broader sense, but I do believe that 
the research problem could be governed by a logical system, although it would 
require a great many resources to search the Cantorian-diagonal-infinities 
space of possible arrangements of relative reductions.  Relative reduction 
means that in order to discover the nature of certain mathematical problems we 
may (usually) have to use reductionism to discover all of the salient features 
that would be necessary to create a mathematical algorithm to produce the 
range of desired outputs.  But the system of reductionist methods has to be 
relative to the features of the system: a set of elements cannot be taken for 
granted; you have to discover the pseudo-elements (or relative elements) of 
the system relative to the features of the problem.


Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the argument I 
am presenting, because your very first sentence... "But we can invent a 
'mathematics' or a program that can" is just completely false.  In a 
complex system it is not possible to use analytic mathematics to 
predict the global behavior of the system given only the rules that 
determine the local mechanisms.  That is the very definition of a 
complex system (note:  this is a "complex system" in the technical sense 
of that term, which does not mean a "complicated system" in ordinary 
language).




Richard Loosemore









Re: [agi] Approximations of Knowledge

2008-06-25 Thread Abram Demski
It seems as if we are beginning to talk past each other. I think the
problem may be that we have different implicit conceptions of the sort
of AI being constructed. My implicit conception is that of an
optimization problem. The AI is given the challenge of formulating the
best response to its input that it can muster within real-world time
constraints. This is in some sense always a search problem; it just might
be "all heuristic", so that it doesn't look much like a search. In
designing an AI, I am implicitly assuming that we have some exact
definition of intelligence, so that we know what we are looking for.
This makes the optimization problem well-defined: the search space is
that of all possible responses to the input, and the utility function
is our definition of intelligence. *Our* problem is to find (1)
efficient optimal search strategies, and where that fails, (2) good
heuristics.
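
To make that framing concrete, here is a rough sketch of what I mean (the
utility function and the candidate generator below are toy placeholders of my
own, not a claim about any real design):

import random
import time

def respond(input_data, utility, candidates, time_budget=1.0):
    # Anytime search: return the best-scoring response found within the budget.
    # utility(input, response) plays the role of the assumed exact definition
    # of intelligence; candidates(input) yields possible responses.
    deadline = time.time() + time_budget
    best, best_score = None, float("-inf")
    for response in candidates(input_data):
        score = utility(input_data, response)
        if score > best_score:
            best, best_score = response, score
        if time.time() >= deadline:
            break
    return best

# Toy usage: the "input" is a target number, responses are guesses, and the
# utility rewards closeness.  A heuristic would simply order the candidates so
# that good guesses come early in the stream.
print(respond(42,
              utility=lambda t, r: -abs(t - r),
              candidates=lambda t: random.sample(range(100), 100)))

A good heuristic, in this picture, is whatever makes the candidate stream
front-loaded with high-utility responses before the time budget runs out.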

I'll admit that the general Conway analogy applies, because we are
looking for heuristics with the property of giving good answers most
of the time, and the math is sufficiently complicated as to be
intractable in most cases. But your more recent variation, where
Conway goes amiss, does not seem to be analogous?
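
For what it's worth, the "brute force exploration of (simulations of) the
system" is easy enough to write down for a toy rule space.  The sketch below
uses 1-D two-state elementary automata rather than anything like a 3-state
Life, purely as an illustration: the only way it learns which of the 256 rules
carry a "glider-like" translating pattern is to simulate every one and look.

def step(cells, rule):
    # One update of a 1-D elementary cellular automaton on a ring.
    n = len(cells)
    return tuple((rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 +
                           cells[(i + 1) % n])) & 1
                 for i in range(n))

def has_moving_pattern(rule, width=16, steps=32):
    # Does a lone live cell turn into a pattern that translates around the ring?
    start = tuple(1 if i == width // 2 else 0 for i in range(width))
    cells = start
    for _ in range(steps):
        cells = step(cells, rule)
        if sum(cells) == 0:
            return False                      # the pattern died out
        for shift in range(1, width):         # any nonzero cyclic shift of start?
            if cells == start[shift:] + start[:shift]:
                return True
    return False

movers = [r for r in range(256) if has_moving_pattern(r)]
print(len(movers), "of 256 rules translate the lone cell, e.g. rules", movers[:5])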

On Tue, Jun 24, 2008 at 9:02 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Abram Demski wrote:

 I'm still not really satisfied, though, because I would personally
 stop at the stage when the heuristic started to get messy, and say,
 "The problem is starting to become AI-complete, so at this point I
 should include a meta-level search to find a good heuristic for me,
 rather than trying to hard-code one..."
>>>
>>> And at that point, your lab and my lab are essentially starting to do
>>> the same thing.  You need to start searching the space of possible
>>> heuristics in a systematic way, rather than just pick a hunch and go
>>> with it.
>>>
>>> The problem, though, is that you might already have gotten yourself into
>>> a You Can't Get There By Starting From Here situation.  Suppose your
>>> choice of basic logical formalism, and knowledge representation format
>>> (and the knowledge acquisition methods that MUST come along with that
>>> formalism) has boxed you into a corner in which there does not exist any
>>> choice of heuristic control mechanism that will get your system up into
>>> human-level intelligence territory?
>>
>> If the underlying search space was sufficiently general, we are OK,
>> there is no way to get boxed in except by the heuristic.
>
> Wait:  we are not talking about the same thing here.
>
> Analogous situation.  Imagine that John Horton Conway is trying to invent a
> cellular automaton with particular characteristics - say, he has already
> decided that the basic rules MUST show the global characteristic of having a
> thing like a glider and a thing like a glider gun.  (This is equivalent to
> us saying that we want to build a system that has the particular
> characteristics that we colloquially call 'intelligence', and we will do it
> with a system that is complex).
>
> But now Conway boxes himself into a corner:  he decides, a priori, that the
> cellular automaton MUST have three sexes, instead of the two sexes that we
> are familiar with in Game of Life.  So three states for every cell.  But now
> (we will suppose, for the sake of the argument), it just happens to be the
> case that there do not exist ANY 3-sex cellular automata in which there are
> emergent patterns equivalent to the glider and glider gun.  Now, alas,
> Conway is up poop creek without an instrument of propulsion - he can search
> through the entire space of 3-sex automata until the end of the universe,
> and he will never build a system that satisfies his requirement.
>
> This is the boxed-in corner that I am talking about.  We decide that
> intelligence must be built with some choice of logical formalism, plus
> heuristics, and we assume that we can always keep jiggling the heuristics
> until the system as a whole shows a significant degree of intelligence.  But
> there is nothing in the world that says that this is possible.  We could be
> in exactly the same system as our hypothetical Conway, trying to find a
> solution in a part of the space of all possible systems in which there do
> not exist any solutions.
>
> The real killer is that, unlike the example you mention below, mathematics
> cannot possibly tell you that this part of the space does not contain any
> solutions.  That is the whole point of complex systems, n'est-ce pas?  No
> analysis will let you know what the global properties are without doing a
> brute force exploration of (simulations of) the system.
>
>
> Richard Loosemore
>
>
>
>> This is what the mathematics is good for. An experiment, I think, will
>> not tell you this, since a formalism can cover almost everything but
>> not everything. For example, is a given notation for functions
>> Turing-complete, or merely primitive recursive? Primitive recursion is
>> amazingly e

RE: [agi] Approximations of Knowledge

2008-06-25 Thread Derek Zahn
Richard,
 
If I can make a guess at where Jim is coming from:
 
Clearly, "intelligent systems" CAN be produced.  Assuming we can define 
"intelligent system" well enough to recognize it, we can generate systems at 
random until one is found.  That is impractical, however.  So, we can look at 
the problem as one of search optimization.  Evolution produced intelligent 
systems through a biased search, for example, so it is at least possible to 
improve search over completely random generate and test.
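
As a toy illustration of why the purely random version is impractical (the
recognizer below is a stand-in of mine, since actually defining it is the real
problem), even a 16-bit "space of systems" takes tens of thousands of blind
tries, and the cost doubles with every added bit:

import itertools
import random

BITS = 16   # even this toy scales as 2**BITS; a real space of systems is vastly larger

def random_system():
    return tuple(random.randint(0, 1) for _ in range(BITS))

def recognise(system):
    # Stand-in for "define 'intelligent system' well enough to recognize it":
    # an arbitrary fixed target plays the role of the real (much harder) test.
    return system == (1,) * BITS

tries = next(t for t in itertools.count(1) if recognise(random_system()))
print("blind generate-and-test succeeded after", tries, "random systems")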
 
What other ways can be used to speed up search?  Jim is suggesting some methods 
that he believes may help.  If I understand what you've said about your 
approach, you have some very different methods than what he is proposing to 
focus the search.  I do not understand exactly what Jim is proposing; 
presumably he is aiming to use his SAT solver to guide the search toward areas 
that contain partial solutions or promising partial models of some sort.
 
It seems to me very difficult to define the goal formally, very difficult to 
develop a meta system in which a sufficiently broad class of candidate systems 
can be expressed, and very difficult to describe the "splices" or "reductions" 
or partial models in such a way as to smooth the fitness landscape and thus speed 
up search.  So I don't know how practical such a plan is.
 
But (again assuming I understand Jim's approach) it avoids your complex system 
arguments because it is not making any effort to predict global behavior from 
the low-level system components, it's just searching through possibilities. 
 




Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-25 Thread Abram Demski
On Sun, Jun 22, 2008 at 10:12 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> I find the absence of such models troubling. One problem is that there are no 
> provably hard problems. Problems like tic-tac-toe and chess are known to be 
> easy, in the sense that they can be fully analyzed with sufficient computing 
> power. (Perfect chess is O(1) using a giant lookup table). At that point, the 
> next generation would have to switch to a harder problem that was not 
> considered in the original design. Thus, the design is not friendly.

Would the halting problem qualify?




Re: [agi] Approximations of Knowledge

2008-06-25 Thread Steve Richfield
Jim,

On 6/24/08, Jim Bromer <[EMAIL PROTECTED]> wrote:
>
>  Although I do suffer from an assortment of biases, I would not get
> worried to see any black man walking behind me at night.  For example, if I
> saw Andrew Young or Bill Cosby walking behind me I don't think I would be
> too worried.
>

However, you would have to look very carefully to identify these people with
confidence. Why would you bother to look so carefully? Obviously, because of
some sense of alarm.

  Or, if I was walking out of a campus library and a young black man
> carrying some books was walking behind me,
>

Again, you would have to look carefully enough to verify age, and that the
books weren't bound together with a belt or rope so they could be used as a
weapon. Again, why would you bother to look so carefully? Obviously again,
because of some sense of alarm.

  I would not be too worried about that either.
>

OK, so you have eliminated ~1% of the cases. How about the other 99% of the
cases?

  Your statement was way over the line, and it showed some really bad
> judgment.
>

Apparently you don't follow the news very well. My statement was
an obvious paraphrase from a fairly recent statement made by Rev Jesse
Jackson, who says that HE gets worried when a black man is walking behind
him. Perhaps I should have attributed my statement for those who
don't follow the news. I think that if he gets worried, the rest of us
should also pay some attention.

However, your comment broadly dismissing what I said (reason for possible
alarm) based on some narrow possible exceptions (which would only be
carefully verified *BECAUSE* of such alarm) does indeed show that your
thinking is quite clouded and wound around the axle of PC (Political
Correctness), and hence we shouldn't be expecting any new ideas from you
anytime soon.

The message here that you will probably still completely miss, but which
hopefully other readers here will "get", is that even bright people like you
are UNABLE to program AGIs, or to state non-dangerous goals, or even to
recognize obvious dangers. The whole concept of human guidance is SO deeply
flawed that I see no hope of it ever working in any useful way. Not in this
century or the next.

Again, for the umpteenth time, has ANYONE here bothered yet to read the REST
of the Colossus trilogy that started with *The Forbin Project* movie? If we
are going to rehash issues that have already been written about, it would
sure be nice to fast-forward over past writings.

Steve Richfield
=

>   - Original Message 
> From: Steve Richfield <[EMAIL PROTECTED]>
> To: agi@v2.listbox.com
> Sent: Monday, June 23, 2008 10:53:07 PM
> Subject: Re: [agi] Approximations of Knowledge
>
> Andy,
>
> This is a PERFECT post, because it so perfectly illustrates a particular
> point of detachment from reality that is common among AGIers. In the real
> world we do certain things to achieve a good result, but when we design
> politically correct AGIs, we banish the very logic that allows us to
> function. For example, if you see a black man walking behind you at night,
> you rightly worry, but if you include that in your AGI design, you would be
> dismissed as a racist.
>
> Effectively solving VERY VERY difficult problems, like why a particular
> corporation is failing after other experts have failed, is a multiple-step
> process that starts with narrowing down the vast field of possibilities. As
> others have already pointed out here, this is often done in a rather summary
> and non-probabilistic way. Perhaps all of the really successful programmers
> that you have known have had long hair, so if the programming is failing and
> the programmer has short hair, then maybe there is an attitude issue to look
> into. Of course this does NOT necessarily mean that there is any linkage at
> all - just another of many points to focus some attention to.
>
> Similarly, over the course of >100 projects I have developed a long list of
> "rules" that help me find the problems with a tractable amount of effort.
> No, I don't usually tell others my poorly-formed rules because they prove
> absolutely NOTHING, only focus further effort. I have a special assortment
> of rules to apply whenever God is mentioned. After all, not everyone thinks
> that God has the same motivations, so SOME approach is needed to "paradigm
> shift" one person's statements to be able to be understood by another
> person. The posting you responded to was expressing one such rule. That
> having been said...
>
> On 6/22/08, J. Andrew Rogers <[EMAIL PROTECTED]> wrote:
>>
>>
>> Somewhere in the world, there is a PhD chemist and a born-again Christian
>> on another mailing list "...the project had hit a serious snag, and so the
>> investors brought in a consultant that would explain why the project was
>> broken by defectively reasoning about dubious generalizations he pulled out
>> of his ass..."
>
>
> Of course I don't make any such (I freely admit to dubious) generalizations ...

Re: [agi] As we inch closer to The Singularity...

2008-06-25 Thread Steve Richfield
Brad, et al,

On 6/24/08, Brad Paulsen <[EMAIL PROTECTED]> wrote:
>
> Hey Gang...
>
> RESEARCHERS DEVELOP NEURAL IMPLANT THAT LEARNS WITH THE BRAIN
> http://www.physorg.com/news133535377.html
>
> I wonder what *that* software looks like!


Does anyone here live near the University of Florida? If so, you might drop
in and request a copy of the software and post it here.

Steve Richfield





Re: [agi] Approximations of Knowledge

2008-06-25 Thread Richard Loosemore

Abram Demski wrote:

It seems as if we are beginning to talk past each other. I think the
problem may be that we have different implicit conceptions of the sort
of AI being constructed. My implicit conception is that of an
optimization problem. The AI is given the challenge of formulating the
best response to its input that it can muster within real-world time
constraints. This is in some sense always a search problem; it just might
be "all heuristic", so that it doesn't look much like a search. In
designing an AI, I am implicitly assuming that we have some exact
definition of intelligence, so that we know what we are looking for.
This makes the optimization problem well-defined: the search space is
that of all possible responses to the input, and the utility function
is our definition of intelligence. *Our* problem is to find (1)
efficient optimal search strategies, and where that fails, (2) good
heuristics.

I'll admit that the general Conway analogy applies, because we are
looking for heuristics with the property of giving good answers most
of the time, and the math is sufficiently complicated as to be
intractable in most cases. But your more recent variation, where
Conway goes amiss, does not seem to be analogous?


The confusion in our discussion has to do with the assumption you listed 
above:  "...I am implicitly assuming that we have some exact definition 
of intelligence, so that we know what we are looking for..."


This is precisely what we do not have, and which we will quite possibly 
never have.


The reason?  If existing intelligent systems are complex systems, then 
when we look at one of them and say "That is my example of what is meant 
by 'intelligence'", we are pointing at a global property of a complex 
system.  If anyone thinks that the intelligence of existing intelligent 
systems is completely independent of all complex global properties of 
the system, the ball is in their court:  they must somehow show good 
reason for us to believe that this is the case - and so far in the 
history of philosophy, psychology and AI, nobody has ever come close to 
showing such a thing.  In other words, nobody can give a non-circular, 
practical definition that is demonstrably identical to the definition of 
intelligence in natural systems.  All the evidence (the tangled nature 
of the mechanisms that appear to be necessary to build an intelligence) 
points to the fact that intelligence is likely to be a complex global 
property.


Now, if intelligence *is* a global property of a complex system, it will 
not be possible to simply write down a clear definition of it and then 
optimize.  That is the point of the Conway analogy:  we would be in the 
same boat that he was.


So, in a way, when you wrote down that assumption, what you did was 
implicitly assert that human-level intelligence can definitely be 
achieved without needing to do it with a system that is complex.  That 
is an extremely strong assertion, and unfortunately there is no evidence 
(except the intuition of some people) that this is a valid assumption. 
Quite the contrary, all the evidence appears to point the other way.


So that one statement is really the crunch point.  All the rest is 
downhill from that point on.



Richard Loosemore





On Tue, Jun 24, 2008 at 9:02 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Abram Demski wrote:

I'm still not really satisfied, though, because I would personally
stop at the stage when the heuristic started to get messy, and say,
"The problem is starting to become AI-complete, so at this point I
should include a meta-level search to find a good heuristic for me,
rather than trying to hard-code one..."

And at that point, your lab and my lab are essentially starting to do
the same thing.  You need to start searching the space of possible
heuristics in a systematic way, rather than just pick a hunch and go
with it.

The problem, though, is that you might already have gotten yourself into
a You Can't Get There By Starting From Here situation.  Suppose your
choice of basic logical formalism, and knowledge representation format
(and the knowledge acquisition methods that MUST come along with that
formalism) has boxed you into a corner in which there does not exist any
choice of heuristic control mechanism that will get your system up into
human-level intelligence territory?

If the underlying search space was sufficiently general, we are OK,
there is no way to get boxed in except by the heuristic.

Wait:  we are not talking about the same thing here.

Analogous situation.  Imagine that John Horton Conway is trying to invent a
cellular automaton with particular characteristics - say, he has already
decided that the basic rules MUST show the global characteristic of having a
thing like a glider and a thing like a glider gun.  (This is equivalent to
us saying that we want to build a system that has the particular
characteristics that we colloquially call 'intelligence', and we will do it
with a system that is complex).

Re: [agi] Equivalent of the bulletin for atomic scientists or CRN for AI?

2008-06-25 Thread Matt Mahoney
--- On Wed, 6/25/08, Abram Demski <[EMAIL PROTECTED]> wrote:

> On Sun, Jun 22, 2008 at 10:12 PM, Matt Mahoney
> <[EMAIL PROTECTED]> wrote:
> 
> > I find the absence of such models troubling. One
> problem is that there are no provably hard problems.
> Problems like tic-tac-toe and chess are known to be easy,
> in the sense that they can be fully analyzed with
> sufficient computing power. (Perfect chess is O(1) using a
> giant lookup table). At that point, the next generation
> would have to switch to a harder problem that was not
> considered in the original design. Thus, the design is not
> friendly.
> 
> Would the halting problem qualify?

No, many programs can be easily proven to halt or not halt. The parent has to 
choose from the small subset of problems that are hard to solve, and we don't 
know how to provably do that. As each generation makes advances, the set of 
hard problems gets smaller.
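
A trivial illustration (not a proof of anything): the first loop below is
easily proven to halt because its argument strictly decreases, while for the
second (Collatz-style) loop nobody knows how to prove termination for all
inputs, so we cannot even certify it as a reliably hard instance.

def obviously_halts(n):
    # Termination is trivially provable: n strictly decreases to 0.
    while n > 0:
        n -= 1
    return True

def collatz_halts(n):
    # Empirically this terminates for every n ever tried, but no proof is known
    # (the Collatz conjecture), so its difficulty is itself an open question.
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return True

print(obviously_halts(10**6), collatz_halts(27))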

Cryptographers have a great interest in finding problems that are hard to 
solve, but the best we can do to test any cryptosystem is to let lots of people 
try to break it, and if nobody succeeds for a long time, pronounce it secure. 
But breaks still happen.

It seems to be a general problem. Knowing that a problem is hard requires as 
much intelligence as solving the problems.

-- Matt Mahoney, [EMAIL PROTECTED]





Re: [agi] Approximations of Knowledge

2008-06-25 Thread Richard Loosemore

Derek Zahn wrote:

Richard,

If I can make a guess at where Jim is coming from:

Clearly, "intelligent systems" CAN be produced.  Assuming we can
define "intelligent system" well enough to recognize it, we can
generate systems at random until one is found.  That is impractical,
however.  So, we can look at the problem as one of search
optimization.  Evolution produced intelligent systems through a
biased search, for example, so it is at least possible to improve
search over completely random generate and test.

What other ways can be used to speed up search?  Jim is suggesting
some methods that he believes may help.  If I understand what you've
said about your approach, you have some very different methods than
what he is proposing to focus the search.  I do not understand
exactly what Jim is proposing; presumably he is aiming to use his SAT
solver to guide the search toward areas that contain partial
solutions or promising partial models of some sort.

It seems to me very difficult to define the goal formally, very
difficult to develop a meta system in which a sufficiently broad
class of candidate systems can be expressed, and very difficult to
describe the "splices" or "reductions" or partial models in such a
way as to smooth the fitness landscape and thus speed up search.  So I
don't know how practical such a plan is.

But (again assuming I understand Jim's approach) it avoids your
complex system arguments because it is not making any effort to
predict global behavior from the low-level system components, it's
just searching through possibilities.


I hear what you say here, but the crucial issue is defining this thing 
called intelligence.  And, in the end, that is where the complex systems 
argument makes itself felt (so this is not really avoiding the complex 
systems problem, but just hiding it).


Let me explain these thoughts.  If we really could only "define 
'intelligent system' well enough to recognize it" then the generate and 
test you are talking about would be extremely blind ... we would not 
make any specific design decisions, but generate completely random 
systems and say "Is this one intelligent?" each time we built one.


Clearly, that would be ridiculously slow (as you point out).  Even the 
evolutionary biassed search - in which you build simple systems and 
gradually elaborate them as you test them in combat - would still take a 
few billion years and a planet-sized computer.


But then you introduce the idea of speeding up the search in some way. 
Ahhh... now there's the rub.  To make the search more efficient, you 
have to have some idea of an error function:  you look at the 
intelligence of the current best try, and you feed that into a function 
that suggests what kind of changes in the low-level mechanisms will give 
rise to a *beneficial* change in the overall intelligence (an 
improvement, i.e.).  To do any better than random, you really must have 
an error function this almost the very definition of doing a search 
that is not random, no?  You have to have some idea of how a change in 
design will cause a change in high level behavior, and that is the error 
function.
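
To see what an error function buys you, here is a deliberately trivial sketch
(a made-up bit-string "system" and a cheap score standing in for overall
intelligence).  The whole force of the complex systems argument is that, for a
real complex system, nothing like the global_performance function below is
available to the designer:

import random

SIZE = 40

def global_performance(system):
    # Stand-in for "overall intelligence".  For a genuinely complex system
    # there is, by hypothesis, no cheap analytic score like this to feed back.
    return sum(system)

def error_guided_search(steps=5000):
    best = [random.randint(0, 1) for _ in range(SIZE)]
    for _ in range(steps):
        child = list(best)
        child[random.randrange(SIZE)] ^= 1      # a local change to a low-level mechanism
        if global_performance(child) > global_performance(best):
            best = child                        # the error function says: keep it
    return global_performance(best)

def random_search(steps=5000):
    return max(sum(random.randint(0, 1) for _ in range(SIZE)) for _ in range(steps))

print("error-guided:", error_guided_search(), " blind random:", random_search(),
      "out of a possible", SIZE)

The guided version reliably reaches the maximum; the blind version does not.
Take away the scoring function and the guidance evaporates, which is exactly
the situation I am describing.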


If the system you are talking about is not complex, then, no problem: 
an error function is findable, at least in principle.  But the very 
definition of a complex system is that such an error function cannot 
(absolutely cannot) be found.  You cannot say, "I need to improve the 
overall intelligence, *thus*, and THEREFORE I will make this change in 
the local mechanisms, because I have reason to believe that such a 
global change will be effected by this local change".  That is the one 
statement about a complex system that is verboten.


So it is that one quiet little statement about finding better ways to do 
the search that brings down the whole argument.  If intelligent systems 
can be built without making them complex, all well and good.  But if 
that is not possible (and the evidence indicates that it is not), then 
you must be very careful not to set up a research methodology in which 
you make the assumption that you are going to adjust the low level 
mechanisms in a way that will 'improve' the global performance in a 
desired way.  If anyone does include that implicit assumption in their 
methodology, they are unknowingly inserting an "And Then A Miracle 
Happens Here" step.


I should quickly add one comment about that last paragraph.  AI 
researchers clearly do do exactly what I have just said is impossible! 
They frequently look at the poor performance of an AI system and say "I 
think a change in this mechanism will improve things" ... and then, sure 
enough, they do get an improvement.  So does that mean my argument that 
there is a complex systems problem just wrong?  No:  I have clearly said 
(though many people have missed this point I think) that what AI 
researchers have been doing is implicitly using their understanding of 
human psychology (of their own minds, for the most part) to g

Re: [agi] Approximations of Knowledge

2008-06-25 Thread Jim Bromer



- Original Message 
From: Richard Loosemore

Jim,

I'm sorry:  I cannot make any sense of what you say here.

I don't think you are understanding the technicalities of the argument I 
am presenting, because your very first sentence... "But we can invent a 
'mathematics' or a program that can" is just completely false.  In a 
complex system it is not possible to use analytic mathematics to 
predict the global behavior of the system given only the rules that 
determine the local mechanisms.  That is the very definition of a 
complex system (note:  this is a "complex system" in the technical sense 
of that term, which does not mean a "complicated system" in ordinary 
language).
Richard Loosemore
--
I don't feel that you are seriously interested in discussing the subject with 
me.  Let me know if you ever change your mind.
Jim Bromer








