Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/3 Terren Suydam [EMAIL PROTECTED]:

 --- On Wed, 7/2/08, William Pearson [EMAIL PROTECTED] wrote:
 Evolution! I'm not saying your way can't work, just
 saying why I short
 cut where I do. Note a thing has a purpose if it is useful
 to apply
 the design stance* to it. There are two things to
 differentiate
 between, having a purpose and having some feedback of a
 purpose built
 in to the system.

 I don't believe evolution has a purpose. See Hod Lipson's TED talk for an 
 intriguing experiment in which replication is an inevitable outcome for a 
 system of building blocks explicitly set up in a random fashion. In other 
 words, purpose is emergent and ultimately in the mind of the beholder.

 See this article for an interesting take that increasing complexity is a 
 property of our laws of thermodynamics for non-equilibrium systems:

 http://biology.plosjournals.org/perlserv/?request=get-documentdoi=10.1371/journal.pbio.0050142ct=1

 In other words, Darwinian evolution is a special case of a more basic kind of 
 selection based on the laws of physics. This would deprive evolution of any 
 notion of purpose.


Evolution doesn't have a purpose, it creates things with purpose.
Where purpose means it is useful to apply the design stance on it,
e.g. ask what an eye on a frog is for.

 It is the second I meant, I should have been more specific.
 That is to
 apply the intentional stance to something successfully, I
 think a
 sense of its own purpose is needed to be embedded in that
 entity (this
 may only be a very crude approximation to the purpose we
 might assign
 something looking from an evolution eye view).

 Specifying a system's goals is limiting in the sense that we don't force the 
 agent to construct its own goals based on it own constructions. In other 
 words, this is just a different way of creating an ontology. It narrows the 
 domain of applicability. That may be exactly what you want to do, but for AGI 
 researchers, it is a mistake.

Remember when I said that a purpose is not the same thing as a goal?
The purpose that the system might be said to have embedded is
attempting to maximise a certain signal. This purpose presupposes no
ontology. The fact that this signal is attached to a human means the
system as a whole might form the goal to try and please the human. Or
depending on what the human does it might develop other goals. Goals
are not the same as purposes. Goals require the intentional stance,
purposes the design.

 Also your way we will end up with entities that may not be
 useful to
 us, which I think of as a negative for a long costly
 research program.

  Will

 Usefulness, again, is in the eye of the beholder. What appears not useful 
 today may be absolutely critical to an evolved descendant. This is a popular 
 explanation for how diversity emerges in nature, that a virus or bacteria 
 does some kind of horizontal transfer of its genes into a host genome, and 
 that gene becomes the basis for a future adaptation.

 When William Burroughs said language is a virus, he may have been more 
 correct than he knew. :-]



Possibly, but it will be another huge research topic to actually talk
to the things that evolve in the artificial universe, as they will
share very little background knowledge or ontology with us. I wish you
luck and will be interested to see where you go but the alife route is
just to slow and resource intensive for my liking.

  Will


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 12:59 AM, William Pearson [EMAIL PROTECTED] wrote:
 2008/7/2 Vladimir Nesov [EMAIL PROTECTED]:
 On Wed, Jul 2, 2008 at 9:09 PM, William Pearson [EMAIL PROTECTED] wrote:
 They would get less credit from the human supervisor. Let me expand on
 what I meant about the economic competition. Let us say vmprogram A
 makes a copy of itself, called A', with some purposeful tweaks, trying
 to make itself more efficient.

 So, this process performs optimization, A has a goal that it tries to
 express in form of A'. What is the problem with the algorithm that A
 uses? If this algorithm is stupid (in a technical sense), A' is worse
 than A and we can detect that. But this means that in fact, A' doesn't
 do its job and all the search pressure comes from program B that ranks
 the performance of A or A'. This
 generate-blindly-or-even-stupidly-and-check is a very inefficient
 algorithm. If, on the other hand, A happens to be a good program, then
 A' has a good change of being better than A, and anyway A has some
 understanding of what 'better' means, then what is the role of B? B
 adds almost no additional pressure, almost everything is done by A.

 How do you distribute the optimization pressure between generating
 programs (A) and checking programs (B)? Why do you need to do that at
 all, what is the benefit of generating and checking separately,
 compared to reliably generating from the same point (A alone)? If
 generation is not reliable enough, it probably won't be useful as
 optimization pressure anyway.


 The point of A and A' is that A', if better, may one day completely
 replace A. What is very good? Is 1 in 100 chances of making a mistake
 when generating its successor very good? If you want A' to be able to
 replace A, that is only 100 generations before you have made a bad
 mistake, and then where do you go? You have a bugged program and
 nothing to act as a watchdog.

 Also if A' is better than time A at time t, there is no guarantee that
 it will stay that way. Changes in the environment might favour one
 optimisation over another. If they both do things well, but different
 things then both A and A' might survive in different niches.


 I suggest you read ( http://sl4.org/wiki/KnowabilityOfFAI )
 If your program is a faulty optimizer that can't pump the reliability
 out of its optimization, you are doomed. I assume you argue that you
 don't want to include B in A, because a descendant of A may start to
 fail unexpectedly.

Nope. I don't include B in A because if A' is faulty it can cause
problems to whatever is in the same vmprogram as it, by overwriting
memory locations. A' being a separate vmprogram means it is insulated
from the B and A, and can only have limited impact on them.

I don't get what your obsession is with having things all be in one
program is anyway. Why is that better? I'll read knowability of FAI
again, but I have read it before and I don't think it will enlighten
me. I'll come back to the rest of your email once I have done that.

  Will Pearson


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread Vladimir Nesov
On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:

 Nope. I don't include B in A because if A' is faulty it can cause
 problems to whatever is in the same vmprogram as it, by overwriting
 memory locations. A' being a separate vmprogram means it is insulated
 from the B and A, and can only have limited impact on them.

Why does it need to be THIS faulty? If there is a known method to
prevent such faultiness, it can be reliably implemented in A, so that
all its descendants keep it, unless they are fairly sure it's not
needed anymore or there is a better alternative.

 I don't get what your obsession is with having things all be in one
 program is anyway. Why is that better? I'll read knowability of FAI
 again, but I have read it before and I don't think it will enlighten
 me. I'll come back to the rest of your email once I have done that.

It's not necessarily better, but I'm trying to make explicit in what
sense is it worse, that is what is the contribution of your framework
to the overall problem, if virtually the same thing can be done
without it.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:

 Nope. I don't include B in A because if A' is faulty it can cause
 problems to whatever is in the same vmprogram as it, by overwriting
 memory locations. A' being a separate vmprogram means it is insulated
 from the B and A, and can only have limited impact on them.

 Why does it need to be THIS faulty? If there is a known method to
 prevent such faultiness, it can be reliably implemented in A, so that
 all its descendants keep it, unless they are fairly sure it's not
 needed anymore or there is a better alternative.

Because it is dealing with powerful stuff, when it gets it wrong it
goes wrong powerfully. You could lock the experimental code away in a
sand box inside A, but then it would be a separate program just one
inside A, but it might not be able to interact with programs in a way
that it can do its job.

There are two grades of faultiness. frequency and severity. You cannot
predict the severity of faults of arbitrary programs (and accepting
arbitrary programs from the outside world is something I want the
system to be able to do, after vetting etc).


 I don't get what your obsession is with having things all be in one
 program is anyway. Why is that better? I'll read knowability of FAI
 again, but I have read it before and I don't think it will enlighten
 me. I'll come back to the rest of your email once I have done that.

 It's not necessarily better, but I'm trying to make explicit in what
 sense is it worse, that is what is the contribution of your framework
 to the overall problem, if virtually the same thing can be done
 without it.


I'm not sure why you see this distinction as being important though. I
call the vmprograms separate because they have some protection around
them, but you could see them as all one big program if you wanted. The
instructions don't care whether we call the whole set of operations a
program or not. This, from one point of view, is true at least while
it is being simulated the whole VM is one program inside a larger
system.

  Will Pearson


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread Vladimir Nesov
On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
 2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 10:45 AM, William Pearson [EMAIL PROTECTED] wrote:

 Nope. I don't include B in A because if A' is faulty it can cause
 problems to whatever is in the same vmprogram as it, by overwriting
 memory locations. A' being a separate vmprogram means it is insulated
 from the B and A, and can only have limited impact on them.

 Why does it need to be THIS faulty? If there is a known method to
 prevent such faultiness, it can be reliably implemented in A, so that
 all its descendants keep it, unless they are fairly sure it's not
 needed anymore or there is a better alternative.

 Because it is dealing with powerful stuff, when it gets it wrong it
 goes wrong powerfully. You could lock the experimental code away in a
 sand box inside A, but then it would be a separate program just one
 inside A, but it might not be able to interact with programs in a way
 that it can do its job.

 There are two grades of faultiness. frequency and severity. You cannot
 predict the severity of faults of arbitrary programs (and accepting
 arbitrary programs from the outside world is something I want the
 system to be able to do, after vetting etc).


You can't prove any interesting thing about an arbitrary program. It
can behave like a Friendly AI before February 25, 2317, and like a
Giant Cheesecake AI after that.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread William Pearson
Sorry about the long thread jack

2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
 Because it is dealing with powerful stuff, when it gets it wrong it
 goes wrong powerfully. You could lock the experimental code away in a
 sand box inside A, but then it would be a separate program just one
 inside A, but it might not be able to interact with programs in a way
 that it can do its job.

 There are two grades of faultiness. frequency and severity. You cannot
 predict the severity of faults of arbitrary programs (and accepting
 arbitrary programs from the outside world is something I want the
 system to be able to do, after vetting etc).


 You can't prove any interesting thing about an arbitrary program. It
 can behave like a Friendly AI before February 25, 2317, and like a
 Giant Cheesecake AI after that.

Whoever said you could? The whole system is designed around the
ability to take in or create arbitrary code, give it only minimal
access to other programs that it can earn and lock it out from that
ability when it does something bad.

By arbitrary code I don't mean random, I mean stuff that has not
formally been proven to have the properties you want. Formal proof is
too high a burden to place on things that you want to win. You might
not have the right axioms to prove the changes you want are right.

Instead you can see the internals of the system as a form of
continuous experiments. B is always testing a property of A or  A', if
at any time it stops having the property that B looks for then B flags
it as buggy.

I know this doesn't have the properties you would look for in a
friendly AI set to dominate the world. But I think it is similar to
the way humans work, and will be as chaotic and hard to grok as our
neural structure. So as likely as humans are to explode intelligently.

  Will Pearson


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-03 Thread Richard Loosemore

Ed Porter wrote:

WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

 

Here is an important practical, conceptual problem I am having trouble 
with.


 

In an article entitled “Are Cortical Models Really Bound by the ‘Binding 
Problem’? ” Tomaso Poggio’s group at MIT takes the position that there 
is no need for special mechanisms to deal with the famous “binding 
problem” --- at least in certain contexts, such as 150 msec feed forward 
visual object recognition.  This article implies that a properly 
designed hierarchy of patterns that has both compositional and 
max-pooling layers (I call them “gen/comp hierarchies”) automatically 
handles the problem of what sub-elements are connected with which 
others, preventing the need for techniques like synchrony to handle this 
problem.


 

Poggio’s group has achieved impressive results without the need for 
special mechanisms to deal with binding in this type of visual 
recognition, as is indicated by the two papers below by Serre (the later 
of which summarizes much of what is in the first, which is an excellent, 
detailed PhD thesis.) 

 

The two  works by Geoffrey Hinton cited below are descriptions of 
Hinton’s hierarchical feed-forward neural net recognition system (which, 
when run backwards, generates patterns similar to those it has been 
trained on).  These two works by Hinton show impressive results in 
handwritten digit recognition without any explicit mechanism for 
binding.  In particular, watch the portion of the Hinton YouTube video 
starting at 21:35 - 26:39 where Hinton shows his system alternating 
between recognizing a pattern and then generating a similar pattern 
stochastically from the higher level activations that have resulted from 
the previous recognition.  See how amazingly well his system seems to 
capture the many varied forms in which the various parts and sub-shapes 
of numerical handwritten digits are related.


 

So my question is this: HOW BROADLY DOES THE IMPLICATION THAT THE 
BINDING PROBLEM CAN BE AUTOMATICALLY HANDLED BY A GEN/COMP HIERARCHY OR 
A HINTON-LIKE HIERARCHY APPLY TO THE MANY TYPES OF PROBLEMS A BRAIN 
LEVEL ARTIFICIAL GENERAL INTELLIGENCE WOULD BE EXPECTED TO HANDLE?  In 
particular HOW APPLICABLE IS IT TO SEMANTIC PATTERN RECOGNITION AND 
GENERATION --- WITH ITS COMPLEX AND HIGHLY VARIED RELATIONS --- SUCH AS 
IS COMMONLY INVOLVED IN HUMAN LEVEL NATURAL LANGUAGE UNDERSTANDING AND 
GENERATION?


The answer lies in the confusion over what the binding problem 
actually is.  There are many studies out there that misunderstand the 
problem is such a substantial way that their conclusions are 
meaningless.  I refer, for example, to the seminal paper by Shastri and 
Ajjangadde, which I remember discussing with a colleague (Janet Vousden) 
back in the early 90s.  We both went into that paper in great depth, an 
independently came to the conclusion that S  A had their causality so 
completely screwed up that the paper said nothing at all:  they claimed 
to be able to explain binding by showing that synhcronized firing could 
make it happen, but they completely failed to show how the RELEVANT 
neurons would become synchronized.


Distressingly, the Shastri and Ajjangadde paper then went on to become, 
as I say, seminal, and there has been a lot of research on something 
that these people call the binding problem, but which seems (from my 
limited coverage of that area) to be about getting various things to 
connect using synchronized signals, but without any explanation of how 
the the things that are semantically required to connect, actual connect.


So, to be able to answer your question, you have to be able to 
disentangle that entire mess and become clear what is the real binding 
problem, what is the fake binding problem, and whether the new idea 
makes any difference to one or other of these.


In my opinion, it sounds like Poggio is correct in making the claim that 
he does, but that Janet Vousden and I already understood that general 
point back in 1994, just by using general principles.  And, most 
probably, the solution Poggio refers to DOES apply as well to what you 
are calling the semantic level.


 

The paper “Are Cortical Models Really Bound by the ‘Binding Problem’?”, 
suggests in the first full paragraph on its second page that gen/comp 
hierarchies avoids the “binding problem” by


 

“coding an object through a set of intermediate features made up of 
local arrangements of simpler features [that] sufficiently constrain the 
representation to uniquely code complex objects without retaining global 
positional information.


This is exactly the position that I took a couple of decades ago.  You 
will recall that I am always talking about doing this with CONSTRAINTS, 
and using those constraints at many different levels of the hierarchy.


 


For example, in the context of speech recognition,

 

...rather than using individual letters to code words, letter pairs or 

Re: [agi] WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?

2008-07-03 Thread Abram Demski
In general I agree with Richard Loosemore's reply.

Also, I think that it is not surprising that the approaches referred
to (gen/comp hierarchies, Hinton's hierarchies, hierarchical-temporal
memory, and many similar approaches) become too large if we try to use
them for more than the first few levels of perception. The reason is
not because recursive composition becomes insufficient, but rather
because these systems do not take full advantage of it: they typically
cannot model arbitrary context-free patterns, much less
context-sensitive and beyond. Their computational power is low, so to
compensate the model size becomes large. (It's like trying to
approximate a Turing machine with a finite-state machine: more and
more states are needed, and although the approximation gets better, it
is never enough.)

On Thu, Jul 3, 2008 at 1:41 PM, Richard Loosemore [EMAIL PROTECTED] wrote:
 Ed Porter wrote:

 WHAT PORTION OF CORTICAL PROCESSES ARE BOUND BY THE BINDING PROBLEM?


 Here is an important practical, conceptual problem I am having trouble
 with.


 In an article entitled Are Cortical Models Really Bound by the 'Binding
 Problem'?  Tomaso Poggio's group at MIT takes the position that there is no
 need for special mechanisms to deal with the famous binding problem --- at
 least in certain contexts, such as 150 msec feed forward visual object
 recognition.  This article implies that a properly designed hierarchy of
 patterns that has both compositional and max-pooling layers (I call them
 gen/comp hierarchies) automatically handles the problem of what
 sub-elements are connected with which others, preventing the need for
 techniques like synchrony to handle this problem.


 Poggio's group has achieved impressive results without the need for
 special mechanisms to deal with binding in this type of visual recognition,
 as is indicated by the two papers below by Serre (the later of which
 summarizes much of what is in the first, which is an excellent, detailed PhD
 thesis.)

 The two  works by Geoffrey Hinton cited below are descriptions of Hinton's
 hierarchical feed-forward neural net recognition system (which, when run
 backwards, generates patterns similar to those it has been trained on).
  These two works by Hinton show impressive results in handwritten digit
 recognition without any explicit mechanism for binding.  In particular,
 watch the portion of the Hinton YouTube video starting at 21:35 - 26:39
 where Hinton shows his system alternating between recognizing a pattern and
 then generating a similar pattern stochastically from the higher level
 activations that have resulted from the previous recognition.  See how
 amazingly well his system seems to capture the many varied forms in which
 the various parts and sub-shapes of numerical handwritten digits are
 related.


 So my question is this: HOW BROADLY DOES THE IMPLICATION THAT THE BINDING
 PROBLEM CAN BE AUTOMATICALLY HANDLED BY A GEN/COMP HIERARCHY OR A
 HINTON-LIKE HIERARCHY APPLY TO THE MANY TYPES OF PROBLEMS A BRAIN LEVEL
 ARTIFICIAL GENERAL INTELLIGENCE WOULD BE EXPECTED TO HANDLE?  In particular
 HOW APPLICABLE IS IT TO SEMANTIC PATTERN RECOGNITION AND GENERATION --- WITH
 ITS COMPLEX AND HIGHLY VARIED RELATIONS --- SUCH AS IS COMMONLY INVOLVED IN
 HUMAN LEVEL NATURAL LANGUAGE UNDERSTANDING AND GENERATION?

 The answer lies in the confusion over what the binding problem actually
 is.  There are many studies out there that misunderstand the problem is such
 a substantial way that their conclusions are meaningless.  I refer, for
 example, to the seminal paper by Shastri and Ajjangadde, which I remember
 discussing with a colleague (Janet Vousden) back in the early 90s.  We both
 went into that paper in great depth, an independently came to the conclusion
 that S  A had their causality so completely screwed up that the paper said
 nothing at all:  they claimed to be able to explain binding by showing that
 synhcronized firing could make it happen, but they completely failed to show
 how the RELEVANT neurons would become synchronized.

 Distressingly, the Shastri and Ajjangadde paper then went on to become, as I
 say, seminal, and there has been a lot of research on something that these
 people call the binding problem, but which seems (from my limited coverage
 of that area) to be about getting various things to connect using
 synchronized signals, but without any explanation of how the the things that
 are semantically required to connect, actual connect.

 So, to be able to answer your question, you have to be able to disentangle
 that entire mess and become clear what is the real binding problem, what is
 the fake binding problem, and whether the new idea makes any difference to
 one or other of these.

 In my opinion, it sounds like Poggio is correct in making the claim that he
 does, but that Janet Vousden and I already understood that general point
 back in 1994, just by using general principles.  And, most probably, the
 solution Poggio 

[agi] Re: Theoretic estimation of reliability vs experimental

2008-07-03 Thread Vladimir Nesov
On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED] wrote:
 Sorry about the long thread jack

 2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
 Because it is dealing with powerful stuff, when it gets it wrong it
 goes wrong powerfully. You could lock the experimental code away in a
 sand box inside A, but then it would be a separate program just one
 inside A, but it might not be able to interact with programs in a way
 that it can do its job.

 There are two grades of faultiness. frequency and severity. You cannot
 predict the severity of faults of arbitrary programs (and accepting
 arbitrary programs from the outside world is something I want the
 system to be able to do, after vetting etc).


 You can't prove any interesting thing about an arbitrary program. It
 can behave like a Friendly AI before February 25, 2317, and like a
 Giant Cheesecake AI after that.

 Whoever said you could? The whole system is designed around the
 ability to take in or create arbitrary code, give it only minimal
 access to other programs that it can earn and lock it out from that
 ability when it does something bad.

 By arbitrary code I don't mean random, I mean stuff that has not
 formally been proven to have the properties you want. Formal proof is
 too high a burden to place on things that you want to win. You might
 not have the right axioms to prove the changes you want are right.

 Instead you can see the internals of the system as a form of
 continuous experiments. B is always testing a property of A or  A', if
 at any time it stops having the property that B looks for then B flags
 it as buggy.

The point isn't particularly about formal proof, but more about any
theoretic estimation of reliability and optimality. If you produce an
artifact A' and theoretically estimate that probability of it working
correctly is such that you don't expect it to fail in 10^9 years, you
can't beat this reliability with a result of experimental testing.
Thus, if theoretic estimation is possible (and it's much more feasible
for purposefully designed A' than for arbitrary A'), experimental
testing has vanishingly small relevance.


 I know this doesn't have the properties you would look for in a
 friendly AI set to dominate the world. But I think it is similar to
 the way humans work, and will be as chaotic and hard to grok as our
 neural structure. So as likely as humans are to explode intelligently.


Yes, one can argue that AGI of minimal reliability is sufficient to
jump-start singularity (it's my current position anyway, Oracle AI),
but the problem with faulty design is not only that it's not going to
be Friendly, but that it isn't going to work at all.

-- 
Vladimir Nesov
[EMAIL PROTECTED]
http://causalityrelay.wordpress.com/


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Theoretic estimation of reliability vs experimental

2008-07-03 Thread William Pearson
2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED] wrote:
 Sorry about the long thread jack

 2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
 On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED] wrote:
 Because it is dealing with powerful stuff, when it gets it wrong it
 goes wrong powerfully. You could lock the experimental code away in a
 sand box inside A, but then it would be a separate program just one
 inside A, but it might not be able to interact with programs in a way
 that it can do its job.

 There are two grades of faultiness. frequency and severity. You cannot
 predict the severity of faults of arbitrary programs (and accepting
 arbitrary programs from the outside world is something I want the
 system to be able to do, after vetting etc).


 You can't prove any interesting thing about an arbitrary program. It
 can behave like a Friendly AI before February 25, 2317, and like a
 Giant Cheesecake AI after that.

 Whoever said you could? The whole system is designed around the
 ability to take in or create arbitrary code, give it only minimal
 access to other programs that it can earn and lock it out from that
 ability when it does something bad.

 By arbitrary code I don't mean random, I mean stuff that has not
 formally been proven to have the properties you want. Formal proof is
 too high a burden to place on things that you want to win. You might
 not have the right axioms to prove the changes you want are right.

 Instead you can see the internals of the system as a form of
 continuous experiments. B is always testing a property of A or  A', if
 at any time it stops having the property that B looks for then B flags
 it as buggy.

 The point isn't particularly about formal proof, but more about any
 theoretic estimation of reliability and optimality. If you produce an
 artifact A' and theoretically estimate that probability of it working
 correctly is such that you don't expect it to fail in 10^9 years, you
 can't beat this reliability with a result of experimental testing.
 Thus, if theoretic estimation is possible (and it's much more feasible
 for purposefully designed A' than for arbitrary A'), experimental
 testing has vanishingly small relevance.

This, I think, is a wild goose chase, hence why I am not following it.
Why won't the estimation system will run out of steam, like Lenats
Automated Mathematician?


 I know this doesn't have the properties you would look for in a
 friendly AI set to dominate the world. But I think it is similar to
 the way humans work, and will be as chaotic and hard to grok as our
 neural structure. So as likely as humans are to explode intelligently.


 Yes, one can argue that AGI of minimal reliability is sufficient to
 jump-start singularity (it's my current position anyway, Oracle AI),
 but the problem with faulty design is not only that it's not going to
 be Friendly, but that it isn't going to work at all.

By what principles do you think humans develop their intellects? I
don't seem to be made processes that probabilistically guarantee that
I will work better tomorrow than I did today. How do you explain
developing echolocation or specific areas specialised for reading
braille in blind people?

  Will


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] Re: Theoretic estimation of reliability vs experimental

2008-07-03 Thread Charles Hixson
On Thursday 03 July 2008 11:14:15 am Vladimir Nesov wrote:
 On Thu, Jul 3, 2008 at 9:36 PM, William Pearson [EMAIL PROTECTED] 
wrote:...
  I know this doesn't have the properties you would look for in a
  friendly AI set to dominate the world. But I think it is similar to
  the way humans work, and will be as chaotic and hard to grok as our
  neural structure. So as likely as humans are to explode intelligently.

 Yes, one can argue that AGI of minimal reliability is sufficient to
 jump-start singularity (it's my current position anyway, Oracle AI),
 but the problem with faulty design is not only that it's not going to
 be Friendly, but that it isn't going to work at all.

The problem here is that proving a theory is often considerably more difficult 
than testing it.  Additionally there are a large number of conditions 
where almost optimal techniques can be found relatively easily, but where 
optimal techniques require an infinite number of steps to derive.  In such 
conditions generate and test is a better approach, but since you are 
searching a very large state-space you can't expect to get very close to 
optimal, unless there's a very large area where the surface is smooth enough 
for hill-climbing to work.

So what's needed are criteria for sufficiently friendly that are testable.  
Of course, we haven't yet generated the first entry for generate and test, 
but friendly, like optimal, may be too high a bar.  Sufficiently friendly 
might be a much easier goal...but to know that you've achieved it, you need 
to be able to test for it.



---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: Formal proved code change vs experimental was Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread Steve Richfield
William and Vladimir,

IMHO this discussion is based entirely on the absence of any sort of
interface spec. Such a spec is absolutely necessary for a large AGI project
to ever succeed, and such a spec could (hopefully) be wrung out to at least
avoid the worst of the potential traps.

For example: Suppose that new tasks stated the maximum CPU resources needed
to complete. Then, exceeding that would be cause for abnormal termination.
Of course, this doesn't cover logical failure.

More advanced example: Suppose that tasks provided a chain of
consciousness log as they execute, and a monitor watches that chain of
consciousness to see that new entries are repeatedly made, that they are
grammatically (machine grammar) correct, and verifies anything that is
easily verifiable.

Even more advanced example: Suppose that a new pseudo-machine were proposed,
whose fundamental code consisted of reasonable operations in the
logic-domain being exploited by the AGI. The interpreter for this
pseudo-machine could then employ countless internal checks as it operated,
and quickly determine when things went wrong.

Does anyone out there have something, anything in the way of an interface
spec to really start this discussion?

Steve Richfield
===
On 7/3/08, William Pearson [EMAIL PROTECTED] wrote:

 Sorry about the long thread jack

 2008/7/3 Vladimir Nesov [EMAIL PROTECTED]:
  On Thu, Jul 3, 2008 at 4:05 PM, William Pearson [EMAIL PROTECTED]
 wrote:
  Because it is dealing with powerful stuff, when it gets it wrong it
  goes wrong powerfully. You could lock the experimental code away in a
  sand box inside A, but then it would be a separate program just one
  inside A, but it might not be able to interact with programs in a way
  that it can do its job.
 
  There are two grades of faultiness. frequency and severity. You cannot
  predict the severity of faults of arbitrary programs (and accepting
  arbitrary programs from the outside world is something I want the
  system to be able to do, after vetting etc).
 
 
  You can't prove any interesting thing about an arbitrary program. It
  can behave like a Friendly AI before February 25, 2317, and like a
  Giant Cheesecake AI after that.
 
 Whoever said you could? The whole system is designed around the
 ability to take in or create arbitrary code, give it only minimal
 access to other programs that it can earn and lock it out from that
 ability when it does something bad.

 By arbitrary code I don't mean random, I mean stuff that has not
 formally been proven to have the properties you want. Formal proof is
 too high a burden to place on things that you want to win. You might
 not have the right axioms to prove the changes you want are right.

 Instead you can see the internals of the system as a form of
 continuous experiments. B is always testing a property of A or  A', if
 at any time it stops having the property that B looks for then B flags
 it as buggy.

 I know this doesn't have the properties you would look for in a
 friendly AI set to dominate the world. But I think it is similar to
 the way humans work, and will be as chaotic and hard to grok as our
 neural structure. So as likely as humans are to explode intelligently.

 Will Pearson


 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com




---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] Approximations of Knowledge

2008-07-03 Thread Russell Wallace
On Wed, Jul 2, 2008 at 5:31 AM, Terren Suydam [EMAIL PROTECTED] wrote:

 Nevertheless, generalities among different instances of complex systems have 
 been identified, see for instance:

 http://en.wikipedia.org/wiki/Feigenbaum_constants

To be sure, but there are also plenty of complex systems where
Feigenbaum's constants don't arise. I'm not saying there aren't
theories that say things about more than one complex system - clearly
there are - only that there aren't any that say nontrivial things
about complex systems in general.


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] Approximations of Knowledge

2008-07-03 Thread Terren Suydam

That may be true, but it misses the point I was making, which was a response to 
Richard's lament about the seeming lack of any generality from one complex 
system to the next. The fact that Feigenbaum's constants describe complex 
systems of different kinds is remarkable because it suggests an underlying 
order among systems that are described by different equations. It is not 
unreasonable to imagine that in the future we will develop a much more robust 
mathematics of complex systems.

--- On Thu, 7/3/08, Russell Wallace [EMAIL PROTECTED] wrote:
 [EMAIL PROTECTED] wrote:
 
  Nevertheless, generalities among different instances
 of complex systems have been identified, see for instance:
 
  http://en.wikipedia.org/wiki/Feigenbaum_constants
 
 To be sure, but there are also plenty of complex systems
 where
 Feigenbaum's constants don't arise. I'm not
 saying there aren't
 theories that say things about more than one complex system
 - clearly
 there are - only that there aren't any that say
 nontrivial things
 about complex systems in general.
 
 
 ---
 agi
 Archives: http://www.listbox.com/member/archive/303/=now
 RSS Feed: http://www.listbox.com/member/archive/rss/303/
 Modify Your Subscription:
 http://www.listbox.com/member/?;
 Powered by Listbox: http://www.listbox.com


  


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com


Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-07-03 Thread Terren Suydam
Will,

 Remember when I said that a purpose is not the same thing
 as a goal?
 The purpose that the system might be said to have embedded
 is
 attempting to maximise a certain signal. This purpose
 presupposes no
 ontology. The fact that this signal is attached to a human
 means the
 system as a whole might form the goal to try and please the
 human. Or
 depending on what the human does it might develop other
 goals. Goals
 are not the same as purposes. Goals require the intentional
 stance,
 purposes the design.

To the extent that purpose is not related to goals, it is a meaningless term. 
In what possible sense is it worthwhile to talk about purpose if it doesn't 
somehow impact what an intelligent actually does?

 Possibly, but it will be another huge research topic to
 actually talk
 to the things that evolve in the artificial universe, as
 they will
 share very little background knowledge or ontology with us.
 I wish you
 luck and will be interested to see where you go but the
 alife route is
 just to slow and resource intensive for my liking.
 
   Will

That is probably the most common criticism of the path I advocate and I 
certainly understand that, it's not for everyone. I will be very interested in 
your results as well, good luck!

Terren


  


---
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/
Modify Your Subscription: 
http://www.listbox.com/member/?member_id=8660244id_secret=106510220-47b225
Powered by Listbox: http://www.listbox.com