Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Vladimir Nesov
On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:
 Vladimir Nesov wrote:
  Richard,
 
  It's a question of notation. Yes, you can sometimes formulate
  difficult problems succinctly. GoL is just another formalism in which
  it's possible. What does it have to do with anything?

 It has to do with the argument in my paper.

Strictly speaking, it doesn't answer that question.

 Can there ever be a scientific theory that
 predicts all the interesting creatures given only the rules?

Which is equivalent to asking "can there be a feasible solution to
that immensely difficult, but succinctly formulated problem?" In
general, no. But you can solve it at a 'good enough' level by
experimenting with simulation. Reasonable. Yet it's strange to frame
it as something that is usually never done.

-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=50292866-29991d


Re: [agi] Religion-free technical content

2007-10-05 Thread Mark Waser
Then state the base principles or the algorithm that generates them, without
ambiguity and without appealing to common sense.  Otherwise I have to believe
they are complex too.


Existence proof to disprove your "I have to believe . . . ."

1.  Magically collect all members of the species.
2.  Magically fully inform them of all relevant details.
3.  Magically force them to select moral/ethical/friendly, neutral, or 
immoral/unethical/unfriendly.
4.  If 50% or less select immoral/unethical/unfriendly, then it's friendly. 
If more than 50% select immoral/unethical/unfriendly, then it's unfriendly.


Simple.  Unambiguous.  Impossible to implement.  (And not my proposal)
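For what it's worth, step 4 above is mechanical enough to write down. A minimal sketch (the function name and vote labels are mine, purely illustrative; steps 1-3 remain magic):

```python
from collections import Counter

def classify(votes):
    """Step 4 of the thought experiment: votes is an iterable of
    'moral', 'neutral', or 'immoral' selections gathered from the
    (magically polled and informed) members of the species."""
    counts = Counter(votes)
    total = sum(counts.values())
    # friendly iff 50% or fewer selected immoral/unethical/unfriendly
    return "friendly" if counts["immoral"] * 2 <= total else "unfriendly"

print(classify(["moral", "immoral", "neutral"]))   # friendly
print(classify(["immoral", "immoral", "moral"]))   # unfriendly
```

Simple and unambiguous, as claimed; the impossibility lives entirely in the three magic steps that produce the votes.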

- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, October 04, 2007 7:26 PM
Subject: **SPAM** Re: [agi] Religion-free technical content



--- Mark Waser [EMAIL PROTECTED] wrote:

I'll repeat again since you don't seem to be paying attention to what I'm
saying -- The determination of whether a given action is friendly or
ethical or not is certainly complicated but the base principles are actually
pretty darn simple.


Then state the base principles or the algorithm that generates them, without
ambiguity and without appealing to common sense.  Otherwise I have to believe
they are complex too.


-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Richard Loosemore

Vladimir Nesov wrote:

On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:

Vladimir Nesov wrote:

Richard,

It's a question of notation. Yes, you can sometimes formulate
difficult problems succinctly. GoL is just another formalism in which
it's possible. What does it have to do with anything?

It has to do with the argument in my paper.


Strictly speaking, it doesn't answer that question.


Can there ever be a scientific theory that
predicts all the interesting creatures given only the rules?


Which is equivalent to asking can there be a feasible solution to
that immensely difficult, but succinctly formulated problem? In
general, no. But you can solve it on 'good enough' level by
experimenting with simulation. Reasonable. Yet it's strange to frame
it as something that is usually never done.



Again, I have to say that this thread is about the specific use that I 
make, in my paper, of the Game of Life cellular automaton.


So, if you take a look at that question of mine that you quote above -- 
"Can there ever be a scientific theory that predicts all the 
interesting creatures given only the rules?" -- when you respond 
with the words "... no, but ...", everything that comes after the word 
"no" has no relevance in the context of my paper.




Richard Loosemore



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Vladimir Nesov
On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:
 Vladimir Nesov wrote:
  On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:
  Vladimir Nesov wrote:
  Richard,
 
  It's a question of notation. Yes, you can sometimes formulate
  difficult problems succinctly. GoL is just another formalism in which
  it's possible. What does it have to do with anything?
  It has to do with the argument in my paper.
 
  Strictly speaking, it doesn't answer that question.
 
  Can there ever be a scientific theory that
  predicts all the interesting creatures given only the rules?
 
  Which is equivalent to asking can there be a feasible solution to
  that immensely difficult, but succinctly formulated problem? In
  general, no. But you can solve it on 'good enough' level by
  experimenting with simulation. Reasonable. Yet it's strange to frame
  it as something that is usually never done.
 

 Again, I have to say that this thread is about the specific use that I
 make, in my paper, of the Game of Life cellular automaton.

 So, if you take a look at that question of mine that you quote above --
 "Can there ever be a scientific theory that predicts all the
 interesting creatures given only the rules?" -- when you respond
 with the words "... no, but ...", everything that comes after the word
 "no" has no relevance in the context of my paper.

So, what does it exemplify exactly? That some problems can't be
solved? That's common knowledge too. If you can't solve the problem, all
you can do is modify it so that the resulting problem can be solved,
which is what a 'good enough solution' refers to.

-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Richard Loosemore

Vladimir Nesov wrote:

On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:

Vladimir Nesov wrote:

On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:

Vladimir Nesov wrote:

Richard,

It's a question of notation. Yes, you can sometimes formulate
difficult problems succinctly. GoL is just another formalism in which
it's possible. What does it have to do with anything?

It has to do with the argument in my paper.

Strictly speaking, it doesn't answer that question.


Can there ever be a scientific theory that
predicts all the interesting creatures given only the rules?

Which is equivalent to asking can there be a feasible solution to
that immensely difficult, but succinctly formulated problem? In
general, no. But you can solve it on 'good enough' level by
experimenting with simulation. Reasonable. Yet it's strange to frame
it as something that is usually never done.


Again, I have to say that this thread is about the specific use that I
make, in my paper, of the Game of Life cellular automaton.

So, if you take a look at that question of mine that you quote above --
"Can there ever be a scientific theory that predicts all the
interesting creatures given only the rules?" -- when you respond
with the words "... no, but ...", everything that comes after the word
"no" has no relevance in the context of my paper.


So, what does it exemplify exactly? That some problems can't be
solved? That's common knowledge too. If you can't solve the problem, all
you can do is modify it so that the resulting problem can be solved,
which is what a 'good enough solution' refers to.


Vladimir, you are asking me to give the entire argument that was in my 
paper.


I have already tried to summarize it several times on these lists in 
recent weeks.  The most recent summary was in a parallel post in this 
same thread, responding to Mike Dougherty.  Maybe you could frame any 
followup question in the context of that summary (or one of the others).



Richard Loosemore



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Richard Loosemore

Mike Dougherty wrote:

On 10/4/07, Richard Loosemore [EMAIL PROTECTED] wrote:

All understood.  Remember, though, that the original reason for talking
about GoL was the question:  Can there ever be a scientific theory that
predicts all the interesting creatures given only the rules?

The question of getting something to recognize the existence of the
patterns is a good testbed, for sure.


Given finite rules about a finite world with an effectively
unlimited resource, it seems that every interesting creature exists
as the subset of all permutations minus the noise that isn't
interesting.  The problem is in a provable definition of interesting
(which was earlier defined, for example, as 'cyclic').  Also, who is
willing to invest unlimited resource to exhaustively search a toy
domain?  Even if there were parallels that might lead to formalisms
applicable in a larger context, we would probably divert those
resources to other tasks.  I'm not sure this is a bad idea.  Perhaps
our human attention span is a defense measure against wasting life's
resources on searches that promise fitness without delivering useful
results.


I hear you, but let me quickly summarize the reason why I introduced GoL 
as an example.


I wanted to use GoL as a nice-and-simple example of a system whose 
overall behavior (in this case, the existence of certain patterns that 
are stable or interesting) seems impossible to predict from a 
knowledge of the rules.  I only wanted to use GoL to *illustrate* the 
general class, not because I was interested in GoL per se.


The important thing is that this idea (that there are some systems that 
show interesting, but unexplainable, behavior at the global level) has 
much greater depth and impact than people have previously thought.


In particular, it is important to observe that almost all of our science 
and engineering is based on observing/analyzing/explaining/building 
systems that are not in this class.


(Quick caveat:  actually, the distinction between the two types of 
system is not black and white, so pretty much all systems do have a small 
amount of inexplicability to them.  But this does not affect the argument.)


What is the conclusion to draw from this?  Well, when we look at what is 
going on in a system, there are certain characteristics that can lead us 
to suspect that a *significant* chunk of its global behaviors might turn 
out to be inexplicable in this way -- there are fingerprints that we can 
look out for.  Now, if you go out there into the world and look for 
systems that have those telltale fingerprints, you find that we would 
expect intelligent systems to be in this class.


Or, more precisely, we would expect that when AI engineers try to build 
systems that are (a) complete, and (b) have properly grounded learning 
mechanisms, the systems will be expected to be in this class.  This has 
a massive impact on the techniques we are using to do AI.  The more you 
think about the consequences of this fact, the more you realize that 
using the conventional techniques of engineering is virtually guaranteed 
not to work.  In fact, we would predict that AI engineers would make 
*some* progress, but whenever they tried to scale up or expand the scope 
of their systems they would find that things did not get much better, 
and we would expect that AI engineers would have great difficulty coming 
up with learning mechanisms that generated usable symbols from real 
world input.


So, while GoL itself is interesting, and all kinds of stuff can be said 
about it, most of that is not important to the core argument.



Richard Loosemore





In the case of RSI, the rules are not fixed.  I wouldn't dare call
them mathematically infinite, but an evolving ruleset probably should be
considered functionally unlimited.  I imagine Incompleteness applies
here, even if I don't know how to explicitly state it.  I believe
finding all of the interesting creatures is nearly impossible.
Finding an interesting creature should be possible given a
sufficiently exact definition of interesting.  After some amount of
search, the results probably have to be expressed as a confidence
metric like: "given an exhaustive search of only 10% of the known
region, we found N candidates that match the criteria
within X degrees of freedom.  By assessment of the distribution of
candidates in the searched space, extrapolation suggests there may be
{prediction formula result} 'interesting creatures' in this universe."

the Drake equation is an example of this kind of answer/function.
Ironic that its purpose is to determine the number of intelligences
in our own universe.  Of course Fermi paradox, testable hypothesis,
etc. etc. - the point is not about whether GoL searches or SETI
searches are any more or less productive than each other.  My interest
is in how intelligences of any origin (natural human brains,
human-designed CPU, however improbable aliens) manage to find common
symbols in order to 

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Vladimir Nesov
On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:
 Mike Dougherty wrote:
  On 10/4/07, Richard Loosemore [EMAIL PROTECTED] wrote:
  All understood.  Remember, though, that the original reason for talking
  about GoL was the question:  Can there ever be a scientific theory that
  predicts all the interesting creatures given only the rules?
 
  The question of getting something to recognize the existence of the
  patterns is a good testbed, for sure.
 
  Given finite rules about a finite world with an effectively
  unlimited resource, it seems that every interesting creature exists
  as the subset of all permutations minus the noise that isn't
  interesting.  The problem is in a provable definition of interesting
  (which was earlier defined for example as 'cyclic')  Also, who is
  willing to invest unlimited resource to exhaustively search a toy
  domain?  Even if there were parallels that might lead to formalisms
  applicable in a larger context, we would probably divert those
  resources to other tasks.  I'm not sure this is a bad idea.  Perhaps
  our human attention span is a defense measure against wasting life's
  resources on searches that promise fitness without delivering useful
  results.

 I hear you, but let me quickly summarize the reason why I introduced GoL
 as an example.

 I wanted to use GoL as a nice-and-simple example of a system whose
 overall behavior (in this case, the existence of certain patterns that
 are stable or interesting) seems impossible to predict from a
 knowledge of the rules.

You do predict that behavior by simulating the model. What you
supposedly can't do is to find initial conditions that will lead to
required global behavior. But you actually can - for example by
enumerating possible initial conditions in a brute force way and
looking at what happens when you simulate it. It's just very
inefficient, and as a result you can't enumerate many initial
conditions which will lead to interesting global behavior. And
probably there are tricks to get better results by restricting the
search space. You propose a framework which will help in efficient
enumeration of low-level rules and estimation of high-level behavior,
and which restricts the possibilities to something as close as possible
to an existing working system - the human mind. All along these same
lines. Computational mathematics deals with this kind of thing all the time.
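The brute-force procedure described above is easy to sketch. The toy below is my own illustration, not anything from the paper: it makes the assumption of wrapping the world into a small torus so that each state is finite, enumerates all 512 seeds in a 3x3 corner, simulates each, and counts those whose trajectory provably repeats within a step budget:

```python
from itertools import product

N = 8  # toroidal world size -- an assumption, to keep each state finite

def step(live):
    """One Game of Life step; live is a frozenset of (row, col) cells."""
    counts = {}
    for r, c in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                key = ((r + dr) % N, (c + dc) % N)
                counts[key] = counts.get(key, 0) + 1
    # birth on exactly 3 neighbours, survival on 2 or 3
    return frozenset(cell for cell, n in counts.items()
                     if n == 3 or (n == 2 and cell in live))

def becomes_cyclic(seed, limit=200):
    """True if the trajectory revisits a state within `limit` steps."""
    seen, state = set(), frozenset(seed)
    for _ in range(limit):
        if state in seen:
            return True
        seen.add(state)
        state = step(state)
    return False

# enumerate every 3x3 seed and count those that settle into a cycle
seeds = [frozenset((r, c) for r in range(3) for c in range(3)
                   if bits >> (3 * r + c) & 1)
         for bits in range(512)]
print(sum(map(becomes_cyclic, seeds)), "of 512 seeds provably cycle")
```

This is exactly the "very inefficient" enumeration Vladimir mentions: the cost explodes as the seed region grows, which is the point at issue.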

-- 
Vladimir Nesov mailto:[EMAIL PROTECTED]



Re: [agi] Religion-free technical content

2007-10-05 Thread Matt Mahoney

--- Mark Waser [EMAIL PROTECTED] wrote:

  Then state the base principles or the algorithm that generates them, 
  without
  ambiguity and without appealing to common sense.  Otherwise I have to 
  believe
  they are complex too.
 
 Existence proof to disprove your I have to believe . . . . 
 
 1.  Magically collect all members of the species.
 2.  Magically fully inform them of all relevant details.
 3.  Magically force them to select moral/ethical/friendly, neutral, or 
 immoral/unethical/unfriendly.
 4.  If 50% or less select immoral/unethical/unfriendly, then it's friendly. 
 If more than 50% select immoral/unethical/unfriendly, then it's unfriendly.
 
 Simple.  Unambiguous.  Impossible to implement.  (And not my proposal)

Then I guess we are in perfect agreement.  Friendliness is what the average
person would do.  So how *would* you implement it?

 
 - Original Message - 
 From: Matt Mahoney [EMAIL PROTECTED]
 To: agi@v2.listbox.com
 Sent: Thursday, October 04, 2007 7:26 PM
 Subject: **SPAM** Re: [agi] Religion-free technical content
 
 
  --- Mark Waser [EMAIL PROTECTED] wrote:
  I'll repeat again since you don't seem to be paying attention to what I'm
  saying -- The determination of whether a given action is friendly or
  ethical or not is certainly complicated but the base principles are 
  actually
  pretty darn simple.
 
  Then state the base principles or the algorithm that generates them, 
  without
  ambiguity and without appealing to common sense.  Otherwise I have to 
  believe
  they are complex too.
 
 
  -- Matt Mahoney, [EMAIL PROTECTED]
 
  
 
 
 


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Religion-free technical content

2007-10-05 Thread Mike Dougherty
On 10/5/07, Mark Waser [EMAIL PROTECTED] wrote:
  Then I guess we are in perfect agreement.  Friendliness is what the
  average
  person would do.

 Which one of the words in "And not my proposal" wasn't clear?  As far as I
 am concerned, friendliness is emphatically not what the average person would
 do.

Yeah - Computers already do what the average person would:  wait
expectantly to be told exactly what to do and how to behave.  I guess
it's a question of how cynically we define the average person.



Re: [agi] Religion-free technical content

2007-10-05 Thread Mark Waser
Then I guess we are in perfect agreement.  Friendliness is what the average
person would do.


Which one of the words in "And not my proposal" wasn't clear?  As far as I 
am concerned, friendliness is emphatically not what the average person would 
do.



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Friday, October 05, 2007 10:40 AM
Subject: **SPAM** Re: [agi] Religion-free technical content




--- Mark Waser [EMAIL PROTECTED] wrote:


 Then state the base principles or the algorithm that generates them,
 without
 ambiguity and without appealing to common sense.  Otherwise I have to
 believe
 they are complex too.

Existence proof to disprove your I have to believe . . . . 

1.  Magically collect all members of the species.
2.  Magically fully inform them of all relevant details.
3.  Magically force them to select moral/ethical/friendly, neutral, or
immoral/unethical/unfriendly.
4.  If 50% or less select immoral/unethical/unfriendly, then it's friendly.
If more than 50% select immoral/unethical/unfriendly, then it's unfriendly.

Simple.  Unambiguous.  Impossible to implement.  (And not my proposal)


Then I guess we are in perfect agreement.  Friendliness is what the average
person would do.  So how *would* you implement it?



- Original Message - 
From: Matt Mahoney [EMAIL PROTECTED]

To: agi@v2.listbox.com
Sent: Thursday, October 04, 2007 7:26 PM
Subject: **SPAM** Re: [agi] Religion-free technical content


 --- Mark Waser [EMAIL PROTECTED] wrote:
 I'll repeat again since you don't seem to be paying attention to what 
 I'm

 saying -- The determination of whether a given action is friendly or
 ethical or not is certainly complicated but the base principles are
 actually
 pretty darn simple.

 Then state the base principles or the algorithm that generates them,
 without
 ambiguity and without appealing to common sense.  Otherwise I have to
 believe
 they are complex too.


 -- Matt Mahoney, [EMAIL PROTECTED]








-- Matt Mahoney, [EMAIL PROTECTED]







Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Richard Loosemore

Mike Dougherty wrote:

On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:

I hear you, but let me quickly summarize the reason why I introduced GoL
as an example.


Thank you.  I appreciate the confirmation of understanding my point.
I have observed many cases where the back-and-forth bickering on
email lists has been based on an unwillingness to concede another's
point.  I am the first to admit that I have more questions than
answers.


I wanted to use GoL as a nice-and-simple example of a system whose
overall behavior (in this case, the existence of certain patterns that
are stable or interesting) seems impossible to predict from a
knowledge of the rules.  I only wanted to use GoL to *illustrate* the
general class, not because I was interested in GoL per se.


Gotcha - GoL is an example case of a class.  You threw it out there to
make a point.  Let's just say it is the only symbol on the table.  In
order to assimilate the idea you are proposing, the model needs to be
examined.  So if we discuss this one example it is not to the
exclusion of the concept you're trying to illustrate, but a precursor
to it.  In my own concept formation, this step is like including
libraries or compiling a function.  I think sometimes you get
frustrated that it takes so long for people to accomplish this step.
Part of the problem is that email is such a low-bandwidth medium.
(Another part is that the smarter we are, the quicker we get stuff,
and we assume others should be as capable.)


The important thing is that this idea (that there are some systems that
show interesting, but unexplainable, behavior at the global level) has
much greater depth and impact than people have previously thought.


Can you give an example of a ruleset that CAN be used to predict
global behavior?

"Interesting but unexplainable behavior" - would you define this class
to include chaos or chaotic systems?  I'm trying to reason to the
general case, but I don't have enough other properties of the class in
mind to usefully visualize. (conceptualize?)  I think those
researchers who have invested in studying chaos are people who have
given this idea a great deal of depth and impact.  It's a hard problem
because our normal 'scientific' method fails almost by definition.  I
believe the framework you have discussed is a proposal for a method of
investigating this behavior.  Am I far off, or am I in the general
vicinity?


Thanks for this (what a relief to just communicate with someone in a 
relaxed way!).


About discussing GoL itself as the first example of the class, that's 
fine, but some of the simplicity of GoL can make it misleading -- there 
are so many things that have been said about it that we can easily get 
distracted by those.  (I am beginning to realize, now, that although it 
is a memorable example, these side effects have made it a pain to use 
for my example.  It is not that it is not a good example, just that it 
has so much baggage).


So, fire away with any questions about how GoL relates, and I'll try to 
say how they fit with what I was trying to say.


About your second question Can you give an example of a ruleset that 
CAN be used to predict global behavior?, well, the short answer is that 
you can choose any scientific explanation you want.  Ruleset must be 
understood as meaning low level equations or mechanisms that drive the 
system.


My stock example:  planetary motion.  Newton (actually Tycho Brahe, 
Kepler, et al) observed some global behavior in this system:  the orbits 
are elliptical and motion follows Kepler's other laws.  This corresponds 
to someone seeing Game of Life for the first time, without knowing how 
it works, and observing that the motion is not purely random, but seems 
to have some regular patterns in it.


Having noticed the global regularities, the next step, for Newton, was 
to try to find a compact explanation for them.  He was looking for the 
underlying rules, the low-level mechanisms.  He eventually realised (a 
long story of course!) that an inverse square law of gravitation would 
predict all of the behavior of these planets.  This corresponds to a 
hypothetical case in which a person seeing those Game of Life patterns 
would somehow deduce that the rules that must be giving rise to the 
patterns are the particular rules that appear in GoL.  And, to be 
convincing, they would have to prove that the rules gave rise to the 
behavior.
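As a concrete numerical illustration of rules predicting global regularities (my own toy, not anything from the paper): integrating an inverse-square force reproduces Kepler's second law, in that the angular momentum -- twice the areal velocity -- stays constant along the orbit. Units, GM = 1, and the starting conditions are arbitrary assumptions.

```python
import math

GM = 1.0          # gravitational parameter, arbitrary units
dt = 1e-4

# an eccentric bound orbit: sub-circular speed at radius 1
x, y, vx, vy = 1.0, 0.0, 0.0, 0.8

def accel(x, y):
    """Inverse-square acceleration toward the origin."""
    r3 = math.hypot(x, y) ** 3
    return -GM * x / r3, -GM * y / r3

L0 = x * vy - y * vx                  # angular momentum = 2 * areal velocity
for _ in range(100_000):              # velocity-Verlet (leapfrog) integration
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx; y += dt * vy
    ax, ay = accel(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay

L1 = x * vy - y * vx
print(abs(L1 - L0) < 1e-9)            # equal areas in equal times
```

The deduction Newton actually made ran the other way, of course -- from the observed regularity back to the compact rule -- which is exactly the step that seems out of reach for GoL.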


(Caveat:  we have to fuzz the analogy somewhat and say that the observer 
cannot simply look at the behavior of individual cells, but only see a 
hazy picture of the larger scale structures ... if they could see every 
cell clearly they could reason about the rules and deduce them.  This is 
a weakness in the analogy, but I am sure you will be able to imagine 
other circumstances in which, for some reason, the lowest level 
mechanisms are not nakedly apparent).


(To make this issue really stand out clearly, imagine that I pulled out 
a sheet of paper, drew a whole 

Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Richard Loosemore

J Storrs Hall, PhD wrote:

On Thursday 04 October 2007 03:46:02 pm, Richard Loosemore wrote:

Oh, and, by the way, the widely accepted standard for what counts as a 
scientific theory is -- as any scientist will be able to tell you -- 
that it has to make its prediction without becoming larger and more 
complicated than the system under study, so it goes without saying that 
whatever you choose for a theory it is not allowed to simulate a massive 
number of Game of Life cases and simply home in on the cyclic ones.


Wrong. Consider the amount of data, cases, and simulation involved in, say, 
using density functional theory to make predictions about the shape of a 
fairly small molecule. 


Like many another sophist who cannot defend his argument, you quietly 
ignore 99.9% of cases and try to prove your point by selecting an 
outlier and trying to pretend that it represents the majority case.


What you just picked is PRECISELY the residual, partial complexity in 
the explanations of molecular dynamics that I explained in a previous 
post:  the bulk of the explanation has to be done using a regular type 
of analytic explanation, but then there are always residual elements 
that are so nonlinear that all we can do is simulate.


Try walking into any physics department in the world and saying "Is it 
okay if most theories are so complicated that they dwarf the size and 
complexity of the system that they purport to explain?"


In fact, your example is beautiful, in a way.  So it turns out to be 
necessary to resort to approximate methods, to simulations, in order to 
deal with the MINUSCULE amount of nonlinearity/tangledness that exists in 
the interactions of the atoms in a small molecule?  Well, whoop-dee-do!! 
 Guess what the whole point of my paper was?  The point of that paper 
was that there is vastly more evidence for the existence of such 
nonlinear, tangled interactions in the case of intelligent systems, and 
there is so far NO analytic core theory to deal with the majority behavior.


So unlike the small-molecule case (where we can cope with the complexity 
because it is just a residual, and we have a huge, rock-solid non-complex 
theory as a starting point), in the case of intelligent systems we are 
thrown into wildly different territory where the whole darned enterprise 
is dominated by the kind of science that was just a residual in the 
molecule case.


Thanks for the example.

Since your theory below does not predict the specific cyclic patterns 
that actually occur in GoL, I still await a complete theory and a 
complete catalogue of cyclic GoL lifeforms by Monday.




Richard Loosemore



If physicists lived in a Life universe, they would consider finding the CA 
rules the ultimate theory of everything. We don't have that for real 
physics, but Schrödinger's equation is similar for our purposes. DFT is a 
carefully tuned set of heuristics to make QM calcs tractable -- but if we had 
the horsepower, you can bet your bottom dollar that physicists would be using 
the real equations and calculating like mad. 


You didn't give me a formal definition of cyclic so I'll do it for you:

cyclic S := exists p>=1, k>=0 s.t. S[i+p]=S[i] forall i>=k

I trust you understand lazy evaluation:

lifeseries M := M, lifeseries (liferules M)

We're using an APL-like data semantics with implicit parallelism over vector 
elements:

mats N = N N rho (2 baserep i) for i=1..2^(N^2)

Then your theory is 

cyclic (lifeseries (mats 2..Whatever))

If you have a look at Gödel's proof, he built up to functions of about the 
level of Lisp from simple arithmetic. The above assumes a few more 
definitions, but there are 50 years of CS to draw them from.


Josh






Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread William Pearson
On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
 We have good reason to believe, after studying systems like GoL, that
 even if there exists a compact theory that would let us predict the
 patterns from the rules (equivalent to predicting planetary dynamics
 given the inverse square law of gravitation), such a theory is going to
 be so hard to discover that we may as well give up and say that it is a
 waste of time trying.  Heck, maybe it does exist, but that's not the
 point:  the point is that there appears to be little practical chance of
 finding it.


A few theories. All states which do not have three live cells adjacent
will become cyclic with a cycle length of 0. Or won't be cyclic if you
reject cycle lengths of 0. Similarly all patterns consisting of one or
more groups of three live cells in a row inside an otherwise empty 7x7
box will have a stable cycle.
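The first of these theories can at least be spot-checked by machine. A Python sketch (the life_step helper is my own minimal implementation, and the check below covers only the special case of patterns with fewer than three live cells, which can never trigger a birth and so die out at once):

```python
from itertools import product

def life_step(cells):
    """One GoL generation over a frozenset of live (x, y) coordinates."""
    counts = {}
    for (x, y) in cells:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    return frozenset(c for c, n in counts.items()
                     if n == 3 or (n == 2 and c in cells))

# Births need exactly 3 live neighbours and survival needs 2 or 3,
# so any pattern with fewer than three live cells is empty after one step.
for cells in [frozenset(),
              frozenset({(0, 0)}),
              frozenset({(0, 0), (0, 1)}),        # two adjacent cells
              frozenset({(0, 0), (5, 5)})]:       # two distant cells
    assert life_step(cells) == frozenset()
```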

Will there be a general theory? Nope. You can see that from GoL being
Turing complete. If you had a theory that could in general predict
what a set GoL pattern was going to do, you could rework it to tell if
a TM was going to halt.

My theories are mainly to illustrate what a science of GoL would look
like. Staying firmly in the comfort zone.

Let me rework something you wrote earlier.

I want to use the class of TM as a nice-and-simple example of a system whose
overall behavior (in this case, whether the system will halt or not)
is impossible to
predict from a knowledge of the state transitions and initial state of the tape.

Computer engineering has as much or as little complexity as the
engineer wants to deal with. They can stay in the comfort zone of
easily predictable systems, much like the one I illustrated exists for
GoL. Or they can walk on the wild side a bit. My postgrad degree was
done in a place which specialised in evolutionary computation (GA, GP
and LCS), where systems were mainly tested empirically. So my view of
what computer engineering is, is perhaps a little out of the
mainstream.

 Will Pearson



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Richard Loosemore

Vladimir Nesov wrote:

On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:

Mike Dougherty wrote:

On 10/4/07, Richard Loosemore [EMAIL PROTECTED] wrote:

All understood.  Remember, though, that the original reason for talking
about GoL was the question:  Can there ever be a scientific theory that
predicts all the interesting creatures given only the rules?

The question of getting something to recognize the existence of the
patterns is a good testbed, for sure.

Given finite rules about a finite world with an effectively
unlimited resource, it seems that every interesting creature exists
as the subset of all permutations minus the noise that isn't
interesting.  The problem is in a provable definition of interesting
(which was earlier defined for example as 'cyclic')  Also, who is
willing to invest unlimited resource to exhaustively search a toy
domain?  Even if there were parallels that might lead to formalisms
applicable in a larger context, we would probably divert those
resources to other tasks.  I'm not sure this is a bad idea.  Perhaps
our human attention span is a defense measure against wasting life's
resources on searches that promise fitness without delivering useful
results.

I hear you, but let me quickly summarize the reason why I introduced GoL
as an example.

I wanted to use GoL as a nice-and-simple example of a system whose
overall behavior (in this case, the existence of certain patterns that
are stable or interesting) seems impossible to predict from a
knowledge of the rules.


You do predict that behavior by simulating the model. What you
supposedly can't do is to find initial conditions that will lead to
required global behavior. But you actually can - for example by
enumerating possible initial conditions in a brute force way and
looking at what happens when you simulate it. It's just very
inefficient, and as a result you can't enumerate many initial
conditions which will lead to interesting global behavior. And
probably there are tricks to get better results, by restricting search
space. You propose a framework which will help in efficient
enumeration of low-level rules and estimation of high-level behavior,
and restrain possibilities to as close as possible to existing working
system - human mind. All along these same lines. Computational
mathematics deals with this kind of thing all the time.



Vladimir,

You keep taking this example out of context!   You are making statements 
that are completely oblivious to the actual purpose that the GoL example 
serves in the paper:  everything you say above is COMPLETELY impractical 
if it is generalized to systems more complex than GoL.


In short, your statements are complete non sequiturs.

This is about the fourth or fifth time that you have taken the thing out 
of context and then dismissed the whole thing with a comment like 
Computational mathematics deals with this kind of thing all the time.


Richard Loosemore








Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Vladimir Nesov
On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:
 Vladimir Nesov wrote:
  On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:
  Mike Dougherty wrote:
  On 10/4/07, Richard Loosemore [EMAIL PROTECTED] wrote:
  All understood.  Remember, though, that the original reason for talking
  about GoL was the question:  Can there ever be a scientific theory that
  predicts all the interesting creatures given only the rules?
 
  The question of getting something to recognize the existence of the
  patterns is a good testbed, for sure.
  Given finite rules about a finite world with an effectively
  unlimited resource, it seems that every interesting creature exists
  as the subset of all permutations minus the noise that isn't
  interesting.  The problem is in a provable definition of interesting
  (which was earlier defined for example as 'cyclic')  Also, who is
  willing to invest unlimited resource to exhaustively search a toy
  domain?  Even if there were parallels that might lead to formalisms
  applicable in a larger context, we would probably divert those
  resources to other tasks.  I'm not sure this is a bad idea.  Perhaps
  our human attention span is a defense measure against wasting life's
  resources on searches that promise fitness without delivering useful
  results.
  I hear you, but let me quickly summarize the reason why I introduced GoL
  as an example.
 
  I wanted to use GoL as a nice-and-simple example of a system whose
  overall behavior (in this case, the existence of certain patterns that
  are stable or interesting) seems impossible to predict from a
  knowledge of the rules.
 
  You do predict that behavior by simulating the model. What you
  supposedly can't do is to find initial conditions that will lead to
  required global behavior. But you actually can - for example by
  enumerating possible initial conditions in a brute force way and
  looking at what happens when you simulate it. It's just very
  inefficient, and as a result you can't enumerate many initial
  conditions which will lead to interesting global behavior. And
  probably there are tricks to get better results, by restricting search
  space. You propose a framework which will help in efficient
  enumeration of low-level rules and estimation of high-level behavior,
  and restrain possibilities to as close as possible to existing working
  system - human mind. All along these same lines. Computational
  mathematics deals with this kind of thing all the time.
 

 Vladimir,

 You keep taking this example out of context!   You are making statements
 that are completely oblivious to the actual purpose that the GoL example
 serves in the paper:

Given that this purpose is what I'm trying to understand, being
non-oblivious to it at the same time would be strange indeed.

 everything you say above is COMPLETELY impractical
 if it is generalized to systems more complex than GoL.

I disagree. It's not specific enough to be of practical use in itself,
but it's general enough to be a correct statement about practically
useful methods. Please don't misunderstand my intention: I find your
way of presenting technical content rather obscure, so I'm trying to
construct descriptions that apply to what you're describing, starting
from simple ones and if necessary adding details. So if they are
overly general, it's OK, but if they are wrong, please point out why.

 In short, your statements are complete non sequiturs.

They can be inadequate for purposes of discussion as you perceive it,
yes, but it in itself doesn't make them non-sequiturs. To assert
otherwise you need to point to specific details.

 This is about the fourth or fifth time that you have taken the thing out
 of context and then dismissed the whole thing with a comment like
 Computational mathematics deals with this kind of thing all the time.

It's not dismissal, it's specific statement about typicality of
approach I described. Which can happen to be an inadequate description
of what you do, but such statement in itself remains correct for what
it's applied to.

-- 
Vladimir Nesov  mailto:[EMAIL PROTECTED]



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread J Storrs Hall, PhD
On Friday 05 October 2007 12:13:32 pm, Richard Loosemore wrote:

 Try walking into any physics department in the world and saying Is it 
 okay if most theories are so complicated that they dwarf the size and 
 complexity of the system that they purport to explain?

You're conflating a theory and the mathematical mechanism necessary to apply 
it to actual situations. The theory in Newtonian physics can be specified as 
the equations F=ma and F=Gm1m2/r^2 (in vector form); but applying them 
requires a substantial amount of calculation.
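To make that concrete: stating the theory takes two short equations, while applying it to even a single orbit already means stepping the motion numerically. A minimal Python sketch (the units with G = M = 1, the semi-implicit Euler integrator, and the step size are all arbitrary choices of mine):

```python
import math

def step(pos, vel, G=1.0, M=1.0, dt=1e-3):
    """One semi-implicit Euler step for a test body around a mass at the origin.
    This is F = ma with F = G*M*m/r^2 pointed at the origin (m cancels)."""
    x, y = pos
    r = math.hypot(x, y)
    ax, ay = -G * M * x / r ** 3, -G * M * y / r ** 3
    vx, vy = vel[0] + ax * dt, vel[1] + ay * dt
    return (x + vx * dt, y + vy * dt), (vx, vy)

# A circular orbit: r = 1 and v = sqrt(G*M/r) = 1.
pos, vel = (1.0, 0.0), (0.0, 1.0)
for _ in range(10_000):            # integrate roughly 1.6 orbital periods
    pos, vel = step(pos, vel)
assert abs(math.hypot(*pos) - 1.0) < 0.01   # the body stays on the unit circle
```

Two lines of theory, ten thousand update steps to apply it: that is the distinction being drawn.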

You can't simply ignore the unusual case of chaotic motion, because the 
mathematical *reason* the system doesn't have a closed analytic solution is 
that chaos is possible.

 In fact, your example is beautiful, in a way.  So it turns out to be 
 necessary to resort to approximate methods, to simulations, in order to 
 deal with the MINUSCULE amout of nonlinearity/tangledness that exist in 
 the interactions of the atoms in a small molecule?  Well, whoop-dee-do!! 

Think again, Hammurabi. DFT is a quantum method that searches a space of 
linear combinations of basis functions to find a description of the electron 
density field in a molecular system. In other words, the charge of each 
electron is smeared over space in a pattern that has to satisfy Schrödinger's 
equation and also be at equilibrium with the force exerted on it by the 
charge distributions of each other electron. It's approximately like solving 
the Navier-Stokes equation for each of N different fluid flow problems 
simultaneously, under the constraint that each volume experienced a pressure 
field that was a function of the solution of every other one.

Given the solution to that system, you're in a position to evaluate the force 
on each nucleus, whereupon you can either take it one iteration of a 
molecular dynamics simulation, or one step of a conjugate gradients energy 
minimization -- and start out all over again with the electrons, which will 
have shifted, sometimes radically, due to the different forces from the 
nuclei.
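The nested structure described here — relax an inner field to self-consistency, then take one outer step and start over — can be caricatured in a few lines. This is a toy scalar fixed-point iteration, nothing like a real DFT code; every name and number in it is made up:

```python
def self_consistent(f, x0, tol=1e-10, max_iter=1000):
    """Iterate x <- f(x) to a fixed point: the 'inner' self-consistency loop."""
    x = x0
    for _ in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("inner loop did not converge")

# Outer loop: each 'geometry' g defines a fresh inner problem, and the
# converged inner solution then nudges g -- a caricature of relaxing the
# electrons to self-consistency before each nuclear step.
g = 2.0
for _ in range(5):
    field = self_consistent(lambda x: 0.5 * (x + g / x), g)   # -> sqrt(g)
    g = 0.9 * g + 0.1 * field

assert 1.0 < field < g < 2.0
```

Even in this caricature the cost structure is visible: every outer step pays for a whole inner convergence loop.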

Allow me to quote:

What you said above was pure, unalloyed bullshit:  an exquisite cocktail 
of complete technical ignorance, patronizing insults and breathtaking 
arrogance.

You did not understand word one...


Josh


Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Richard Loosemore

William Pearson wrote:

On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:

We have good reason to believe, after studying systems like GoL, that
even if there exists a compact theory that would let us predict the
patterns from the rules (equivalent to predicting planetary dynamics
given the inverse square law of gravitation), such a theory is going to
be so hard to discover that we may as well give up and say that it is a
waste of time trying.  Heck, maybe it does exist, but that's not the
point:  the point is that there appears to be little practical chance of
finding it.



A few theories. All states which do not have three live cells adjacent,
will become cyclic with a cycle length of 0. Or won't be cyclic if you
reject cycle lengths of 0. Similarly all patterns consisting of one or
more groups of three live cells in a row inside an otherwise empty 7x7
box will have a stable cycle.

Will there be a general theory? Nope, You can see that from GoL being
Turing complete.

^^

Sorry, Will, but this not correct, and I explained the entire reason 
just yesterday, in a long and thorough post that was the beginning of 
this thread.  Just out of interest, did you read that one?



 If you had a theory that could in general predict

what a set GoL pattern was going to do, you could rework it to tell if
a TM was going to halt.

My theories are mainly to illustrate what a science of GoL would look
like. Staying firmly in the comfort zone.


But I stated exactly what I meant by a theory.  You are not addressing 
that issue at all in what you just said.



Let me rework something you wrote earlier.

I want to use the class of TM as a nice-and-simple example of a system whose
overall behavior (in this case, whether the system will halt or not)
is impossible to
predict from a knowledge of the state transitions and initial state of the tape.


This re-wording of the text I wrote has absolutely no relationship to 
the original meaning of the words.  You would have proved just as much 
if you had substituted the terms bagel and cream cheese into my text.




Computer engineering has as much or as little complexity as the
engineer wants to deal with.


Sadly, a completely meaningless statement, if the word complexity is 
used in the sense of complex system.




They can stay in the comfort zone of
easily predictable systems, much like the one I illustrated exists for
GoL. Or they can walk on the wild side a bit. My postgrad degree was
done in a place which specialised in evolutionary computation (GA, GP
and LCS) where systems were mainly tested empirically. So perhaps my
view of what computer engineering is, is perhaps a little out of the
mainstream.

 Will Pearson




Richard Loosemore



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread William Pearson
On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
 William Pearson wrote:
  On 05/10/2007, Richard Loosemore [EMAIL PROTECTED] wrote:
  We have good reason to believe, after studying systems like GoL, that
  even if there exists a compact theory that would let us predict the
  patterns from the rules (equivalent to predicting planetary dynamics
  given the inverse square law of gravitation), such a theory is going to
  be so hard to discover that we may as well give up and say that it is a
  waste of time trying.  Heck, maybe it does exist, but that's not the
  point:  the point is that there appears to be little practical chance of
  finding it.
 
 
  A few theories. All states which do not have three live cells adjacent,
  will become cyclic with a cycle length of 0. Or won't be cyclic if you
  reject cycle lengths of 0. Similarly all patterns consisting of one or
  more groups of three live cells in a row inside an otherwise empty 7x7
  box will have a stable cycle.
 
  Will there be a general theory? Nope, You can see that from GoL being
  Turing complete.
 ^^

 Sorry, Will, but this not correct, and I explained the entire reason
 just yesterday, in a long and thorough post that was the beginning of
 this thread.  Just out of interest, did you read that one?

Yup, and my argument is still valid, if this is the one you are
referring to. You said:

Now, finally:  if you choose the initial state of a GoL system very,
VERY carefully, it is possible to make a Turing machine.  So, in the
infinite set of GoL systems, a very small fraction of that set can be
made to implement a Turing machine.

But what does this have to do with explaining the existence of patterns
in the set of ALL POSSIBLE GoL systems??  So what if a few of those GoL
instances have a peculiar property?  bearing in mind the definition of
complexity I have stated above, how would it affect our attempts to
account for patterns that exist across the entire set?

You are asking about the whole space; my argument, admittedly, was to do
with a subspace. But any theory about the whole space must be valid
on all the subspaces it contains. All we need to do is find a single
state whose evolution we can prove we cannot predict, in order to say
we will never be able to find a theory for all states.

If it was possible to find a theory, by your definition, then we could
use that theory to predict the admittedly small set of states that
were TMs.

I might reply to the rest if I think we will get anywhere from it.

 Will



Re: [agi] Religion-free technical content

2007-10-05 Thread Matt Mahoney

--- Mike Dougherty [EMAIL PROTECTED] wrote:

 On 10/5/07, Mark Waser [EMAIL PROTECTED] wrote:
   Then I guess we are in perfect agreement.  Friendliness is what the
   average
   person would do.
 
  Which one of the words in And not my proposal wasn't clear?  As far as I
  am concerned, friendliness is emphatically not what the average person
 would
  do.
 
 Yeah - Computers already do what the average person would:  wait
 expectantly to be told exactly what to do and how to behave.  I guess
 it's a question of how cynically we define the average person.

Now you all know damn well what I was trying to say.  I thought only computers
were supposed to have this problem.


-- Matt Mahoney, [EMAIL PROTECTED]



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Mike Dougherty
On 10/5/07, Richard Loosemore [EMAIL PROTECTED] wrote:
 My stock example:  planetary motion.  Newton (actually Tycho Brahe,
 Kepler, et al) observed some global behavior in this system:  the orbits
 are elliptical and motion follows Kepler's other laws.  This corresponds
 to someone seeing Game of Life for the first time, without knowing how
 it works, and observing that the motion is not purely random, but seems
 to have some regular patterns in it.

 Having noticed the global regularities, the next step, for Newton, was
 to try to find a compact explanation for them.  He was looking for the
 underlying rules, the low-level mechanisms.  He eventually realised (a
 long story of course!) that an inverse square law of gravitation would
 predict all of the behavior of these planets.  This corresponds to a
 hypothetical case in which a person seeing those Game of Life patterns
 would somehow deduce that the rules that must be giving rise to the
 patterns are the particular rules that appear in GoL.  And, to be
 convincing, they would have to prove that the rules gave rise to the
 behavior.

with GoL you started with the rules and try to predict the behavior.
with planetary motion you observe the behavior and try to discover the rules.

Consider the observation of an oscillating spring or a bouncing ball.
There is an exact function to determine the high-school physics
version of these events.  Of course they always account for in a
frictionless vacuum or some other means of eliminating the damping
effects of the environment.  Is the basic function to compute the
trajectory of a launch sufficient to know where the shell will land?
On a windless day, probably.  In a stiff breeze, there may be
otherwise inexplicable behaviors.  Eliminating retrograde orbits
required a fundamental shift in perspective (literally changing the
center of the universe).
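The windless-day point can be made concrete: the vacuum trajectory has a closed-form range, while adding even linear drag already pushes you to numerical stepping. A Python sketch (the drag coefficient k and the step size are arbitrary choices of mine):

```python
import math

def range_vacuum(v, angle_deg, g=9.81):
    """Closed-form range of the high-school projectile (no air, flat ground)."""
    a = math.radians(angle_deg)
    return v ** 2 * math.sin(2 * a) / g

def range_with_drag(v, angle_deg, k=0.05, g=9.81, dt=1e-4):
    """Same launch, but with linear drag (accel -= k * velocity): no elementary
    closed form, so just step the motion until the shell comes back down."""
    a = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v * math.cos(a), v * math.sin(a)
    while y >= 0.0:
        vx -= k * vx * dt
        vy -= (g + k * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

v, ang = 30.0, 45.0
assert range_with_drag(v, ang) < range_vacuum(v, ang)   # the ideal always overshoots
```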

If there were a million-line CA world:  So it's a million lines, it'll
take more time but it's the same class of problem, no?  Or are we
talking about rules where one cell can modify its own rules?  Isn't
that the crux of the RSI argument?  Imagine a GoL cell that
spontaneously gains the power to not die of loneliness until the round
after it's isolated.  Suppose also that this cell is able to confer
this ability to any cells that it spawns.  The GoL universe is
fundamentally changed.  Does the single evolved cell have to know the
other rules to add this one?  Have you ever played the drinking game
'asshole' ?  If the game goes on long enough, I doubt anyone can track
all of the rules :)  I digress.

Like those classic physics problems, we don't really need to have the
ideally compact formula to have a usefully working rule.  I think the
real intelligence is getting work done without a complete formula.
Otherwise it would be equivalent to our current computation- nobody is
getting excited about the bubblesort algorithm today.  I guess another
level of intelligence would be the leap from bubblesort to a recursive
method because of its better O() efficiency.
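For what it's worth, that leap fits in a few lines; the implementations below are generic textbook versions, not anything specific to this discussion:

```python
import random

def bubblesort(xs):
    """O(n^2): repeatedly swap adjacent out-of-order pairs."""
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def mergesort(xs):
    """O(n log n): recursively split, then merge the sorted halves."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = mergesort(xs[:mid]), mergesort(xs[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [random.randint(0, 999) for _ in range(200)]
assert bubblesort(data) == mergesort(data) == sorted(data)
```

Same inputs, same outputs; the intelligence is entirely in the restructuring.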

.. gotta stop here because there's too much distraction around me to
think clearly.



Re: [agi] Religion-free technical content

2007-10-05 Thread Linas Vepstas
On Tue, Oct 02, 2007 at 03:03:35PM -0400, Mark Waser wrote:
 Do you really think you can show an example of a true moral universal?
 
 Thou shalt not destroy the universe.
 Thou shalt not kill every living and/or sentient being including yourself.
 Thou shalt not kill every living and/or sentient except yourself.

What if you discover a sub-stratum alternate-universe thingy that
you believe will be better, but it requires the destruction of this
universe to create? What if you discover that there is a god, and
that this universe is a kind of cancer or illness in god?

(Disclaimer: I did not come up with this; it's from some sci-fi 
book I read as a teen.)

Whoops.



Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-05 Thread Linas Vepstas
On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote:
 
 As to exactly how, I don't know, but since the AGI is, by assumption, 
 peaceful, friendly and non-violent, it will do it in a peaceful, 
 friendly and non-violent manner.

I like to think of myself as peaceful and non-violent, but others
have occasionally challenged my self-image.

I have also known folks who are physically non-violent, and yet are
emotionally controlling monsters.

For the most part, modern western culture espouses and hews to 
physical non-violence. However, modern right-leaning pure capitalism
advocates not only social Darwinism, but also the economic equivalent
of rape and murder -- a jungle ethic where only the fittest survive,
while thousands can lose jobs, income, housing, etc., thanks to the
natural forces of capitalism.

So.. will a friendly AI also be a radical left-wing economic socialist ??

--linas



Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-05 Thread a

Linas Vepstas wrote:

On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote:
  
As to exactly how, I don't know, but since the AGI is, by assumption, 
peaceful, friendly and non-violent, it will do it in a peaceful, 
friendly and non-violent manner.



I like to think of myself as peaceful and non-violent, but others
have occasionally challenged my self-image.

I have also known folks who are physically non-violent, and yet are
emotionally controlling monsters.

For the most part, modern western culture espouses and hews to 
physical non-violence. However, modern right-leaning pure capitalism

advocates not only social Darwinism, but also the economic equivalent
of rape and murder -- a jungle ethic where only the fittest survive,
while thousands can lose jobs, income, housing, etc., thanks to the
natural forces of capitalism.
  
This, anyway, is a common misunderstanding of capitalism.  I suggest you 
read more about economic libertarianism.

So.. will a friendly AI also be a radical left-wing economic socialist ??
  
Yes, if you define it to be. A friendly AI would get the best of both 
utopian socialism and capitalism: the anti-coercive nature 
of capitalism and the utopia of utopian socialism.




Re: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-05 Thread Linas Vepstas
On Wed, Oct 03, 2007 at 08:39:18PM -0400, Edward W. Porter wrote:
 the
IQ bell curve is not going down.  The evidence is it's going up.  

So that's why us old folks 'r gettin' stupider as compared to 
them's young'uns.

--linas



Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]

2007-10-05 Thread Linas Vepstas
OK, this is very off-topic. Sorry.

On Fri, Oct 05, 2007 at 06:36:34PM -0400, a wrote:
 Linas Vepstas wrote:
 For the most part, modern western culture espouses and hews to 
 physical non-violence. However, modern right-leaning pure capitalism
 advocates not only social Darwinism, but also the economic equivalent
 of rape and murder -- a jungle ethic where only the fittest survive,
 while thousands can lose jobs, income, housing, etc., thanks to the
 natural forces of capitalism.
   
 This, anyway, is a common misunderstanding of capitalism.  I suggest you 
 read more about economic libertarianism.

My objection to economic libertarianism is its lack of discussion of
self-organized criticality.  A common example of self-organized
criticality is a sand-pile at the critical point.  Adding one grain
of sand can trigger an avalanche, which can be small, or maybe
(unboundedly) large. Despite avalanches, a sand-pile will maintain its 
critical shape (a cone at some angle).
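The sand-pile model mentioned here is easy to simulate. A sketch of the standard Abelian sandpile on a small grid (the grid size, drop site, and grain count are arbitrary choices of mine; "avalanche size" counts topplings, and grains falling off the edge are lost):

```python
def add_grain(grid, n, x, y):
    """Drop one grain at (x, y) and topple until stable; return avalanche size."""
    grid[x][y] += 1
    toppled = 0
    unstable = [(x, y)] if grid[x][y] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue
        grid[i][j] -= 4            # topple: send one grain to each neighbour
        toppled += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n:   # grains at the edge fall off
                grid[ni][nj] += 1
                if grid[ni][nj] >= 4:
                    unstable.append((ni, nj))
    return toppled

n = 20
grid = [[0] * n for _ in range(n)]
sizes = [add_grain(grid, n, n // 2, n // 2) for _ in range(5000)]
# Early drops cause no avalanche at all; once the pile is critical,
# a single extra grain occasionally cascades across much of the grid.
assert sizes.count(0) > 0 and max(sizes) > 10
```

The wide spread of `sizes` is exactly the point being made: the same one-grain perturbation yields anything from no response to a system-spanning avalanche.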

The concern is that a self-organized economy is almost by definition 
always operating at the critical point, sloughing off excess production,
encouraging new demand, etc. Small or even medium-sized re-organizations
of the economy are good for it: it maintains the economy at its critical
shape, its free-market-optimal shape. Nothing wrong with that free-market
optimal shape, most everyone agrees.

The issue is that there's no safety net protecting against avalanches 
of unbounded size. The other issue is that its not grains of sand, its
people.  My bank-account and my brains can insulate me from small shocks.
I'd like to have protection against the bigger forces that can wipe me 
out.

--linas



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Linas Vepstas
On Thu, Oct 04, 2007 at 11:06:11AM -0400, Richard Loosemore wrote:
 
 In case anyone else wonders about the same question, I will explain why 
 the Turing machine equivalence has no relevance at all.

Re-read what you wrote, substituting the phrase Turing machine for
each and every occurrence of the phrase GoL.  The semantics of the 
resulting text is unchanged, and states nothing particularly unique 
or original that isn't already (well-)known about Turing machines.

You can even substitute finite state machine or pushdown automaton
at every point, and your argument would still be unchanged (although
the result would not actually be Turing complete). That's because
some finite automata are boring (cyclic in trivial ways), and some 
are interesting (generating potentially large and complex patterns).
Most randomly generated finite automata will be simple, i.e. boring,
and some will exhibit surprisingly complex behaviours.  

To be abstract, you could substitute semi-Thue system, context-free
grammar, first-order logic, Lindenmayer system, history monoid,
etc. for GoL, and still get an equivalent argument about complexity 
and predictability.  Singling out GoL as somehow special is a red 
herring; the complexity properties you describe are shared by a variety 
of systems and logics.

--linas



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Linas Vepstas
On Fri, Oct 05, 2007 at 01:39:51PM -0400, J Storrs Hall, PhD wrote:
 On Friday 05 October 2007 12:13:32 pm, Richard Loosemore wrote:
 
  Try walking into any physics department in the world and saying Is it 
  okay if most theories are so complicated that they dwarf the size and 
  complexity of the system that they purport to explain?
 
 You're conflating a theory and the mathematical mechanism necessary to apply 
 it to actual situations. The theory in Newtonian physics can be specified as 
 the equations F=ma and F=Gm1m2/r^2 (in vector form); but applying them 
 requires a substantial amount of calculation.
 
 You can't simply ignore the unusual case of chaotic motion, because the 
 mathematical *reason* the system doesn't have a closed analytic solution is 
 that chaos is possible.

To amplify: the rules for GoL are simple. Finding what they imply
is not. The rules for gravity are simple. Finding what they imply
is not.

If I have a bunch of widely-separated GoL gliders flying along, then the 
analytic theory for explaining them is near-trivial: they glide along
in straight lines.  Kind-a like Newtonian linear motion.  Ergo, I can
deduce that a very common case has an analytically-trivial solution.

For the few times that gliders might collide, well, that's more
complicated. But this is a corner-case, it's infrequent. Like collisions
between planets, it can be handled as a special case. I mean, heck, 
there are only so many different ways a pair of gliders can collide, and 
essentially all of the collisions are fatal to both gliders. So, by this 
reasoning, GoL must be a low-complexity system. 
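That near-trivial glider theory is easy to state and check: a glider translates itself one cell diagonally every four generations. A Python sketch (the life_step helper and the coordinate convention are my own):

```python
from itertools import product

def life_step(cells):
    """One GoL generation over a frozenset of live (x, y) coordinates."""
    counts = {}
    for (x, y) in cells:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                key = (x + dx, y + dy)
                counts[key] = counts.get(key, 0) + 1
    return frozenset(c for c, n in counts.items()
                     if n == 3 or (n == 2 and c in cells))

# The standard glider:
#   .O.
#   ..O
#   OOO
glider = frozenset({(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)})
state = glider
for _ in range(4):
    state = life_step(state)
# After 4 generations the glider has translated one cell diagonally.
assert state == frozenset((x + 1, y + 1) for (x, y) in glider)
```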

Compare this example to, for example, taking millions randomly-sized
gravitating bodies, and jamming them into a small volume, so that
they're very close to one-another (i.e. hot).  Now, the laws of 
gravitational motion are simple.  Predicting what will happen is not.

If, instead of using the solar system as an example, you used a globular
cluster, and if, instead of using a high-density starting position for
GoL, you started GoL with one sun and nine planet-gliders zooming
around, you could turn the argument on its head. Predicting gliders
is trivially easy; predicting globular clusters is barely computationally
tractable, and is complex. I think it's even proven Turing-complete,
up to a rather subtle and controversial argument about grazing collisions,
but perhaps I misunderstood.

--linas



[agi] Schemata

2007-10-05 Thread J Storrs Hall, PhD
On Thursday 04 October 2007 05:19:29 pm, Edward W. Porter wrote:

 I have no idea how new the idea is.  When Schank was talking about 
scripts ...

From the MIT Encyclopedia of the Cognitive Sciences (p729):

Schemata are the psychological constructs that are postulated to account for 
the molar forms of human generic knowledge. The term *frames*, as introduced 
by Marvin Minsky (1975) is essentially synonymous, except that Minsky used 
frame as both a psychological construct and a construct in artificial 
intelligence. *Scripts* are the subclass of schemata that are used to account 
for generic (stereotyped) sequences of actions (Schank and Abelson 1977).

Read on to find that Minsky, having read the work of the 1930s British 
psychologist Bartlett, which had languished in obscurity in the meantime, 
did reintroduce the concept to cog sci in the mid-70s with his frame paper.

Josh


Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Mike Dougherty
On 10/5/07, Linas Vepstas [EMAIL PROTECTED] wrote:
 To be abstract, you could substitute semi-Thue system, context-free
 grammar, first-order logic, Lindenmayer system, history monoid,
 etc. for GoL, and still get an equivalent argument about complexity
 and predictability.  Singling out GoL as somehow special is a red
 herring; the complexity properties you describe are shared by a variety
 of systems and logics.

So you are agreeing with Richard using confrontational language?

Richard's point to me earlier was exactly this issue about GoL.
Perhaps this was because I bit down hard on some extremely simple
case with which I have had some experience (unlike many of the lengthy
graduate papers discussed here).  You could equally substitute
gibberish words for GoL and 'get an equivalent argument' because the
discussion is about the properties of the entire class rather than a
specific instance.



Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Andrew Babian
Honestly, it seems to me pretty clear that whatever Richard's thing is with
complexity being the secret sauce for intelligence, and therefore everyone
having it wrong, is just foolishness.  I've quit paying him any mind.  Everyone
has his own foolishness.  We just wait for the demos.



RE: [agi] Religion-free technical content breaking the small hardware mindset

2007-10-05 Thread Edward W. Porter
It's also because the average person loses 10 points in IQ between the mid
twenties and mid forties, and another ten points between the mid forties and
sixty.  (Help! I'm 59.)  

But this is just the average.  Some people hang on to their marbles as
they age better than others.  And knowledge gained with age can, to some
extent, compensate for less raw computational power.  

The book in which I read this said they age-norm IQ tests (presumably to
keep from offending the people older than mid-forties who presumably
largely control most of society's institutions, including the purchase of
IQ tests).

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-Original Message-
From: Linas Vepstas [mailto:[EMAIL PROTECTED] 
Sent: Friday, October 05, 2007 7:31 PM
To: agi@v2.listbox.com
Subject: Re: [agi] Religion-free technical content  breaking the small
hardware mindset


On Wed, Oct 03, 2007 at 08:39:18PM -0400, Edward W. Porter wrote:
 the
 IQ bell curve is not going down.  The evidence is it's going up.

So that's why us old folks 'r gettin' stupider as compared to 
them's young'uns.

--linas




[agi] Do the inference rules of categorical logic make sense?

2007-10-05 Thread Edward W. Porter

I am trying to understand categorical logic from reading Pei Wang’s very
interesting paper, “A Logic of Categorization.”  Since I am a total
newbie to the field, I have some probably dumb questions.  But at the risk
of making a fool of myself, let me ask them to members of the list.

Let’s use “--” as the arrow symbol commonly used to represent an
inheritance relation of the type used in categorical logic, where A -- B
roughly means category A is a species (or instance) of category B.
Category B, in addition to being what we might normally think of as a
generalization, can also be a property (meaning B’s category would be that
of concepts having property B).

I understand how the deduction inference rule works.

DEDUCTION INFERENCE RULE:
 Given S -- M and M -- P, this implies S -- P

This makes total sense.  If S is a type of M, and M is a type of P, S is a
type of P.

But I don’t understand the rules for induction and abduction which are as
following:

ABDUCTION INFERENCE RULE:
 Given S -- M and P -- M, this implies S -- P to some degree

INDUCTION INFERENCE RULE:
 Given M -- S and M -- P, this implies S -- P to some degree

The problem I have is that in both the abduction and induction rules --
unlike in the deduction rule -- the roles of S and P appear to be
semantically identical, i.e., they could be switched in the two premises
with no apparent change in meaning, and yet in the conclusion switching S
and P would change the meaning.  Thus, it appears that from premises which
make no distinction between S and P, a conclusion is drawn that
does make such a distinction.  At least to me, with my current limited
knowledge of the subject, this seems illogical.
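
The symmetry can be made concrete with a toy sketch. Under a purely
extensional reading (my own illustration, not Wang’s actual NAL
truth-value functions), each shared member of the two extensions counts as
positive evidence for S -- P, while each member of ext(S) outside ext(P)
counts as negative evidence. Induction then gives S -- P and P -- S the
same positive support but, in general, different negative support:

```python
# Toy extensional evidence counting for an inheritance conclusion S -- P.
# (Hypothetical illustration only; NAL's real truth-value functions also
# weigh intensional evidence and map counts to frequency/confidence.)

def evidence(ext_S, ext_P):
    """Return (positive, negative) evidence for the conclusion S -- P."""
    pos = len(ext_S & ext_P)   # members of S that are also in P
    neg = len(ext_S - ext_P)   # members of S that are not in P
    return pos, neg

swan = {"swan1", "swan2", "swan3"}                # ext(swan)
white = {"swan1", "swan2", "snowball", "cloud"}   # ext(white)

print(evidence(swan, white))   # (2, 1): swan3 counts against swan -- white
print(evidence(white, swan))   # (2, 2): snowball, cloud count against white -- swan
```

The shared swans support both directions equally, but the counter-evidence
differs, so the two conclusions end up with different truth values.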

It would appear to me that both the Abduction and Induction inference
rules should imply each of the following, each with some degree of
evidentiary value
 S -- P
 P -- S,  and
 S &lt;-&gt; P, where “&lt;-&gt;” represents a similarity relation.

Since these rules have been around for years I assume the rules are right
and my understanding is wrong.

I would appreciate it if someone on the list with more knowledge of the
subject than I could point out my presumed error.

Edward W. Porter
Porter  Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]


Re: [agi] Conway's Game of Life and Turing machine equivalence

2007-10-05 Thread Jean-paul Van Belle
All interesting (and complex!) phenomena happen at the edges/fringe. Boundary 
conditions seem to be a requisite for complexity. Life originated on a planet 
(10E-10 of space), on its surface (10E-10 of its volume). 99.99+% of the 
fractal curve area is boring, it's just the edges of a very small area that's 
particularly interesting. 99.99% of life is not intelligent. 99.9% of 
possible computer programs are completely uninteresting. Hence 99.99% of 
glider configurations will be completely uninteresting and utterly boring. Most 
of Wolfram's rules produce boring, predictable patterns too.
=Jean-Paul
-- 



 On 2007/10/06 at 02:52, in message [EMAIL PROTECTED],
Linas Vepstas [EMAIL PROTECTED] wrote:
 For the few times that gliders might collide, well, that's more
 complicated. But this is a corner-case, it's infrequent. Like collisions
 between planets, it can be handled as a special case. I mean, heck, 
 there's only so many different ways a pair of glider can collide, and 
 essentialy all of the collisions are fatal to both gliders. So, by this 
 reasoning, GoL must be a low-complexity system. 

