Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-23 Thread James Ratcliff


David Butler [EMAIL PROTECTED] wrote:
Would two AGI's with the same initial learning program, same hardware in a 
controlled environment (same access to a specific learning base - something 
like an encyclopedia) learn at different rates and excel in different tasks?

How would an AGI choose which things to learn first if given enough data that 
it would have to make a choice? If two AGI's (again: same hardware, learning 
programs and controlled environment) were given the same data, would they make 
different choices?

Yes, any two exact copies of an AGI would learn at different rates and learn 
different things; that divergence is one of the most basic behaviors that can 
be programmed into an AGI.

To make any decision, an AGI takes all pertinent information, evaluates it 
over its choices, and picks the choice with the highest value. When several 
choices have approximately the same value, it needs a tie-breaker. If the 
machines always pick the first choice, then they will always act the same 
given the same input. But a simple random number generator can tell the AGI, 
"OK, now I want to read encyclopedia A or Z" - the first may choose A, the 
second Z, and they diverge from there.

This simple concept is very useful for making an AGI explore an alternative 
choice to the one it is given, perhaps allowing it to choose a separate path 
to a goal that provides it with unique or new information, or to produce a 
creative answer to a problem.
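
In code, the value-plus-tie-breaker rule above might look something like this 
minimal sketch (the decide function, its value scoring callback and the 
epsilon tolerance are hypothetical illustrations, not any specific AGI design):

```python
import random

def decide(choices, value, epsilon=1e-6, rng=random):
    """Pick the highest-value choice; break near-ties at random.

    `choices` is a list of options and `value` maps a choice to a score;
    both are placeholders -- any scoring scheme would work here.
    """
    scored = [(value(c), c) for c in choices]
    best = max(score for score, _ in scored)
    # Any choice within epsilon of the best score counts as tied.
    tied = [c for score, c in scored if best - score <= epsilon]
    return rng.choice(tied)

# Two identical copies, differently seeded, can diverge on a tie:
a = decide(["A", "Z"], lambda c: 1.0, rng=random.Random(1))
b = decide(["A", "Z"], lambda c: 1.0, rng=random.Random(4))
```

With a clear winner the random source is never consulted, so both copies 
still agree whenever the values actually differ.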

I'm not sure which game it is from, but a VR AI program was given the problem 
of attacking someone as they walked outside a house, and normally it was 
expected to go out the door and attack. On one iteration, though, it dove and 
rolled through the window and attacked instead. This was an unexpected and 
creative solution to the problem at hand.

James Ratcliff


___
James Ratcliff - http://falazar.com
Looking for something...
   

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=88965806-34e98c

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Robert Wensman
Mike Tintner,

If you really do not think that digital computers can be creative, by
definition, I do not understand why you would want to join a mailing list
of AGI researchers. Computers operate by running software; thus, they need
to be programmed. It seems to me that you do not understand what the word
"program" means. Even if you use a computer that does not need to be
loaded with a program, guess what: such a computer could be considered to
have an initial program.

The very determinism of the universe implies that everything runs
according to a program, including your ramblings here about creativity. I
have to ask you: do you think the universe and everything in it runs
according to deterministic laws of nature? Do you accept that you are a
part of this deterministic reality? Well, in that case I've got news for you:
you are a program too! As evidence I would present your DNA, a program
encoded and stored in molecular structures.

Have you ever heard of computational equivalence? Do you know what it means?

Also, I feel annoyed that you compare the Novamente architecture with
something that just takes instructions, like "do this, do that, then do
this", etc. It seems you need to spend greater effort studying this
architecture, for example by reading The Hidden Pattern.

I feel you are in great need of widening your mind to understand chaotic or
fractal processes. Take a forest, for example: even in all its complexity and
diversity, it is still governed by very simple and basic laws, namely the
laws of nature. By mimicking some of these laws at an appropriate level,
such as the shape level, programmers can create forests that to a very large
extent look like real forests: http://www.speedtree.com/. A generator such
as SpeedTree can generate entire forests of miles and miles of trees, with
no two trees looking the same. Even though the lines of code producing the
trees are pretty simple, the outcome in creativity and originality is vast.
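
The point can be illustrated with a toy procedural generator (a minimal
sketch under simple assumed rules, not SpeedTree's actual algorithm): a few
fixed "laws" plus a seed yield a different tree every time.

```python
import random

def grow(depth, rng):
    """Generate a tree as nested (branch_length, [children]) tuples.

    The fixed rules -- branch count, length range, recursion depth --
    play the role of the 'laws of nature'; the seed alone is what
    makes every generated tree unique.
    """
    if depth == 0:
        return (rng.uniform(0.5, 1.0), [])
    children = [grow(depth - 1, rng) for _ in range(rng.randint(2, 3))]
    return (rng.uniform(0.5, 1.0), children)

def forest(n, depth=4):
    # Same rules, different seeds: no two trees come out alike.
    return [grow(depth, random.Random(seed)) for seed in range(n)]

trees = forest(100)
```

The generator code is a dozen lines, yet the space of distinct trees it can 
produce is astronomically large - which is the sense in which simple programs 
can have vast, original-looking output.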

The same thing applies to a human mind. Even though the output of a human
mind is amazingly diverse and creative, its program is still governed by
the basic laws of nature, and by the DNA program. What AGI designers try to
do is mimic this process.

The concepts of "program" and "determinism" are pretty well established within
the scientific community; please do not try to redefine them as you do. It
just creates a lot of confusion. I think the concept you really want is
adaptability, or maybe you could say you want an AGI system that
is *programmed in an indirect way* (meaning that the program instructions
are very far from what the system actually does). But please do not say
things like "we should write AGI systems that are not programmed". It hurts
my ears/eyes.

/Robert Wensman



2008/1/7, Mike Tintner [EMAIL PROTECTED]:

 Well we (Penrose  co) are all headed in roughly the same direction, but
 we're taking different routes.

 If you really want the discussion to continue, I think you have to put out
 something of your own approach here to spontaneous creativity (your
 terms)
 as requested.

 Yes, I still see the mind as following instructions a la briefing, but
 only odd ones, not a whole rigid set of them a la programs. And the
 instructions are open-ended and non-deterministically open to
 interpretation, just as my briefing/instruction to you - "Ben, go and get me
 something nice for supper" - is. Oh, and the instructions that drive us,
 i.e. emotions, are always conflicting, e.g. [Ben:] "I might like to... but do
 I really want to get that bastard anything for supper? Or have the time to,
 when I am on the very verge of creating my stupendous AGI?"

 Listen, I can go on and on - the big initial deal is the claim that the
 mind
 isn't -  no successful AGI can be - driven by a program, or thoroughgoing
 SERIES/SET of instructions - if it is to solve even minimal general
 adaptive, let alone hard creative problems. No structured approach will
 work
 for an ill-structured problem.

 You must give some indication of how you think a program CAN be generally
 adaptive/ creative - or, I would argue, squares (programs are so square,
 man) can be circled :).

  Mike,
 
  The short answer is that I don't believe that computer *programs* can
 be
  creative in the hard sense, because they presuppose a line of enquiry,
 a
  predetermined approach to a problem -
  ...
  But I see no reason why computers couldn't be briefed rather than
  programmed, and freely associate across domains rather than working
 along
  predetermined lines.
 
  But the computer that is being briefed is still running some software
  program,
  hence is still programmed -- and its responses are still determined by
  that program (in conjunction w/ the environment, which however it
  perceives
  only thru a digital bit stream)
 
  I don't however believe that purely *digital* computers are capable of
  all
  the literally imaginative powers (as already 

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread William Pearson
On 07/01/2008, Robert Wensman [EMAIL PROTECTED] wrote:
 I think what you really want to use is the
 concept of adaptability, or maybe you could say you want an AGI system that
 is programmed in an indirect way (meaning that the program instructions are
 very far away from what the system actually does). But please do not say
 things like we should write AGI systems that are not programmed. It hurts
 my ears/eyes.

 /Robert Wensman


I'd agree that Mike could do with tightening up his language. I wonder
if he would agree with the following?

The programs that determine the way the system acts and changes are not
highly related to the programming provided by the AI designer.

Computer systems like this have been designed. All desktop computers
can act, solve problems and change their programming (apt etc.) in
ways unenvisaged by the people who designed the hardware and BIOS.

This approach still allows the programs the AI designer provided to
have influence over *which* programs exist in the system, if not
exactly how they work. This is what would make it different from
current computer systems.

 Will Pearson



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Benjamin Goertzel
On Jan 7, 2008 9:12 AM, Mike Tintner [EMAIL PROTECTED] wrote:


 Robert,

 Look, the basic reality is that computers have NOT yet been creative in any
 significant way, and have NOT yet achieved AGI - general intelligence, - or
 indeed any significant rulebreaking adaptivity; (If you disagree, please
 provide examples. Ben keeps claiming/implying he's solved them or made
 significant advances, but when pressed never provides any indication of
 how).

We all agree that AGI is not yet achieved.

Space travel to Proxima Centauri is also not yet achieved, nor is human
cloning ... there is a big difference in science between

-- not yet achieved, but seems possible based on available knowledge

and

-- doesn't seem possible based on available knowledge

 If you are truly serious about solving these problems, I suggest, you should
 prepared to be hurt - you should be ready to consider truly radical ideas
 - for the ground on which you stand to be questioned - and be seriously
 shaken up. You should WELCOME any and all of your assumptions being
 questioned. Even if, let's say, what I or someone else suggests is in the
 end nutty, drastic ideas are good for you to contemplate at least for a
 while.

Most of us on this list are already aware of the possibility that it
is not possible
to achieve high levels of intelligence using digital computer programs, given
realistic space and time constraints.

It is scientifically possible that Penrose is right, and to achieve human-like
levels of intelligence in a machine, one needs to use a machine making use
of weird, as yet poorly understood quantum gravity effects.

However, at present, that Penrose-ean hypothesis does not seem that likely
to most of us on this list; and given the current state of science, it's not a
hypothesis that we really can explore in detail.  Quantum gravity is
in a confused
state and quantum computing (let alone quantum gravity computing) is
in its infancy.

There is also always the possibility that the whole modern scientific world-view
is deeply flawed in a way that is relevant to AGI.  Maybe digital computers are
unable to lead to human-level AI, for some reason totally unrelated to
computability
theory and quantum gravity and all that.  There is plenty in the world
that we don't
understand -- I recommend Damien Broderick's recent and excellent book
"Outside the Gates of Science" for anyone who doesn't agree.

But, this list is devoted to exploring the hypothesis that AGI **can**
be achieved
via creating intelligent machines -- and mainly, at the moment, to the
hypothesis that
it can be achieved via creating intelligent digital computer programs.

We realize this hypothesis may be wrong, but it seems likely enough to
us to merit
a lot of attention and effort aimed at validation.

Your supposed arguments against the hypothesis are nowhere near as original
as you seem to think, and nearly everyone on this list has heard them before and
not found them convincing.  I read What Computers Can't Do by Hubert Dreyfus
as a child in the 1970's and your diatribes don't seem to add anything to what
he said there.

If you think the whole digital-computer-AGI pursuit is a wrong
direction and a waste
of time, that's fine.  But why do you feel the need to keep repeatedly
informing us
of this fact?

For instance, I think string theory is probably wrong.  But I don't
see any point in
spending my time trolling on string theory email lists and harping on this point
repeatedly and confusingly.  Let them explore their hypothesis...

-- Ben G



Can Computers Be Creative? [WAS Re: [agi] A Simple Mathematical Test of Cog Sci.]

2008-01-07 Thread Richard Loosemore

Mike,

This discussion is just another repetition of a common fallacy, namely 
that computers cannot be creative (or flexible, adaptive, original, etc.) 
because they are programmed.


The fallacy can be illustrated by considering the following set of 
situations.


1) If I tell a child how to solve a calculus problem by giving them 
explicit steps to manipulate the symbols, they are not really doing the 
problem, they are just blindly doing what I am telling them to.  The 
child is just following a program written by me.


2) If I tell a child some general rules for solving calculus problems, 
but let the child figure out which rules map onto the particular problem 
at hand, then the child is now doing some work, but still they don't 
understand calculus.


3) If I tell the child some of the background behind the general rules 
for solving calculus problems, things start to become a little less 
clear.  If the child simply memorizes the rules and the background, and 
can recite them parrot fashion, do they actually understand?  Probably 
not.  Under those circumstances it might still be true to say that I am 
the one solving the problem, and the child is just following my program 
by rote.


4) If I explicitly teach a child all about mathematics (I am the math 
teacher), so they can see the linkages between all the different aspects 
of math that relate to calculus, and if the child now knows about 
calculus, then surely we would say that they understand and can 
creatively solve problems?


5) If I teach a child how to *learn* in a completely general way, and 
then give them a math book, and the child uses their learning skills to 
acquire a comprehensive knowledge of mathematics, including calculus, 
and if they do this so well that they understand the complete 
foundations of the field and can do research of their own, is it the 
case that the child is just following a program that I taught it 
(because I taught it everything about *how* to learn)?


The problem is that people who make the claim that computers are not 
creative see the relationship between programmers and computers as like 
situation (1) above, when in fact it is like (5).  For example, you say 
below:


 A *program* is a prior series or set of instructions that shapes and
 determines an agent's sequence of actions. A precise itinerary for a
 journey...

The crucial thing is that THERE ARE DIFFERENT KINDS OF PROGRAMS, and 
some programs are like (1) above.  But you are mistaking the fact that 
some are like (1) for the fact that all of them are.


It is completely false to assume that programs in general have that 
kind of simplistic relationship between [code] and [performance carried 
out by the code].


In particular, my type of AI (and Ben's, and others who are attempting 
full-blown AGI) is at least as complex as the type (5) above.  And for 
just the same reason that it would be false to say that a child that can 
do mathematics is just following the rules of their parents and 
kindergarten teacher (who arguably knew nothing about math, but who 
maybe did teach the child how to be a good learner), so it is completely 
false to say that a program is just a sequence of instructions that 
determines a computer's sequence of actions.  The program may simply 
determine how the computer goes about the process of learning about the 
world  while everything from there on out is not explicitly 
determined by the program, at all.


The relationship between program and actual performance can be 
*incredibly* subtle, and sensitive to enormous numbers of factors ... so 
many factors that, in practice, it is not possible to say exactly why 
the computer did a particular thing.  And when it gets to that level of 
complexity, a naive observer might say the computer is being creative. 
 Indeed it is being creative  in just the same way that a few 
billion neurons can also be creative.




Richard Loosemore



Mike Tintner wrote:

Ben,

Sounds like you may have missed the whole point of the test - though I 
mean no negative comment by that - it's all a question of communication.


A *program* is a prior series or set of instructions that shapes and 
determines an agent's sequence of actions. A precise itinerary for a 
journey. Even if the programmer doesn't have a full but only a very 
partial vision of that eventual sequence or itinerary.  (The agent of 
course can be either the human mind or a computer).


If the mind works by *free composition,* then it works v. differently - 
though this is an idea that has still to be fleshed out, and could take 
many forms. The first crucial difference is that there is NO PRIOR 
SERIES OR SET OF INSTRUCTIONS - saves a helluva lot on both space and 
programming work. Rather the mind works principally by free association 
- making up that sequence of actions/ journey AS IT GOES ALONG. So my 
very crude idea of this is you start, say, with a feeling of hunger, 
which = go get food.  And 

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Mike Tintner
Robert,

Look, the basic reality is that computers have NOT yet been creative in any 
significant way, and have NOT yet achieved AGI - general intelligence, - or 
indeed any significant rulebreaking adaptivity; (If you disagree, please 
provide examples. Ben keeps claiming/implying he's solved them or made 
significant advances, but when pressed never provides any indication of how).

These are completely unsolved problems. Major creative problems.

And I would suggest you have to be prepared for the solutions to be 
revolutionary and groundshaking.

If you are truly serious about solving these problems, I suggest, you should 
be prepared to be hurt - you should be ready to consider truly radical ideas - 
for the ground on which you stand to be questioned - and to be seriously shaken 
up. You should WELCOME any and all of your assumptions being questioned. Even 
if, let's say, what I or someone else suggests is in the end nutty, drastic 
ideas are good for you to contemplate at least for a while.

Having said all this, I accept that what I have been saying offends this 
community - I wasn't trying originally to push it; I got dragged into some of 
that last discussion by Ben. And I also accept that most of you are not 
interested in going for the revolutionary, from whatever source. And I shall 
try to restrict my comments unless someone wishes to engage with me - although 
BTW I am ever more confident of my broad philosophical/psychological position 
- the mind really doesn't work that way.

I may possibly make one last related post in the not too distant future about 
the nature of problems, and which are/aren't suitable for programs - but just 
ignore it.


  Mike Tinter,

  If you really do not think that digital computers can be creative by 
definition, I do not understand why you would like to join a mailing list with 
AGI researchers? Computers operate by using software, thus, they need to be 
programmed. It just seems to me that you do not understand what the word 
program means. Even if you use use a computer that do not need to be loaded 
with a program, guess what, such a computer could be considered to have an 
initial program.   

  The very determinism of the universe implicates that everything runs 
according to a program, including your ramblings here about creativity. I have 
to ask you a question, do you think the universe and everything in it runs 
according to deterministic laws of nature? Do you accept that you are a part of 
this deterministic reality? Well, in that case Ive got news for you, you are a 
program also! As evidence I would present your DNA, a program encoded and 
stored in molecular structures. 

  Have you ever heard of computational equivalence? Do you know what it means?

  Also, I feel annoyed that you compare the Novamente architecture with 
something that just takes instructions, like do this, do that, then do this 
etc. It seems you need to spend greater effort in studying this architecture, 
for example by reading The Hidden Pattern. 

  I feel you are in great need of widening your mind to understand chaotic or 
fractal processes. Take a forest for example, even in all its complexity and 
diversity, it is still governed by very simple and basic laws namely the laws 
of nature. By mimicking some of these laws at an appropriate level, such as 
shape level, programmers can create forests that to a very large extent looks 
like real forests:   http://www.speedtree.com/. A generator such as speedtree 
could generate entire forests of miles and miles of trees, with no single two 
trees looking the same. Even though the lines of code producing the trees are 
pretty simple, the outcome in creativity and originality is vast. 

  The same thing applies to a human mind. Even though the output of a human 
mind is amazingly diverse and creative, its program is still goverened by the 
basic laws of nature, and the DNA program. What AGI designers tries to do is to 
is to mimic this process. 

  The concepts of program and determinism are pretty well established within 
the scientific community, please do not try to redefine them like you do. It 
just creates a lot of confusion. I think what you really want to use is the 
concept of adaptability, or maybe you could say you want an AGI system that is 
programmed in an indirect way (meaning that the program instructions are very 
far away from what the system actually does). But please do not say things like 
we should write AGI systems that are not programmed. It hurts my ears/eyes. 

  /Robert Wensman


   
  2008/1/7, Mike Tintner [EMAIL PROTECTED]: 
Well we (Penrose  co) are all headed in roughly the same direction, but
we're taking different routes. 

If you really want the discussion to continue, I think you have to put out
something of your own approach here to spontaneous creativity (your terms)
as requested.

Yes, I still see the mind as following instructions a la briefing, but 
only odd 

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Robert Wensman
Mike,

Let me clarify further. What other computer scientists and I mean by
"program" is probably something like *a formal and unambiguous description
of a deterministic system that operates over time*. Thus, if you can
describe something in nature in enough detail, your description is a
program. As another example, if you write a book that describes the human
mind formally in enough detail, that book in itself would become a program.

So when you say that we cannot write a program that is creative on the same
level as humans, you basically state that it would be impossible to describe
the human mind in a detailed enough way. This is certainly bogus, as it
could be done theoretically by simply scanning and recording the state and
connections of every neuron in a human brain. Another way to put it is that
your suggestion implies that we could never *understand* the human mind on a
fine enough level, which is pretty upsetting and certainly not
revolutionary.

What computers have or have not done up until this point is completely
beside the point if we are discussing the definition of "program".

Yes, powerful enough AGIs would be revolutionary, but they would still be
programs. What you are suggesting is equivalent to asking a painter to paint
a revolutionary painting without using paint. What should he do, stare
intensely at the canvas until what happens? He could try to cheat, using
dirt or mud to paint. But most people would then just say he invented
another kind of paint, namely dirt paint or mud paint. It is just
impossible to paint a painting without paint (unless your painting is
intended to look the same as the empty canvas). Painting paintings without
paint is not a radical idea; it is just plain futile or incorrect, depending
on perspective.

Why this topic is frustrating is because you are roughly right in one
respect. Yes, computers and AI systems up until this point have been
programmed in a much too direct way, where the connection between the
programmer's lines of code and the system's actions is too close. E.g.
there is a line of code saying *if(handIsHot()) moveHand()*, and the
robot system moves its hand when it becomes hot. But this is what we here
call narrow AI, and what we all here try to distance ourselves from. From
what I can tell, Novamente, for example, is miles and miles and miles away
from this kind of programming. In contrast, a system like Novamente studies
input, builds and relates concepts to abstract goals, and later forms
actions using different kinds of subtle methods. But a system like that is
*complex*, and you cannot expect Ben Goertzel to blurt out all this complexity
in an email on this mailing list. You have to study the design in detail if
you are interested in it. But the bottom line is, it is still programming in
any way you choose to look at it (unless you want to use the word
"programming" in some way that no other person on earth is using it, but in
that case, be prepared to feel alone).
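
The direct-versus-indirect distinction can be sketched in code (hypothetical 
names throughout; a toy Q-learning rule stands in for the "subtle methods", 
and none of this is Novamente's actual design):

```python
import random

# Direct ("narrow AI") style: the programmer hard-codes the behavior.
def narrow_policy(hand_is_hot):
    return "move_hand" if hand_is_hot else "stay"

# Indirect style: the programmer writes only a learning rule, and the
# behavior emerges from experience (a tabular Q-learning sketch).
def learn_policy(episodes, alpha=0.5, rng=random.Random(0)):
    q = {(s, a): 0.0 for s in (True, False) for a in ("move_hand", "stay")}
    for _ in range(episodes):
        s = rng.choice([True, False])            # hand hot or not
        a = rng.choice(["move_hand", "stay"])    # explore both actions
        # Assumed reward: retracting from heat is good, needless motion bad.
        r = 1.0 if (s and a == "move_hand") or (not s and a == "stay") else -1.0
        q[(s, a)] += alpha * (r - q[(s, a)])
    return lambda hot: max(("move_hand", "stay"), key=lambda a: q[(hot, a)])

policy = learn_policy(1000)
```

Both end up moving the hand away from heat, but in the second case the lines 
of code say nothing about hands or heat directly - which is the sense in which 
the program instructions sit "far away from what the system actually does".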

You should focus on HOW we could make programs creative, rather than losing
yourself in a strange quest to redefine well-established terminology. It is
completely beside the point.

/Robert Wensman


2008/1/7, Mike Tintner [EMAIL PROTECTED]:

  Robert,

 Look, the basic reality is that computers have NOT yet been creative in
 any significant way, and have NOT yet achieved AGI - general intelligence, -
 or indeed any significant rulebreaking adaptivity; (If you disagree, please
 provide examples. Ben keeps claiming/implying he's solved them or made
 significant advances, but when pressed never provides any indication of
 how).

 These are completely unsolved problems. Major creative problems.

 And I would suggest you have to be prepared for the solutions to be
 revolutionary and groundshaking.

 If you are truly serious about solving these problems, I suggest, you
 should prepared to be hurt - you should be ready to consider truly radical
 ideas - for the ground on which you stand to be questioned - and be
 seriously shaken up. You should WELCOME any and all of your assumptions
 being questioned. Even if, let's say, what I or someone else suggests is in
 the end nutty, drastic ideas are good for you to contemplate at least for a
 while.

 Having said all this, I accept that what I have been saying offends this
 community -  I wasn't trying originally to push it, I got dragged into some
 of that last discussion.by Ben. And I also accept that most of you are not
 interested in going for the revolutionary,  from whatever source. And  I
 shall try to restrict my comments unless someone wishes to engage with me -
 although BTW I am ever more confident of my broad philosophical/
 psychological position - the mind really doesn't work that way.

 I may possibly make one last related post in the not too distant future
 about the nature of problems, and which are/aren't suitable for programs -
 but just ignore it.



  Mike Tinter,

 If you really do not think that digital 

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Robert Wensman
Mike,

To put my question another way: would you like to understand
intelligence? Understand it to such a degree that you can give a detailed
and unambiguous description of how an intelligent system operates over
time? Well, if you do want that, then you want - using standard
terminology - to create an intelligent program.

Why we get upset is because we feel you are basically saying "I don't want to
understand intelligence" or, alternatively, "intelligence can never be clearly
understood". You have to understand how computer scientists use the word
"program" to understand how we perceive your statements. From our
perspective, your position is not revolutionary, just depressing.

/Robert Wensman




2008/1/7, Mike Tintner [EMAIL PROTECTED]:

  Robert,

 Look, the basic reality is that computers have NOT yet been creative in
 any significant way, and have NOT yet achieved AGI - general intelligence, -
 or indeed any significant rulebreaking adaptivity; (If you disagree, please
 provide examples. Ben keeps claiming/implying he's solved them or made
 significant advances, but when pressed never provides any indication of
 how).

 These are completely unsolved problems. Major creative problems.

 And I would suggest you have to be prepared for the solutions to be
 revolutionary and groundshaking.

 If you are truly serious about solving these problems, I suggest, you
 should prepared to be hurt - you should be ready to consider truly radical
 ideas - for the ground on which you stand to be questioned - and be
 seriously shaken up. You should WELCOME any and all of your assumptions
 being questioned. Even if, let's say, what I or someone else suggests is in
 the end nutty, drastic ideas are good for you to contemplate at least for a
 while.

 Having said all this, I accept that what I have been saying offends this
 community -  I wasn't trying originally to push it, I got dragged into some
 of that last discussion.by Ben. And I also accept that most of you are not
 interested in going for the revolutionary,  from whatever source. And  I
 shall try to restrict my comments unless someone wishes to engage with me -
 although BTW I am ever more confident of my broad philosophical/
 psychological position - the mind really doesn't work that way.

 I may possibly make one last related post in the not too distant future
 about the nature of problems, and which are/aren't suitable for programs -
 but just ignore it.



  Mike Tinter,

 If you really do not think that digital computers can be creative by
 definition, I do not understand why you would like to join a mailing list
 with AGI researchers? Computers operate by using software, thus, they need
 to be programmed. It just seems to me that you do not understand what the
 word program means. Even if you use use a computer that do not need to be
 loaded with a program, guess what, such a computer could be considered to
 have an initial program.

 The very determinism of the universe implicates that everything runs
 according to a program, including your ramblings here about creativity. I
 have to ask you a question, do you think the universe and everything in it
 runs according to deterministic laws of nature? Do you accept that you are a
 part of this deterministic reality? Well, in that case Ive got news for you,
 you are a program also! As evidence I would present your DNA, a program
 encoded and stored in molecular structures.

 Have you ever heard of computational equivalence? Do you know what it
 means?

 Also, I feel annoyed that you compare the Novamente architecture with
 something that just takes instructions, like do this, do that, then do
 this etc. It seems you need to spend greater effort in studying this
 architecture, for example by reading The Hidden Pattern.

 I feel you are in great need of widening your mind to understand chaotic
 or fractal processes. Take a forest, for example: even in all its complexity
 and diversity, it is still governed by very simple and basic laws, namely the
 laws of nature. By mimicking some of these laws at an appropriate level,
 such as the shape level, programmers can create forests that to a very large
 extent look like real forests: http://www.speedtree.com/. A generator
 such as SpeedTree could generate entire forests of miles and miles of trees,
 with no two trees looking the same. Even though the lines of code
 producing the trees are pretty simple, the outcome in creativity and
 originality is vast.
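 The "tiny program, vast variety" point can be sketched in a few lines of
 seeded procedural generation (a toy stand-in for what SpeedTree does; all
 the parameter ranges and field names below are invented for illustration):

```python
import random

def generate_tree(seed):
    """Derive a 'tree' description from a handful of simple rules.

    Each tree is fully determined by its seed, yet different seeds
    yield different trees -- diversity from a very small program.
    """
    rng = random.Random(seed)
    trunk_height = rng.uniform(3.0, 10.0)            # metres, arbitrary range
    branches = rng.randint(4, 12)
    angles = [rng.uniform(20.0, 70.0) for _ in range(branches)]
    return {"height": round(trunk_height, 2),
            "branches": branches,
            "branch_angles": [round(a, 1) for a in angles]}

# A "forest" of a thousand distinct trees falls out of ~10 lines of code.
forest = [generate_tree(seed) for seed in range(1000)]
```

 The "creativity" lives entirely in the simple rules plus the seed, which is
 exactly the point being made about real forests and the laws of nature.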

 The same thing applies to a human mind. Even though the output of a human
 mind is amazingly diverse and creative, its program is still governed by
 the basic laws of nature, and the DNA program. What AGI designers try to
 do is mimic this process.

 The concepts of program and determinism are pretty well established within
 the scientific community; please do not try to redefine them as you do. It
 just creates a lot of confusion. I think what you really want to use is the
 concept of adaptability, or maybe 

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread David Butler
Would two AGI's with the same initial learning program, same hardware  
in a controlled environment (same access to a specific learning base- 
something like an encyclopedia) learn at different rates and excel in  
different tasks?




Mike,

To put my question another way: would you like to understand  
intelligence? Understand it to such a degree that you can give a  
detailed and non-ambiguous description of how an intelligent system  
operates over time? Well, if you do want that, then you want - using  
standard terminology - to create an intelligent program.


Why we get upset is because we feel you are basically saying "I don't  
want to understand intelligence" or, alternatively, "intelligence can  
never be clearly understood". You have to understand how computer  
scientists use the word "program" to understand how we perceive  
your statements. From our perspective, your position is not  
revolutionary, just depressing.


/Robert Wensman




2008/1/7, Mike Tintner [EMAIL PROTECTED]:
Robert,

Look, the basic reality is that computers have NOT yet been  
creative in any significant way, and have NOT yet achieved AGI -  
general intelligence - or indeed any significant rule-breaking  
adaptivity. (If you disagree, please provide examples. Ben keeps  
claiming/implying he's solved them or made significant advances,  
but when pressed never provides any indication of how.)


These are completely unsolved problems. Major creative problems.

And I would suggest you have to be prepared for the solutions to be  
revolutionary and groundshaking.


If you are truly serious about solving these problems, I suggest,  
you should be prepared to be hurt - you should be ready to consider  
truly radical ideas - for the ground on which you stand to be  
questioned - and be seriously shaken up. You should WELCOME any and  
all of your assumptions being questioned. Even if, let's say, what  
I or someone else suggests is in the end nutty, drastic ideas are  
good for you to contemplate at least for a while.


Having said all this, I accept that what I have been saying offends  
this community - I wasn't trying originally to push it, I got  
dragged into some of that last discussion by Ben. And I also accept  
that most of you are not interested in going for the  
revolutionary, from whatever source. And I shall try to restrict  
my comments unless someone wishes to engage with me - although BTW  
I am ever more confident of my broad philosophical/psychological  
position - the mind really doesn't work that way.


I may possibly make one last related post in the not too distant  
future about the nature of problems, and which are/aren't suitable  
for programs - but just ignore it.



Mike Tintner,

If you really do not think that digital computers can be creative  
by definition, I do not understand why you would like to join a  
mailing list with AGI researchers. Computers operate by using  
software; thus, they need to be programmed. It just seems to me  
that you do not understand what the word "program" means. Even if  
you use a computer that does not need to be loaded with a  
program, guess what: such a computer could be considered to have an  
initial program.


The very determinism of the universe implies that everything  
runs according to a program, including your ramblings here about  
creativity. I have to ask you a question: do you think the universe  
and everything in it runs according to deterministic laws of  
nature? Do you accept that you are a part of this deterministic  
reality? Well, in that case I've got news for you: you are a program  
also! As evidence I would present your DNA, a program encoded and  
stored in molecular structures.


Have you ever heard of computational equivalence? Do you know what  
it means?


Also, I feel annoyed that you compare the Novamente architecture  
with something that just takes instructions, like do this, do  
that, then do this etc. It seems you need to spend greater effort  
in studying this architecture, for example by reading The Hidden  
Pattern.


I feel you are in great need of widening your mind to understand  
chaotic or fractal processes. Take a forest, for example: even in  
all its complexity and diversity, it is still governed by very  
simple and basic laws, namely the laws of nature. By mimicking some  
of these laws at an appropriate level, such as the shape level,  
programmers can create forests that to a very large extent look  
like real forests: http://www.speedtree.com/. A generator such as  
SpeedTree could generate entire forests of miles and miles of  
trees, with no two trees looking the same. Even though the  
lines of code producing the trees are pretty simple, the outcome in  
creativity and originality is vast.


The same thing applies to a human mind. Even though the output of a  
human mind is amazingly diverse and creative, its program is still  
governed by the basic laws of nature, and the DNA program. What  
AGI designers try 

RE: Can Computers Be Creative? [WAS Re: [agi] A Simple Mathematical Test of Cog Sci.]

2008-01-07 Thread Ed Porter
simply executes algorithms, as a billiard table where billiard balls act as
message carriers and their interactions act as logical decisions. He argues
against the viewpoint that the rational processes of the human mind are
completely algorithmic http://en.wikipedia.org/wiki/Algorithm  and can
thus be duplicated by a sufficiently complex computer -- this is in contrast
to views, e.g., Biological Naturalism
http://en.wikipedia.org/wiki/Biological_Naturalism , that human behavior
but not consciousness might be simulated. This is based on claims that human
consciousness transcends formal logic
http://en.wikipedia.org/wiki/Formal_logic  systems because things such as
the insolubility of the halting problem
http://en.wikipedia.org/wiki/Halting_problem  and Gödel's incompleteness
theorem http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorem
restrict an algorithmically based logic from traits such as mathematical
insight. These claims were originally made by the philosopher John Lucas
http://en.wikipedia.org/wiki/John_Lucas_%28philosopher%29  of Merton
College http://en.wikipedia.org/wiki/Merton_College%2C_Oxford , Oxford
http://en.wikipedia.org/wiki/University_of_Oxford .
In 1994 http://en.wikipedia.org/wiki/1994 , Penrose followed up The
Emperor's New Mind with Shadows of the Mind
http://en.wikipedia.org/wiki/Shadows_of_the_Mind  and in 1997
http://en.wikipedia.org/wiki/1997  with The Large, the Small and the Human
Mind
http://en.wikipedia.org/w/index.php?title=The_Large%2C_the_Small_and_the_Human_Mind&action=edit , further updating and expanding his theories.
Penrose's views on the human thought http://en.wikipedia.org/wiki/Thought
process are not widely accepted in scientific circles. According to Marvin
Minsky http://en.wikipedia.org/wiki/Marvin_Minsky , because people can
construe false ideas to be factual, the process of thinking is not limited
to formal logic. Furthermore, he says that AI
http://en.wikipedia.org/wiki/Artificial_intelligence  programs can also
conclude that false statements are true, so error is not unique to humans.
Penrose and Stuart Hameroff http://en.wikipedia.org/wiki/Stuart_Hameroff
have constructed a theory in which human consciousness
http://en.wikipedia.org/wiki/Consciousness  is the result of quantum
gravity effects in microtubules http://en.wikipedia.org/wiki/Microtubule ,
which they dubbed Orch-OR http://en.wikipedia.org/wiki/Orch-OR
(orchestrated object reduction). But Max Tegmark
http://en.wikipedia.org/wiki/Max_Tegmark , in a paper in Physical Review
E, calculated that the time scale of neuron firing and excitations in
microtubules is slower than the decoherence
http://en.wikipedia.org/wiki/Quantum_decoherence  time by a factor of at
least 10,000,000,000. The reception of the paper is summed up by this
statement in his support: Physicists outside the fray, such as IBM's John
Smolin http://en.wikipedia.org/w/index.php?title=John_Smolinaction=edit ,
say the calculations confirm what they had suspected all along. 'We're not
working with a brain that's near absolute zero. It's reasonably unlikely
that the brain evolved quantum behavior', he says. The Tegmark paper has
been widely cited by critics of the Penrose-Hameroff proposal. It has been
claimed by Hameroff to be based on a number of incorrect assumptions (see
linked paper below from Hameroff, Hagan
http://en.wikipedia.org/w/index.php?title=Scott_Haganaction=edit  and
Tuszyński
http://en.wikipedia.org/w/index.php?title=Jack_Tuszy%C5%84skiaction=edit
), but Tegmark in turn has argued that the critique is invalid (see
rejoinder link below). In particular, Hameroff points out the peculiarity
that Tegmark's formula for the decoherence time includes a factor of √T (the
square root of temperature) in the numerator, meaning that higher temperatures
would lead to longer decoherence times. Tegmark's rejoinder keeps the factor
of √T for the decoherence time.




-Original Message-
From: Richard Loosemore [mailto:[EMAIL PROTECTED] 
Sent: Monday, January 07, 2008 10:09 AM
To: agi@v2.listbox.com
Subject: Can Computers Be Creative? [WAS Re: [agi] A Simple Mathematical
Test of Cog Sci.]

Mike,

This discussion is just another repetition of a common fallacy, namely 
that computers cannot be creative (or flexible, adaptive, original etc.) 
because they are programmed.

The fallacy can be illustrated by considering the following set of 
situations.

1) If I tell a child how to solve a calculus problem by giving them 
explicit steps to manipulate the symbols, they are not really doing the 
problem, they are just blindly doing what I am telling them to.  The 
child is just following a program written by me.

2) If I tell a child some general rules for solving calculus problems, 
but let the child figure out which rules map onto the particular problem 
at hand, then the child is now doing some work, but still they don't 
understand calculus

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Benjamin Goertzel
On Jan 7, 2008 12:08 PM, David Butler [EMAIL PROTECTED] wrote:
 Would two AGI's with the same initial learning program, same hardware in a
 controlled environment (same access to a specific learning base-something
 like an encyclopedia) learn at different rates and excel in different tasks?

Yes ...

Even in the extreme case of identical external stimuli, two AGI systems could
evolve slightly differently due to consequences of rounding error.

However, if the AGI systems were built carefully enough (so as not to be
susceptible to rounding error or other related phenomena), it could be made
so that with totally identical environments they were totally identical in
behavior, so long as no hardware failures occurred.

(I note though that minor hardware failures, like small defects in RAM or
disk, could always intervene and play the same role as roundoff error,
potentially setting the two AGIs with identical code and identical
environmental stimuli on different courses.)

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=82647670-987d16


Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread David Butler
How would an AGI choose which things to learn first if given enough  
data so that it would have to make a choice? If two AGI's (again-same  
hardware, learning programs and controlled environment) were given  
the same data would they make different choices?


On Jan 7, 2008, at 11:15 AM, Benjamin Goertzel wrote:


On Jan 7, 2008 12:08 PM, David Butler [EMAIL PROTECTED] wrote:
Would two AGI's with the same initial learning program, same  
hardware in a
controlled environment (same access to a specific learning base- 
something
like an encyclopedia) learn at different rates and excel in  
different tasks?


Yes ...

Even in the extreme case of identical external stimuli, two AGI  
systems could

evolve slightly differently due to consequences of rounding error.

However, if the AGI systems were built carefully enough (so as not to
be susceptible
to rounding error or other related phenomena) it could be made so that
with totally
identical environments they were totally identical in behavior, so
long as no hardware
failures occurred.

(I note though that minor hardware failures like small defects in RAM
or disk could
always intervene and play the same role as roundoff error, potentially
setting the
two AGIs with identical code and identical environmental stimuli on  
different

courses.)

-- Ben








Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread Robert Wensman
2008/1/7, David Butler [EMAIL PROTECTED]:

 How would an AGI choose which things to learn first if given enough
 data so that it would have to make a choice?


This is a simple question that demands a complex answer. It is like asking
"How can a commercial airliner fly across the Atlantic?". Well, in that case
you would have to study aerodynamics, mechanics, physics, thermodynamics,
computer science, electronics, metallurgy and chemistry for several years,
and in the end you would discover that one single person cannot understand
such a complex machine in its entire detail. True enough, one person could
understand all the basic principles of such a system, but explaining them
would hardly suffice as evidence that it would actually work in practice.

If you lived in medieval times, and someone asked you "how is it
possible to cross the Atlantic in a flying machine carrying several hundred
passengers?", what would you answer? Even if you had the expert knowledge,
it would be very hard to explain thoroughly, just because the machine is so
complex and you would have to explain every technology from the
beginning. Where would you start? Maybe some person with less insight would
interrupt you after a few sentences, say "well, clearly you cannot
present evidence that it will ever work", and make fun of the idea. But how
does insufficient time/space to explain a complex system prove that
something is not possible?

The same goes for AGI, for example when someone asks "how can we create a
program that is creative and can choose what to learn?". In response to this
it is possible to present a lot of different principles, such as
adaptability, genetic programming, quelling of combinatorial explosions, etc.
But will the principles work in practice when put together? Well, at this
stage we simply cannot tell. So every person just has to make a choice
whether to believe it is possible or not. But just because no AGI researcher
can answer the question "how can we create a program that is creative and
can choose what to learn?" in a few words, it doesn't mean it is not
possible when all these principles come together. We just have to wait and
see.

To those who do not believe: please just go away from this mailing list and
do not interfere with the work here. Don't demand proof that it would work,
because when we have such proof, i.e. a finished AGI system, we won't need to
defend our hypotheses anyway.


If two AGI's (again-same
 hardware, learning programs and controlled environment) were given
 the same data would they make different choices?


Is a deterministic system deterministic? I do not understand what you are
getting at. Why this question? I think Benjamin answered this question
pretty thoroughly already.

/Robert Wensman


Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-07 Thread David Butler

Robert,

Thank you for your time.  I am not a scientist, nor do I have an  
opinion or agenda on whether a successful AGI can be built.  I am  
just really curious and excited about the prospects.



On Jan 7, 2008, at 12:39 PM, Robert Wensman wrote:




2008/1/7, David Butler [EMAIL PROTECTED]:
How would an AGI choose which things to learn first if given enough
data so that it would have to make a choice?

This is a simple question that demands a complex answer. It is like  
asking How can a commercial airliner fly across the Atlantic?.  
Well, in that case you would have to study aerodynamics, mechanics,  
physics, thermodynamics, computer science, electronics, metallurgy  
and chemistry for several years, and in the end you would discover  
that one single person cannot understand such a complex machine in  
its entire detail. True enough, one person could understand all  
basic principles for such a system, but explaining them would  
hardly suffice as evidence that it would actually work in practice.


If you lived in the medieval times, and someone asked you how is  
it possible to cross the Atlantic in a flying machine carrying  
several hundred passengers?, what would you answer? Even if you  
had the expertise knowledge it would be very hard to explain  
thoroughly, just because the machine is so complex and you would  
have to explain every technology from the beginning. Where would  
you start? Maybe some person with less insight would interrupt you  
after a few sentences and say well, clearly you cannot present  
evidence that it will ever work and make fun of the idea, but how  
does insufficient time/space to explain a complex system prove that  
something is not possible?


The same goes for AGI, for example when someone asks how can we  
create a program that is creative and can choose what to learn?.  
In response to this it is possible to present a lot of different  
principles, such as adaptability, genetic programming, quelling of  
combinatorial explosions etc. But will the principles work in  
practice when put together? Well, at this stage we simply cannot  
tell. So every person just has to make a choice in whether to  
believe it is possible, or whether to believe it is not possible.  
But just because no AGI researcher can answer that question in a  
few words. how can we create a programs that is creative and can  
choose what to learn, it doesn't mean it is not possible when all  
these principles come together. We just have to wait and see.


To those who do not believe: please just go away from this mailing  
list and do not interfere with the work here. Don't demand proof  
that it would work, because when we have such proof, i.e. a  
finished AGI system, we won't need to defend our hypotheses anyway.



If two AGI's (again-same
hardware, learning programs and controlled environment) were given
the same data would they make different choices?

Is a deterministic system deterministic? I do not understand what  
you are getting at. Why this question? I think Benjamin answered  
this question pretty thoroughly already.


/Robert Wensman




Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Richard Loosemore


Mike,

You have mischaracterized cog sci.  It does not say the things you 
claim it does.


What you are actually trying to attack was a particular view of AI (not 
cog sci) in which everything is symbolic in a particular kind of way. 
 That stuff is just a straw man.


Cog sci in general encourages a wide range of different theories of 
cognition, and the one that you vaguely describe is easily part of the 
cog sci mainstream.


Richard Loosemore



Mike Tintner wrote:

I think I've found a simple test of cog. sci.

I take the basic premise of cog. sci. to be that the human mind - and 
therefore its every activity, or sequence of action - is programmed. 
Eric Baum epitomises cog. sci. Baum proposes [in What Is Thought] that 
underlying mind is a complex but compact program that corresponds to the 
underlying structure of the world.


As you know, I contend that that is absurd - that, yes, every human 
activity - having a conversation, writing a post, making love, doing a 
drawing etc. - is massively subprogrammed, containing often v. large 
numbers of routines - but as a whole, each activity is a free 
composition. Those routines, along with isolated actions, are more or 
less freely thrown together - freely associated. As a whole, our 
activities are more or less "crazy walks" - I use "crazy" to mean both 
structured and chaotic - and effectively self-contradictory.


(This has huge implications for AGI - you guys believe that an AGI must 
be programmed for its activities, I contend that free composition 
instead is essential for truly adaptive, general intelligence and is the 
basis of all animal and human activities).


So how to test cog sci? I contend that the proper, *ideal* test is to 
record humans' actual streams of thought about any problem - like, say, 
writing an essay - and even just a minute's worth will show that, 
actually, humans have major difficulties following anything like a 
joined-up, rational train of thought - or any stream that looks remotely 
like it could be programmed overall. (That includes more esoteric forms 
of programming, like random kinds.) Actually, humans follow more or less 
roving, crazy streams of thought - not chaotic by any means, but not 
perfectly joined up either - more or less free-form, a bit like free 
verse (somewhat structured but only loosely).


I still think that this is the proper, essential approach to studying 
the connectedness, programmed or otherwise, of human thought. But it is 
obviously a complicated affair - even if one could record those streams 
of thought absolutely faithfully.


And science likes simple tests/ experiments -  the more mathematical and 
measurable the better.


So here's a simple mathematical test, which everyone can try.

Do an abstract line drawing.  (for let's say 30 secs. - on this 
particular site)


Here are a few of my spontaneous masterpieces:

http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_194101926_970043768_gbrtranscript=_lscid= 
.


http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_194033348_926554557_gbrtranscript=_lscid= 
.


http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_193922629_715992016_gbrtranscript=_lscid= 
.


http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_193734879_1708083161_gbrtranscript=_lscid= 
.


The beauty of this site is that it does indeed record the actual stream 
of thought/ drawing - and not just the end result. (It would be v. 
interesting to see many other people's tests).


Now you guys are mathematicians - I contend that those drawings are 
indeed crazy, spontaneous, free compositions - they have themes and 
patterns in parts and are by no means entirely random, but they are 
certainly not patterned or programmed overall either.  Can you find an 
overall pattern or program to any of them - let alone a program that 
underlies ALL of them? Or, if you prefer, can you find a suite of programs?


(I guess a more formal way of expressing the test is that on any given 
page, it is possible to draw an infinite number of line drawings which 
are a) structured  b) chaotic  c) crazy (mixtures of both) - and, in 
principle, programmed or non-programmed. And to assert that human 
activities are programmed is, in the final analysis, to assert that 
there is no such thing as a crazy set of lines. But please comment).


What this test shows, I believe, is the bleeding obvious - humans can 
and do produce truly spontaneous, crazy, nonprogrammed, ad hoc, unplanned 
sequences of action. Well, it should be 

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Richard Loosemore

David Butler wrote:
I would say that the best way to simulate human intelligence with 
diversity and creativity is to create not one AGI but many. The only way 
to ensure diversity and natural selection like our own evolution is to 
simultaneously create multiple AGI's so that we have a better chance of 
the emergence of the best path for the evolution of friendly AGI.


I am new to this list. Is there anyone out there who has addressed this 
issue? We have many people who are very gifted with math and science who 
are in the forefront of AGI, but random creativity and seat of the 
pants intuition is a really big part of human evolution. If we create 
multiple AGI's we have a chance that all of our traits are developed (in 
the same way that we are genetically programmed) in some way to create a 
community of sorts that hopefully will be able to sustain our legacy of 
diversity and creative thought.


Dave Butler


Making one AGI is difficult, so really the friendliness problem and the 
question of how to make them creative (etc) already has to be confronted 
and solved before we create the first one.  Creating multiple AGIs would 
then be an afterthought, rather than a solution to those problems.


If, on the other hand, you are talking about the R&D process that will 
go on during the creation of the first AGI, then I completely agree with 
you:  we need to experiment with a range of mechanisms in order to find 
out how they behave (and that is very much part of my own program of 
research).  But these will not be free-ranging AGIs that are allowed to 
evolve and interact in the real world.  That would be very different 
from simply allowing everyone and their mother to build a different 
type of AGI, then letting them all interact and compete to see which is 
the best.




Richard Loosemore








On Jan 5, 2008, at 9:52 PM, Mike Tintner wrote:


I think I've found a simple test of cog. sci.

I take the basic premise of cog. sci. to be that the human mind - and 
therefore its every activity, or sequence of action - is programmed. 
Eric Baum epitomises cog. sci. Baum proposes [in What Is Thought] 
that underlying mind is a complex but compact program that corresponds 
to the underlying structure of the world.


As you know, I contend that that is absurd - that, yes, every human 
activity - having a conversation, writing a post, making love, doing a 
drawing etc - is massively subprogrammed, containing often v. large 
numbers of routines - but as a whole, each activity is a free 
composition. Those routines, along with isolated actions,  are more 
or less freely thrown together - freely associated . As a whole, our 
activities are more or less crazy walks - I use crazy to mean both 
structured and chaotic - and effectively self-contradictory.


(This has huge implications for AGI - you guys believe that an AGI 
must be programmed for its activities, I contend that free composition 
instead is essential for truly adaptive, general intelligence and is 
the basis of all animal and human activities).


So how to test cog sci? I contend that the proper, *ideal* test is to 
record humans' actual streams of thought about any problem - like, 
say, writing an essay - and even just a minute's worth will show that, 
actually, humans have major difficulties following anything like a 
joined-up, rational train of thought - or any stream that looks 
remotely like it could be programmed overall. (That includes more 
esoteric forms of programming like random kinds).  Actually, humans 
follow more or less roving, crazy streams of thought - not chaotic by 
any means, but not perfectly joined up either - more or less 
free-form, a bit like free verse - somewhat structured but only loosely).


I still think that this is the proper, essential approach to studying 
the connectedness, programmed or otherwise, of human thought. But it 
is obviously a complicated affair - even if one could record those 
streams of thought absolutely faithfully.


And science likes simple tests/ experiments -  the more mathematical 
and measurable the better.


So here's a simple mathematical test, which everyone can try.

Do an abstract line drawing.  (for let's say 30 secs. - on this 
particular site)


Here are a few of my spontaneous masterpieces:

http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_194101926_970043768_gbrtranscript=_lscid= 
.


http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_194033348_926554557_gbrtranscript=_lscid= 
.


http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_193922629_715992016_gbrtranscript=_lscid= 
.



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
On Jan 5, 2008 10:52 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 I think I've found a simple test of cog. sci.

 I take the basic premise of cog. sci. to be that the human mind - and
 therefore its every activity, or sequence of action - is programmed.

No.  This is one perspective taken by some cognitive scientists.  It does
not characterize the field.

 (This has huge implications for AGI - you guys believe that an AGI must be
 programmed for its activities, I contend that free composition instead is
 essential for truly adaptive, general intelligence and is the basis of all
 animal and human activities).

Spontaneous, creative self-organized activity is a key aspect of Novamente
and many other AGI designs.

 So how to test cog sci? I contend that the proper, *ideal* test is to record
 humans' actual streams of thought about any problem - like, say, writing an
 essay - and even just a minute's worth will show that, actually, humans have
 major difficulties following anything like a joined-up, rational train of
 thought - or any stream that looks remotely like it could be programmed
 overall.

A)
While introspection is certainly a valid and important tool for inspiring
work in AI and cog sci, it is not a test of anything.  There is much empirical
evidence showing that humans' introspections of their own cognitive
processes are highly partial and inaccurate.

For instance, if we were following the arithmetic algorithms that we think
we are, there is no way the timing of our responses when solving arithmetic
problems would come out the way they actually do.  (I don't have the references
for this work at hand, but I saw it years ago in the Journal of Math Psych I
believe.)

B)
Whether something looks like it's following a simple set of rules
doesn't mean much.  Chaotic underlying dynamics can give rise to
high-level orderly behavior; and simple systems of rules can give rise
to apparently disorderly, incomprehensibly complex behaviors.  Cf
the whole field of complex-systems dynamics.
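Point B can be made concrete with a standard textbook example (my own illustration, not from Ben's post): the logistic map, a single deterministic update rule whose iterates at r = 4 look thoroughly disorderly and are sensitive to initial conditions.

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n).
# One line of "rules"; at r = 4 the orbit looks random, and two
# nearby starting points diverge (sensitive dependence).
def logistic_orbit(x0, r=4.0, n=20):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2000)
b = logistic_orbit(0.2001)  # almost the same start
gap = max(abs(u - v) for u, v in zip(a, b))
print(gap > 0.01)  # the tiny initial difference is amplified many-fold
```

So "looks like it follows a simple rule" and "looks disorderly" really are independent properties, as Ben says.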


-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244id_secret=82365583-966081


Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
I don't really understand what you mean by programmed ... nor by creative

You say that, according to your definitions, a GA is programmed and
ergo cannot be creative...

How about, for instance, a computer simulation of a human brain?  That
would be operated via program code, hence it would be programmed --
so would you consider it intrinsically noncreative?

Could you please define your terms more clearly?

thx
ben

On Jan 6, 2008 1:21 PM, Mike Tintner [EMAIL PROTECTED] wrote:

 MT: This has huge implications for AGI - you guys believe that an AGI must
 be
  programmed for its activities, I contend that free composition instead is
  essential for truly adaptive, general intelligence and is the basis of
  all
  animal and human activities).
 
 Ben:  Spontaneous, creative self-organized activity is a key aspect of
 Novamente
  and many other AGI designs.

 Ben,

 You are saying that your pet presumably works at times in a non-programmed
 way - spontaneously and creatively? Can you explain briefly the
 computational principle(s) behind this, and give an example of where it's
 applied, (exploration of an environment, say)? This strikes me as an
 extremely significant, even revolutionary claim to make, and it would be a
 pity if, as with your analogy claim, you simply throw it out again without
 any explanation.

 And I'm wondering whether you are perhaps confused about this (or I have
 confused you) - in the way you definitely are below. Genetic algorithms and
 suchlike, for example, classify as programmed and are neither truly
 spontaneous nor creative.

 Note that Baum asked me a while back what  test I could provide that humans
 engage in free thinking.  He, quite rightly, thought it a scientifically
 significant claim to make, that demanded scientific substantiation.

 My test is not a test, I stress, of free will. But have you changed
 your mind about this? It's hard, though not a complete contradiction, to
 believe in a mind being spontaneously creative and yet not having freedom of
 decision.

 MT:  I contend that the proper, *ideal* test is to record
  humans' actual streams of thought about any problem
 
 Ben:  While introspection is certainly a valid and important tool for
 inspiring
  work in AI and cog sci, it is not a test of anything.  

 Ben,

 This is a really major - and very widespread - confusion.  A recording of
 streams of thought is what it says - a direct or recreated recording of a
 person's actual thoughts. So, if I remember right, some form of that NASA
 recording of subvocalisation when someone is immediately thinking about a
 problem, would classify as a record of their thoughts.

 Introspection is very different - it is a report of thoughts, remembered at
 a later, often much later time.

 A record(ing) might be me saying "I want to kill you, you bastard" in an
 internal daydream. Introspection might be me reporting later: "I got very
 angry with him in my mind/daydream." Huge difference. An awful lot of
 scientists think, quite mistakenly, that the latter is the best science can
 possibly hope to do.

 Verbal protocols - getting people to think aloud about problems - are a sort
 of halfway house (or better).









Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread a

Benjamin Goertzel wrote:

I don't really understand what you mean by programmed ... nor by creative

You say that, according to your definitions, a GA is programmed and
ergo cannot be creative...

How about, for instance, a computer simulation of a human brain?  That
would be operated via program code, hence it would be programmed --
so would you consider it intrinsically noncreative?

Could you please define your terms more clearly?

thx
ben
  
Creativity is a byproduct of analogical reasoning, or abstraction. It 
has nothing to do with symbols or genetic algorithms! GA is too 
computationally complex to generate creative solutions.




Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Mike Dougherty
On Jan 6, 2008 3:07 PM, a [EMAIL PROTECTED] wrote:
 Creativity is a byproduct of analogical reasoning, or abstraction. It
 has nothing to do with symbols or genetic algorithms! GA is too
 computationally complex to generate creative solutions.

Care to explain what sounds so absolute as to certainly be wrong?

Is the brain too computationally complex to generate creative
solutions?  (scare quotes persisted)

Or are you suggesting that GA is more computationally complex than your brain?



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Mike Tintner

Ben,

Sounds like you may have missed the whole point of the test - though I mean 
no negative comment by that - it's all a question of communication.


A *program* is a prior series or set of instructions that shapes and 
determines an agent's sequence of actions. A precise itinerary for a 
journey. Even if the programmer doesn't have a full but only a very partial 
vision of that eventual sequence or itinerary.  (The agent of course can be 
either the human mind or a computer).


If the mind works by *free composition,* then it works v. differently - 
though this is an idea that has still to be fleshed out, and could take many 
forms. The first crucial difference is that there is NO PRIOR SERIES OR SET 
OF INSTRUCTIONS - saves a helluva lot on both space and programming work. 
Rather the mind works principally by free association - making up that 
sequence of actions/ journey AS IT GOES ALONG. So my very crude idea of this 
is you start, say, with a feeling of hunger, which = go get food.  And 
immediately you go to the fridge. But only then, when the right food isn't 
there, do you think: in what other place could food be. And you may end up 
going various places, and/or asking various people, and/or consulting 
various sources of information, and/or doing things that you don't normally 
do like actually cooking/preparing various dishes, or looking under sofas or 
going to a restaurant - but there was no initial program in your brain 
for the actual journey you undertake, which is simply thrown together ad hoc 
and can take many different courses. Rather like an actual Freudian chain of 
free word associations, where there cannot possibly be a prior program (or 
would anyone disagree?)


(Any given journey, though, may  involve many well-established routines).

As opposed to an initial AI-style program with complete set of instructions, 
I suggest, the mind in undertaking activities,  has normally only the 
roughest of briefs outlining a goal, together with a rough, abstract and 
very, even extremely, incomplete sketch of the journey to be undertaken.


A program is essentially a detailed blueprint for a house. A free 
composition is a very rough sketchy outline to begin with, that is freely 
filled in as you go along . Evolution and development seem to work more on 
the latter principle - remember Dawkins' idea of them  as like an airplane 
built in mid-flight - though our physical development, while definitely 
having considerable degrees of freedom as to possible physiques, is vastly 
more constrained than our physical and mental activities.


None of the many activities of writing a program that you have undertaken - 
as distinct from the programs themselves - was, I suggest, remotely 
preprogrammed itself. Writing a program like any creative activity - writing 
a story/musical piece/ drawing a picture or producing a design - is a free 
composition. A crazy walk.


Genetic algorithms are indeed programs and function v. differently from 
human creativity. They proceed along predefined lines. Nothing crazy about 
them.  If they produce surprising results, it is only because the programmer 
didn't have the capacity to think through the consequences of his 
instructions.
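To make the object of dispute concrete, here is roughly what a genetic algorithm amounts to (a generic sketch of my own, not anyone's actual AGI code). The few fixed lines below are the "predefined lines" Mike refers to; the particular bitstring a run converges on is written nowhere in them:

```python
import random

def evolve(fitness, length=12, pop_size=30, generations=60, seed=0):
    """Minimal GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament of two
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        for _ in range(pop_size):
            p, q = pick(), pick()
            cut = rng.randrange(1, length)      # one-point crossover
            child = p[:cut] + q[cut:]
            if rng.random() < 0.1:              # occasional bit-flip mutation
                i = rng.randrange(length)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# "Max ones" fitness: the GA homes in on the all-ones string,
# though that string appears nowhere in the program text.
best = evolve(fitness=sum)
print(sum(best))  # near the maximum of 12
```

Whether one calls the output "creative" is exactly the point in dispute; the sketch only shows what "proceeding along predefined lines" looks like in practice.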


Now note here - heavily underlined several times - I have only gone into 
free composition, in order to give you something more or less vivid to 
contrast with the idea of a program. But the point of my test is NOT to 
elucidate the idea of free composition- I don't have to do that - it is to 
test & hopefully destroy the idea of the mind being driven by neat prior 
sets of instructions - even pace Richard or genetic algorithms,  v. complex 
sets of instructions.


Does that make the program/free composition distinction -  the point of the 
test - clearer, regardless of how you may agree/disagree?




Ben: I don't really understand what you mean by programmed ... nor by 
creative


You say that, according to your definitions, a GA is programmed and
ergo cannot be creative...

How about, for instance, a computer simulation of a human brain?  That
would be operated via program code, hence it would be programmed --
so would you consider it intrinsically noncreative?

Could you please define your terms more clearly?

thx
ben

On Jan 6, 2008 1:21 PM, Mike Tintner [EMAIL PROTECTED] wrote:


MT: This has huge implications for AGI - you guys believe that an AGI 
must

be
 programmed for its activities, I contend that free composition instead 
 is

 essential for truly adaptive, general intelligence and is the basis of
 all
 animal and human activities).

Ben:  Spontaneous, creative self-organized activity is a key aspect of
Novamente
 and many other AGI designs.

Ben,

You are saying that your pet presumably works at times in a 
non-programmed

way - spontaneously and creatively? Can you explain briefly the
computational principle(s) behind this, and give an example of where it's
applied, (exploration of an environment, say)? This strikes 

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
On Jan 6, 2008 4:00 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Ben,

 Sounds like you may have missed the whole point of the test - though I mean
 no negative comment by that - it's all a question of communication.

 A *program* is a prior series or set of instructions that shapes and
 determines an agent's sequence of actions. A precise itinerary for a
 journey. Even if the programmer doesn't have a full but only a very partial
 vision of that eventual sequence or itinerary.  (The agent of course can be
 either the human mind or a computer).

OK, then any AI that is implemented in computer software is by your
definition a programmed AI.  Whether it is based on GA's, neural nets,
logical theorem-proving or whatever.

So, is your argument that digital computer programs can never be creative,
since you have asserted that programmed AI's can never be creative?

-- Ben G



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread a

Benjamin Goertzel wrote:

So, is your argument that digital computer programs can never be creative,
since you have asserted that programmed AI's can never be creative

Hard-wired AI (such as KB, NLP, symbol systems) cannot be creative.



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
Mike,

 The short answer is that I don't believe that computer *programs* can be
 creative in the hard sense, because they presuppose a line of enquiry, a
 predetermined approach to a problem -
...
 But I see no reason why computers couldn't be briefed rather than
 programmed, and freely associate across domains rather than working along
 predetermined lines.

But the computer that is being briefed is still running some software program,
hence is still programmed -- and its responses are still determined by
that program (in conjunction w/ the environment, which however it perceives
only thru a digital bit stream)

 I don't however believe that purely *digital* computers are capable of all
 the literally imaginative powers (as already discussed elsewhere) that are
 also necessary for true creativity and general intelligence.

I don't know how you define a literally imaginative power.

So, it seems like you are saying

-- digital computer software can never truly be creative or possess general
intelligence

Is this your assertion?

It is not an original one of course: Penrose, Dreyfus and many others have
argued the same point.   The latter paragraph of yours I've quoted could
be straight out of The Emperor's New Mind by Penrose.

Penrose then notes that quantum computers can compute only the same
stuff that digital computers can; so he posits that general intelligence is
possible only for quantum gravity computers, which is what he posits
the brain is.

I think Penrose is most probably wrong, but at least I understand what
he is saying...

I'm just trying to understand what your perspective actually is...

thx
Ben



Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Mike Tintner
Well we (Penrose & co) are all headed in roughly the same direction, but 
we're taking different routes.


If you really want the discussion to continue, I think you have to put out 
something of your own approach here to spontaneous creativity (your terms) 
as requested.


Yes, I still see the mind as following instructions a la briefing, but 
only odd ones, not a whole rigid set of them, a la programs. And the 
instructions are open-ended and non-deterministically open to 
interpretation, just as my briefing/instruction to you - Ben go and get me 
something nice for supper - is. Oh, and the instructions that drive us, 
i.e. emotions, are always conflicting, e.g. [Ben:] I might like to... but do 
I really want to get that bastard anything for supper? Or have the time to, 
when I am on the very verge of creating my stupendous AGI?


Listen, I can go on and on - the big initial deal is the claim that the mind 
isn't -  no successful AGI can be - driven by a program, or thoroughgoing 
SERIES/SET of instructions - if it is to solve even minimal general 
adaptive, let alone hard creative problems. No structured approach will work 
for an ill-structured problem.


You must give some indication of how you think a program CAN be generally 
adaptive/ creative - or, I would argue, squares (programs are so square, 
man) can be circled :).



Mike,


The short answer is that I don't believe that computer *programs* can be
creative in the hard sense, because they presuppose a line of enquiry, a
predetermined approach to a problem -

...

But I see no reason why computers couldn't be briefed rather than
programmed, and freely associate across domains rather than working along
predetermined lines.


But the computer that is being briefed is still running some software 
program,

hence is still programmed -- and its responses are still determined by
that program (in conjunction w/ the environment, which however it 
perceives

only thru a digital bit stream)

I don't however believe that purely *digital* computers are capable of 
all
the literally imaginative powers (as already discussed elsewhere) that 
are

also necessary for true creativity and general intelligence.


I don't know how you define a literally imaginative power.

So, it seems like you are saying

-- digital computer software can never truly be creative or possess 
general

intelligence

Is this your assertion?

It is not an original one of course: Penrose, Dreyfus and many others have
argued the same point.   The latter paragraph of yours I've quoted could
be straight out of The Emperor's New Mind by Penrose.

Penrose then notes that quantum computers can compute only the same
stuff that digital computers can; so he posits that general intelligence 
is

possible only for quantum gravity computers, which is what he posits
the brain is.

I think Penrose is most probably wrong, but at least I understand what
he is saying...

I'm just trying to understand what your perspective actually is...











Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-06 Thread Benjamin Goertzel
If you believe in principle that no digital computer program can ever
be creative, then there's no point in me or anyone else rambling on at
length about their own particular approach to digital-computer-program
creativity...

One question I have is whether you would be convinced that digital
programs ARE capable of true creativity, by any possible actual achievements
of digital computer programs...

If a digital computer program made a great painting, wrote a great novel,
proved a great theorem, patented dozens of innovative inventions, etc. --
would you be willing to admit it's creative, or would you argue that due to
its digital nature, it must have achieved these things in a noncreative
way?

Ben

On Jan 6, 2008 6:58 PM, Mike Tintner [EMAIL PROTECTED] wrote:
 Well we (Penrose & co) are all headed in roughly the same direction, but
 we're taking different routes.

 If you really want the discussion to continue, I think you have to put out
 something of your own approach here to spontaneous creativity (your terms)
 as requested.

 Yes, I still see the mind as following instructions a la briefing, but
 only odd ones, not a whole rigid set of them, a la programs. And the
 instructions are open-ended and non-deterministically open to
 interpretation, just as my briefing/instruction to you - Ben go and get me
 something nice for supper - is. Oh, and the instructions that drive us,
 i.e. emotions, are always conflicting, e.g. [Ben:] I might like to... but do
 I really want to get that bastard anything for supper? Or have the time to,
 when I am on the very verge of creating my stupendous AGI?

 Listen, I can go on and on - the big initial deal is the claim that the mind
 isn't -  no successful AGI can be - driven by a program, or thoroughgoing
 SERIES/SET of instructions - if it is to solve even minimal general
 adaptive, let alone hard creative problems. No structured approach will work
 for an ill-structured problem.

 You must give some indication of how you think a program CAN be generally
 adaptive/ creative - or, I would argue, squares (programs are so square,
 man) can be circled :).


  Mike,
 
  The short answer is that I don't believe that computer *programs* can be
  creative in the hard sense, because they presuppose a line of enquiry, a
  predetermined approach to a problem -
  ...
  But I see no reason why computers couldn't be briefed rather than
  programmed, and freely associate across domains rather than working along
  predetermined lines.
 
  But the computer that is being briefed is still running some software
  program,
  hence is still programmed -- and its responses are still determined by
  that program (in conjunction w/ the environment, which however it
  perceives
  only thru a digital bit stream)
 
  I don't however believe that purely *digital* computers are capable of
  all
  the literally imaginative powers (as already discussed elsewhere) that
  are
  also necessary for true creativity and general intelligence.
 
  I don't know how you define a literally imaginative power.
 
  So, it seems like you are saying
 
  -- digital computer software can never truly be creative or possess
  general
  intelligence
 
  Is this your assertion?
 
  It is not an original one of course: Penrose, Dreyfus and many others have
  argued the same point.   The latter paragraph of yours I've quoted could
  be straight out of The Emperor's New Mind by Penrose.
 
  Penrose then notes that quantum computers can compute only the same
  stuff that digital computers can; so he posits that general intelligence
  is
  possible only for quantum gravity computers, which is what he posits
  the brain is.
 
  I think Penrose is most probably wrong, but at least I understand what
  he is saying...
 
  I'm just trying to understand what your perspective actually is...
 
 
 






[agi] A Simple Mathematical Test of Cog Sci.

2008-01-05 Thread Mike Tintner

I think I've found a simple test of cog. sci.

I take the basic premise of cog. sci. to be that the human mind - and 
therefore its every activity, or sequence of action - is programmed. Eric 
Baum epitomises cog. sci. Baum proposes [in What Is Thought] that underlying 
mind is a complex but compact program that corresponds to the underlying 
structure of the world.


As you know, I contend that that is absurd - that, yes, every human 
activity - having a conversation, writing a post, making love, doing a 
drawing etc - is massively subprogrammed, containing often v. large 
numbers of routines - but as a whole, each activity is a free composition. 
Those routines, along with isolated actions,  are more or less freely thrown 
together - freely associated . As a whole, our activities are more or less 
crazy walks - I use crazy to mean both structured and chaotic - and 
effectively self-contradictory.


(This has huge implications for AGI - you guys believe that an AGI must be 
programmed for its activities, I contend that free composition instead is 
essential for truly adaptive, general intelligence and is the basis of all 
animal and human activities).


So how to test cog sci? I contend that the proper, *ideal* test is to record 
humans' actual streams of thought about any problem - like, say, writing an 
essay - and even just a minute's worth will show that, actually, humans have 
major difficulties following anything like a joined-up, rational train of 
thought - or any stream that looks remotely like it could be programmed 
overall. (That includes more esoteric forms of programming like random 
kinds).  Actually, humans follow more or less roving, crazy streams of 
thought - not chaotic by any means, but not perfectly joined up either - 
more or less free-form, a bit like free verse - somewhat structured but only 
loosely.


I still think that this is the proper, essential approach to studying the 
connectedness, programmed or otherwise, of human thought. But it is 
obviously a complicated affair - even if one could record those streams of 
thought absolutely faithfully.


And science likes simple tests/ experiments -  the more mathematical and 
measurable the better.


So here's a simple mathematical test, which everyone can try.

Do an abstract line drawing.  (for let's say 30 secs. - on this particular 
site)


Here are a few of my spontaneous masterpieces:

http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_194101926_970043768_gbrtranscript=_lscid= 
.


http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_194033348_926554557_gbrtranscript=_lscid= 
.


http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_193922629_715992016_gbrtranscript=_lscid= 
.


http://www.imagination3.com/LaunchPage?aFileType=_nolivecachesessionID=message=room_email=[EMAIL PROTECTED]from_name=mike 
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105_193734879_1708083161_gbrtranscript=_lscid= 
.


The beauty of this site is that it does indeed record the actual stream of 
thought/ drawing - and not just the end result. (It would be v. interesting 
to see many other people's tests).


Now you guys are mathematicians - I contend that those drawings are indeed 
crazy, spontaneous, free compositions - they have themes and patterns in 
parts and are by no means entirely random, but they are certainly not 
patterned or programmed overall either.  Can you find an overall pattern or 
program to any of them - let alone a program that underlies ALL of them? 
Or, if you prefer, can you find a suite of programs?


(I guess a more formal way of expressing the test is that on any given page, 
it is possible to draw an infinite number of line drawings which are a) 
structured, b) chaotic, or c) crazy (mixtures of both) - and, in principle, 
programmed or non-programmed. And to assert that human activities are 
programmed is, in the final analysis, to assert that there is no such thing 
as a crazy set of lines. But please comment).
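For what the three categories might mean operationally, here is one way to generate a "crazy" polyline - short structured motifs interleaved with random redirections. This is my sketch of the category only, not a claim about how humans (or the drawing site) actually produce lines:

```python
import math
import random

def crazy_walk(steps=200, seed=1):
    """Mix of structure and chance: runs of a fixed motif (a small arc)
    interleaved with random jumps - neither fully patterned nor fully random."""
    rng = random.Random(seed)
    x = y = heading = 0.0
    points = [(x, y)]
    while len(points) < steps:
        if rng.random() < 0.3:                 # chaotic element: random redirection
            heading = rng.uniform(0, 2 * math.pi)
        for _ in range(rng.randint(3, 8)):     # structured element: a short arc
            heading += 0.2
            x += math.cos(heading)
            y += math.sin(heading)
            points.append((x, y))
    return points[:steps]

path = crazy_walk()  # a polyline with themes in parts but no overall pattern
```

Note the awkward implication for the test: such a sequence is "crazy" in the stated sense yet plainly generated by a program, so crazy-lookingness alone cannot settle the programmed/non-programmed question.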


What this test shows, I believe, is the bleeding obvious - humans can and do 
produce truly spontaneous, crazy, nonprogrammed, ad hoc, unplanned sequences 
of action. Well, it should be obvious but many of you guys will fight to the 
death to defy the obvious. So one needs a simple test.


It's a considerable historical irony that painting by numbers was born 
very roughly at the same time as AI/cog sci, c. 1950.


Cog sci. is the view that we live -  paint, eat, copulate, talk, etc. - by 
numbers. That view is wrong.  We live,  paint etc. by free composition. (And 
we find both our own and nature's created forms beautiful or ugly 
precisely because 

Re: [agi] A Simple Mathematical Test of Cog Sci.

2008-01-05 Thread David Butler
I would say that the best way to simulate human intelligence with  
diversity and creativity is to create not one AGI but many. The only  
way to ensure diversity and natural selection like our own evolution  
is to simultaneously create multiple AGI's, so that we have a better  
chance of the emergence of the best path for the evolution of  
friendly AGI.


I am new to this list. Is there anyone out there who has addressed  
this issue? We have many people who are very gifted with math and  
science at the forefront of AGI, but random creativity and  
seat-of-the-pants intuition are a really big part of human evolution.  
If we create multiple AGI's, we have a chance that all of our traits  
are developed (in the same way that we are genetically programmed) in  
some way, to create a community of sorts that hopefully will be able  
to sustain our legacy of diversity and creative thought.


Dave Butler


On Jan 5, 2008, at 9:52 PM, Mike Tintner wrote:


I think I've found a simple test of cog. sci.

I take the basic premise of cog. sci. to be that the human mind -  
and therefore its every activity, or sequence of action - is  
programmed. Eric Baum epitomises cog. sci. Baum proposes [in What  
Is Thought] that underlying mind is a complex but compact program  
that corresponds to the underlying structure of the world.


As you know, I contend that that is absurd - that, yes, every human  
activity - having a conversation, writing a post, making love,  
doing a drawing etc - is massively subprogrammed, containing  
often v. large numbers of routines - but as a whole, each activity  
is a free composition. Those routines, along with isolated  
actions,  are more or less freely thrown together - freely  
associated . As a whole, our activities are more or less crazy  
walks - I use crazy to mean both structured and chaotic - and  
effectively self-contradictory.


(This has huge implications for AGI - you guys believe that an AGI  
must be programmed for its activities, I contend that free  
composition instead is essential for truly adaptive, general  
intelligence and is the basis of all animal and human activities).


So how to test cog sci? I contend that the proper, *ideal* test is  
to record humans' actual streams of thought about any problem -  
like, say, writing an essay - and even just a minute's worth will  
show that, actually, humans have major difficulties following  
anything like a joined-up, rational train of thought - or any  
stream that looks remotely like it could be programmed overall.  
(That includes more esoteric forms of programming like random  
kinds).  Actually, humans follow more or less roving, crazy streams  
of thought - not chaotic by any means, but not perfectly joined  
up either - more or less free-form, a bit like free verse -  
somewhat structured but only loosely.


I still think that this is the proper, essential approach to  
studying the connectedness, programmed or otherwise, of human  
thought. But it is obviously a complicated affair - even if one  
could record those streams of thought absolutely faithfully.


And science likes simple tests/ experiments -  the more  
mathematical and measurable the better.


So here's a simple mathematical test, which everyone can try.

Do an abstract line drawing.  (for let's say 30 secs. - on this  
particular site)


Here are a few of my spontaneous masterpieces:

http://www.imagination3.com/LaunchPage? 
aFileType=_nolivecachesessionID=message=room_email=from_email=tin 
[EMAIL PROTECTED]from_name=mike  
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105 
_194101926_970043768_gbrtranscript=_lscid= .


http://www.imagination3.com/LaunchPage? 
aFileType=_nolivecachesessionID=message=room_email=from_email=tin 
[EMAIL PROTECTED]from_name=mike  
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105 
_194033348_926554557_gbrtranscript=_lscid= .


http://www.imagination3.com/LaunchPage? 
aFileType=_nolivecachesessionID=message=room_email=from_email=tin 
[EMAIL PROTECTED]from_name=mike  
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105 
_193922629_715992016_gbrtranscript=_lscid= .


http://www.imagination3.com/LaunchPage? 
aFileType=_nolivecachesessionID=message=room_email=from_email=tin 
[EMAIL PROTECTED]from_name=mike  
tintner[EMAIL PROTECTED]to_name=aDrawingID=20080105 
_193734879_1708083161_gbrtranscript=_lscid= .


The beauty of this site is that it does indeed record the actual  
stream of thought/ drawing - and not just the end result. (It would  
be v. interesting to see many other people's tests).


Now you guys are mathematicians - I contend that those drawings are  
indeed crazy, spontaneous, free compositions - they have themes and  
patterns in parts and are by no means entirely random, but they are  
certainly not patterned or programmed overall either.  Can you find  
an overall pattern or program to any of them - let alone a program  
that underlies ALL of them? Or, if you prefer, can you find a suite  
of