Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-11 Thread James Ratcliff
"irrationality" - used to describe thinking and actions which are, or appear 
to be, less useful or logical than the available alternatives; 
and "rational" would be the opposite of that.

This line of thinking is more concerned with the behaviour of the entities, 
which requires goal orientation and other things.

By this definition, an irrational being is NOT working effectively towards its 
goal.  This may be necessary in order to discover new routes and unique solutions 
to a problem, and it will accordingly be included in most of the AGIs I 
have heard described so far.

The other definition which seems to be in the air around here is:
irrational - acting without reason or logic.

An entity that acts entirely without reason or logic is a totally random being: 
it will choose to do things for no reason, and will never find any goals or 
solutions except by accidentally hitting them.

In AGI terms, any entity given multiple equally rewarding alternative paths to 
a goal may randomly select any of them.
This may be considered acting without reason, as there was no real basis for 
choosing path 1 as opposed to path 2; but it may also be very reasonable, since in 
any situation where either path can be chosen, choosing one of them is reasonable.  
(Choosing no path at that point would indeed be irrational and pointless.)
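In code, tie-breaking among equally rewarding paths is exactly this kind of "reasonable" random choice. A minimal sketch (the `choose_path`, `paths`, and `reward` names are illustrative, not taken from any system under discussion here):

```python
import random

def choose_path(paths, reward):
    """Pick among the highest-reward paths, breaking ties at random.

    Choosing randomly among equally rewarding options is not
    'acting without reason': the reason is that any of them is best.
    """
    best = max(reward(p) for p in paths)
    candidates = [p for p in paths if reward(p) == best]
    return random.choice(candidates)

# Two equally rewarding paths: either choice is defensible.
print(choose_path(["path-1", "path-2"], lambda p: 1.0))
```

With unequal rewards the random element disappears, which is the sense in which the choice is still "reasonable" overall.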

I haven't seen any proposed solutions that require any real level of acting 
without reason, and neural nets and the others are all reasonable, though the 
reasoning may be complex and hidden from us, or hard to understand.

The example given previously, of a computer system that changes its 
thinking in the middle of discovering a solution, is not irrational: it is 
just continuing to follow its rules, it can still change those rules as it 
allows, and it may have very good reason for doing so.

James Ratcliff

Mike Tintner [EMAIL PROTECTED] wrote:  Richard: Mike,
 I think you are going to have to be specific about what you mean by 
 irrational because you mostly just say that all the processes that could 
 possibly exist in computers are rational, and I am wondering what else is 
 there that irrational could possibly mean.  I have named many processes 
 that seem to me to fit the irrational definition, but without being too 
 clear about it you have declared them all to be just rational, so now I 
 have no idea what you can be meaning by the word.

Richard,

Er, it helps to read my posts. From my penultimate post to you:

If a system can change its approach and rules of reasoning at literally any 
step of
problem-solving, then it is truly crazy/ irrational (think of a crazy
path). And it will be capable of producing all the human irrationalities
that I listed previously - like not even defining or answering the problem.
It will by the same token have the capacity to be truly creative, because it
will ipso facto be capable of lateral thinking at any step of
problem-solving. Is your system capable of that? Or anything close? Somehow
I doubt it, or you'd already be claiming the solution to both AGI and
computational creativity.

A rational system follows a set of rules in solving a problem  (which can 
incl. rules that self-modify according to metarules) ;  a creative, 
irrational system can change/break/create any and all rules (incl. 
metarules) at any point of solving a problem  -  the ultimate, by 
definition, in adaptivity. (Much like you, and indeed all of us, change the 
rules of engagement much of the time in our discussions here).
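The distinction can be made concrete in a toy sketch (everything here is invented for illustration and is nobody's actual design): the "rational" system may rewrite its rules, but only as a fixed metarule allows, while the "rule-breaking" system may replace any rule, metarule included, mid-run.

```python
def rule_bound_step(state, rules, metarule):
    """A 'rational' step in Tintner's sense: apply the rules, then let
    a fixed metarule rewrite the rule set -- self-modifying, but the
    metarule itself is never touched."""
    for rule in rules:
        state = rule(state)
    return state, metarule(rules)

def rule_breaking_step(state, rules, metarule):
    """The 'crazy/creative' step: at any point the system may discard
    or replace any rule, including the metarule itself."""
    if state % 7 == 0:                            # arbitrary trigger, purely illustrative
        metarule = lambda rs: list(reversed(rs))  # even the metarule is fair game
        rules = [lambda s: s + 11]                # ...and so is the whole rule set
    for rule in rules:
        state = rule(state)
    return state, metarule(rules), metarule
```

The first function can never escape whatever its metarule permits; the second has no such ceiling, which is the "rulebound vs. rulebreaking" contrast in miniature.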

Listen, no need to reply - because you're obviously not really interested. 
To me that's ironic, though, because this is absolutely the most central 
issue there is in AGI. But no matter.


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?;


   

Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-08 Thread Mike Tintner
Richard: If someone asked that, I couldn't think of anything to say except 
...

why *wouldn't* it be possible?  It would strike me as just not a
question that made any sense, to ask for the exact reasons why it is
possible to paint things that are not representational.

Jeez, Richard, of course, it's possible... we all agree that AGI is possible 
(well in my case, only with a body). The question is - how? !*? That's what 
we're here for -   to have IDEAS.. rather than handwave... (see, I knew you 
would)  ...in this case, about how a program can be maximally adaptive - 
change course at any point


Okay here's my v.v. rough idea - the core two lines or principles of a much 
more complex program  - for engaging in any activity, solving any problem - 
with maximum adaptivity


1. Choose any reasonable path - and any reasonable way to move along it - to 
the goal.   [and then move]


[reasonable = likely to be as or more profitable than any of the other 
paths you have time to consider]


2. If you have not yet reached the goal, and if you have not any other 
superior goals [anything better to do], choose any other reasonable path - 
and way of moving - that will lead you closer to the goal.


This presupposes what the human brain clearly has - the hierarchical ability 
to recognize literally ANYTHING as a "thing," "path," "way of moving"/ 
"move," or "goal." It can perceive literally anything from these 
multifunctional perspectives. This presupposes that something like these 
concepts are fundamental to the brain's operation.


This also presupposes what you might say are - roughly - the basic 
principles of neuroeconomics and decision theory - that the brain does and 
any adaptive brain must, continually assess every action for profitability - 
for its rewards, risks and costs.


[The big deal here is those two words: "any" - any path etc. that is 
"as profitable" - those two words/concepts give maximal freedom and 
adaptivity - and true freedom]


What we're talking about here, BTW, when you think about it, is a truly 
universal program for solving, and learning how to solve, literally any 
problem.


[Oh, there has to be a third line or clause  - and a lot more too of 
course - that says:   1a. If you can't see any reasonable paths etc - look 
for some.]
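Read literally, the clauses above describe a loop something like the following sketch. Every name in it (`find_paths`, `profit`, `at_goal`, `better_goal_pending`) is a hypothetical stand-in, since the post deliberately leaves those pieces open:

```python
import random

def pursue(goal, state, find_paths, profit, at_goal, better_goal_pending,
           max_steps=100):
    """Sketch of the two-clause 'maximally adaptive' loop:
    1.  choose any reasonable path and move along it;
    1a. if no reasonable path is visible, look for some;
    2.  if not at the goal, and nothing better to do, choose another
        reasonable path that leads closer to the goal."""
    for _ in range(max_steps):
        # Clause 2 guard: stop at the goal, or if a superior goal came up.
        if at_goal(state, goal) or better_goal_pending():
            return state
        paths = find_paths(state, goal)
        if not paths:                 # clause 1a: no reasonable path visible --
            continue                  # keep looking for some
        best = max(profit(p) for p in paths)
        reasonable = [p for p in paths if profit(p) >= best]
        # Clauses 1 and 2: take any reasonable path and move along it.
        state = random.choice(reasonable)(state)
    return state
```

Paths are modelled here as callables that transform the state; all the real difficulty the post acknowledges (recognizing anything as a "path", assessing profitability) is hidden inside the stand-in functions.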


So what are your ideas, Richard, here? Have you actually thought about it? 
Jeez, what do we pay you all this money for?











Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-08 Thread Richard Loosemore

Mike Tintner wrote:
Richard: If someone asked that, I couldn't think of anything to say 
except ...

why *wouldn't* it be possible?  It would strike me as just not a
question that made any sense, to ask for the exact reasons why it is
possible to paint things that are not representational.

Jeez, Richard, of course, it's possible... we all agree that AGI is 
possible (well in my case, only with a body). The question is - how? !*? 
That's what we're here for -   to have IDEAS.. rather than handwave... 
(see, I knew you would)  ...in this case, about how a program can be 
maximally adaptive - change course at any point


Hold on a minute there.

What I have been addressing is just your initial statement:

Cognitive science treats the human mind as basically a programmed 
computational machine much like actual programmed computers - and 
programs are normally conceived of as rational. - coherent sets of steps 
etc.


The *only* point I have been trying to establish is that when you said 
"programs are normally conceived of as rational", this made no sense, 
because programs can do anything at all, rational or irrational.


Now you say "Jeez, Richard, of course, it's possible [to build programs 
that are either rational or irrational]. The question is - how? !*?"


No, that is another question, one that I have not been addressing.

My only goal was to establish that you cannot say that programs built by 
cognitive scientists are *necessarily* "rational" (in your usage), or 
that they are "normally conceived of as rational."


Most of the theories/models/programs built by cognitive scientists are 
completely neutral on rationality issues of the sort you 
talk about, because they are about small aspects of cognition where 
those issues don't have any bearing.


There are an infinite number of ways to build a cognitive model in such 
a way that it fits your definition of irrational, just as there are an 
infinite number of ways to use paint in such a way that the resulting 
picture is abstract rather than representational.  Nothing would be 
proved by my producing an actual example of an irrational cognitive 
model, just as nothing would be proved by my painting an abstract 
painting just to prove that that is possible.


I think you have agreed that computers and computational models can in 
principle be used to produce systems that fit your definition of 
irrational, and since that is what I was trying to establish, I think 
we're done, no?


If you don't agree, then there is probably something wrong with your 
picture of what computers can do (how they can be programmed), and it 
would be helpful if you would say what exactly it is about them that 
makes you think this is not possible.


Looking at your suggestion below, I am guessing that you might see an 
AGI program as involving explicit steps of the sort "If x is true, then 
consider these factors and then proceed to the next step."  That is an 
extraordinarily simplistic picture of what computer systems, in 
general, are able to do.  So simplistic as to be not general at all.


For example, in my system, decisions about what to do next are the 
result of hundreds or thousands of atoms (basic units of knowledge, 
all of which are active processors) coming together in a very 
context-dependent way and trying to form coherent models of the 
situation.  This cloud of knowledge atoms will cause an outcome to 
emerge, but they almost never go through a sequence of steps, like a 
linear computer program, to generate an outcome.  As a result I cannot 
exactly predict what they will do on a particular occasion (they will 
have a general consistency in their behavior, but that consistency is 
not imposed by a sequence of machine instructions, it is emergent).
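A deliberately crude sketch of the flavour of that mechanism (this is not Loosemore's actual design; the "atoms", their activation probabilities, and the voting are all invented for illustration): many small units contribute in a context-dependent way, and the outcome emerges from their combined weight rather than from a fixed sequence of steps.

```python
import random

def emergent_decision(atoms, context, trials=200):
    """Many small knowledge 'atoms' each nudge the outcome in a
    context-dependent way; no single rule sequence determines the
    result, yet behaviour is broadly consistent across runs."""
    votes = {}
    for _ in range(trials):
        # Each trial, a random context-weighted subset of atoms is active.
        active = [a for a in atoms
                  if random.random() < context.get(a["name"], 0.5)]
        for a in active:
            votes[a["favours"]] = votes.get(a["favours"], 0) + a["weight"]
    return max(votes, key=votes.get)
```

The exact outcome of any single trial is unpredictable, but the aggregate behaviour is consistent, which is the "emergent, not instruction-imposed" consistency the paragraph describes.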


One of my problems is that it is so obvious to me that programs can do 
things that do not look rule governed that I can hardly imagine anyone 
would think otherwise.  Perhaps that is the source of the 
misunderstanding here.



Richard Loosemore


Okay here's my v.v. rough idea - the core two lines or principles of a 
much more complex program  - for engaging in any activity, solving any 
problem - with maximum adaptivity


1. Choose any reasonable path - and any reasonable way to move along it 
- to the goal.   [and then move]


[reasonable = likely to be as or more profitable than any of the 
other paths you have time to consider]


2. If you have not yet reached the goal, and if you have not any other 
superior goals [anything better to do], choose any other reasonable 
path - and way of moving - that will lead you closer to the goal.


This presupposes what the human brain clearly has - the hierarchical 
ability to recognize literally ANYTHING as a "thing," "path," "way of 
moving"/"move," or "goal." It can perceive literally anything from these 
multifunctional perspectives. This presupposes that something like these 
concepts are fundamental to the brain's operation.


This also presupposes what you 

Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-08 Thread Mike Tintner

Richard:  in my system, decisions about what to do next are the
result of hundreds or thousands of atoms (basic units of knowledge,
all of which are active processors) coming together in a very
context-dependent way and trying to form coherent models of the
situation.  This cloud of knowledge atoms will cause an outcome to
emerge, but they almost never go through a sequence of steps, like a
linear computer program, to generate an outcome.  As a result I cannot
exactly predict what they will do on a particular occasion (they will
have a general consistency in their behavior, but that consistency is
not imposed by a sequence of machine instructions, it is emergent).


Sounds - just a tad - like somewhat recent Darwinian selection ideas of how 
the brain thinks. Do you think the brain actually thinks in your way? 
It doesn't have to - but you claim your system is based on the brain. (You don't 
have a self engaged in conscious, "to be or not to be" decision-making, I take it?)






RE: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-08 Thread Ed Porter
Mike,

When I write about my system, (which sounds like it is designed somewhat
like yours), I am talking about a system that has only been thought about
deeply, but never yet built.

When you write about my system do you actually have something up and
running?  If so, hats off to you.  

And, if so, how much do you have up and running, how much of it can you
describe, and what sorts of things can it do and how well does it work?

Ed Porter

-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, December 08, 2007 4:16 PM
To: agi@v2.listbox.com
Subject: Re: Human Irrationality [WAS Re: [agi] None of you seem to be able
..]

Richard:  in my system, decisions about what to do next are the
result of hundreds or thousands of atoms (basic units of knowledge,
all of which are active processors) coming together in a very
context-dependent way and trying to form coherent models of the
situation.  This cloud of knowledge atoms will cause an outcome to
emerge, but they almost never go through a sequence of steps, like a
linear computer program, to generate an outcome.  As a result I cannot
exactly predict what they will do on a particular occasion (they will
have a general consistency in their behavior, but that consistency is
not imposed by a sequence of machine instructions, it is emergent).


Sounds - just a tad - like somewhat recent Darwinian selection ideas of how 
the brain thinks. Do you think the brain actually thinks in your way? 
It doesn't have to - but you claim your system is based on the brain. (You don't 
have a self engaged in conscious, "to be or not to be" decision-making, I take it?)




Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-08 Thread Richard Loosemore

Ed Porter wrote:

Mike,

When I write about my system, (which sounds like it is designed somewhat
like yours), I am talking about a system that has only been thought about
deeply, but never yet built.

When you write about my system do you actually have something up and
running?  If so, hats off to you.  


And, if so, how much do you have up and running, how much of it can you
describe, and what sorts of things can it do and how well does it work?

Ed Porter


You presumably meant the question for me, since I was the one who said 
"my system" in the quote below.


The answer is that I do have a great deal of code implementing various 
aspects of my system, but questions like "how well does it work" are 
premature:  I am experimenting with mechanisms, and building all the 
tools needed to do more systematic experiments on those mechanisms, not 
attempting to build the entire system yet.


For the most part, though, I use the phrase "my system" to mean the 
architecture, which is more detailed than the particular code I have 
written.




Richard Loosemore



-Original Message-
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, December 08, 2007 4:16 PM

To: agi@v2.listbox.com
Subject: Re: Human Irrationality [WAS Re: [agi] None of you seem to be able
..]

Richard:  in my system, decisions about what to do next are the
result of hundreds or thousands of atoms (basic units of knowledge,
all of which are active processors) coming together in a very
context-dependent way and trying to form coherent models of the
situation.  This cloud of knowledge atoms will cause an outcome to
emerge, but they almost never go through a sequence of steps, like a
linear computer program, to generate an outcome.  As a result I cannot
exactly predict what they will do on a particular occasion (they will
have a general consistency in their behavior, but that consistency is
not imposed by a sequence of machine instructions, it is emergent).


Sounds - just a tad - like somewhat recent Darwinian selection ideas of how 
the brain thinks. Do you think the brain actually thinks in your way? 
It doesn't have to - but you claim your system is based on the brain. (You don't 
have a self engaged in conscious, "to be or not to be" decision-making, I take it?)






Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore

Mike Tintner wrote:
Well, I'm not sure if not doing logic necessarily means a system is 
irrational, i.e. if rationality equates to logic.  Any system 
consistently followed can be classed as rational. If, for example, a 
program consistently does Freudian free association and produces nothing 
but a chain of associations with some connection:


bird - - feathers - four..tops 

or on the contrary, a 'nonsense' chain where there is NO connection..

logic.. sex... ralph .. essence... pi... Loosemore...

then it is rational - it consistently follows a system with a set of 
rules. And the rules could, for argument's sake, specify that every step 
is illogical - as in breaking established rules of logic - or that steps 
are alternately logical and illogical.  That too would be rational. 
Neural nets, from the little I know, are also rational inasmuch as they 
follow rules. Ditto Hofstadter & Johnson-Laird: from, again, the little I 
know, they also seem rational - Johnson-Laird's jazz improvisation program, 
from my cursory reading, seemed rational and not truly creative.


Sorry to be brief, but:

This raises all sorts of deep issues about what exactly you would mean 
by "rational".  If a bunch of things (computational processes) come 
together and each contributes something to a "decision" that results in 
an output, and the exact output choice depends on so many factors coming 
together that it would not necessarily be the same output if roughly the 
same situation occurred another time, and if none of these things looked 
like a "rule" of any kind, then would you still call it rational?


If the answer is yes then whatever would count as not rational?


Richard Loosemore



I do not know enough to pass judgment on your system, but  you do strike 
me as a rational kind of guy (although probably philosophically much 
closer to me than most here  as you seem to indicate).  Your attitude to 
emotions seems to me rational, and your belief that you can produce an 
AGI that will almost definitely be cooperative , also bespeaks rationality.


In the final analysis, irrationality = creativity (although I'm using 
the word with a small "c", rather than the social kind, where someone 
produces a new idea that no one in society has had or published before). 
If a system can change its approach and rules of reasoning at literally 
any step of problem-solving, then it is truly crazy/ irrational (think 
of a crazy path). And it will be capable of producing all the human 
irrationalities that I listed previously - like not even defining or 
answering the problem. It will by the same token have the capacity to be 
truly creative, because it will ipso facto be capable of lateral 
thinking at any step of problem-solving. Is your system capable of that? 
Or anything close? Somehow I doubt it, or you'd already be claiming the 
solution to both AGI and computational creativity.


But yes, please do send me your paper.

P.S. I hope you won't - I actually don't think you will - get all 
pedantic on me like so many AI-ers and say "ah, but we already have 
programs that can modify their rules." Yes, but they do that according 
to metarules - they are still basically rulebound. A crazy/creative 
program is rulebreaking (and rulecreating) - it can break ALL the rules, 
incl. metarules. Rulebound vs. rulebreaking is one of the most crucial 
differences between narrow AI and AGI.



Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not rational in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are the programmers/system-builders who try to create 
programs (and what are the programs/systems) that are either 
"irrational" or "non-rational" (and described as such)?


I'm a little partied out right now, so all I have time for is to 
suggest: Hofstadter's group builds all kinds of programs that do 
things without logic.  Phil Johnson-Laird (and students) used to try 
to model reasoning ability using systems that did not do logic.  All 
kinds of language processing people use various kinds of neural nets:  
see my earlier research papers with Gordon Brown et al, as well as 
folks like Mark Seidenberg, Kim Plunkett etc.  Marslen-Wilson and 
Tyler used something called a Cohort Model to describe some aspects 
of language.


I am just dragging up the name of anyone who has ever done any kind of 
computer modelling of some aspect of cognition:  none of these people 
use systems that do any kind of logical processing.  I could 
go on indefinitely.  There are probably hundreds of them.  They do not 
try to build complete systems, of course, just local models.



When I have proposed (in different threads) that the mind is not 
rationally, algorithmically programmed I have been met with uniform 
and often fierce resistance both on this and another AI forum.


Hey, join the club!  You have read my little brouhaha with 

Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore


Mike,

I think you are going to have to be specific about what you mean by 
irrational because you mostly just say that all the processes that 
could possibly exist in computers are rational, and I am wondering what 
else is there that irrational could possibly mean.  I have named many 
processes that seem to me to fit the irrational definition, but 
without being too clear about it you have declared them all to be just 
rational, so now I have no idea what you can be meaning by the word.



Richard Loosemore


Mike Tintner wrote:
Richard:This raises all sorts of deep issues about what exactly you 
would mean

by rational.  If a bunch of things (computational processes) come
together and each contribute something to a decision that results in
an output, and the exact output choice depends on so many factors coming
together that it would not necessarily be the same output if roughly the
same situation occurred another time, and if none of these things looked
like a rule of any kind, then would you still call it rational?If 
the answer is yes then whatever would count as not rational?


I'm not sure what you mean - but this seems consistent with other 
impressions I've been getting of your thinking.


Let me try and cut through this: if science were to change from its 
prevailing conception of the human mind as a rational, computational 
machine to what I am suggesting - i.e. a creative, compositional, 
irrational machine - we would be talking of a major revolution that 
would impact right through the sciences - and radically extend the scope 
of scientific investigation into human thought. It would be the end of 
the deterministic conception of humans and animals and ultimately be a 
revolution of Darwinian proportions.


Hofstadter & co. are absolutely not revolutionaries. Johnson-Laird 
conceives of the human mind as an automaton. None of them are 
fundamentally changing the prevailing conceptions of cognitive science. 
No one has reacted to them with shock or horror or delight.


I suspect that what you are talking about is loosely akin to the ideas 
of some that quantum mechanics has changed scientific determinism. It 
hasn't - the fact that we can't measure certain quantum phenomena with 
precision does not mean that they are not fundamentally deterministic. 
And science remains deterministic.


Similarly, if you make a computer system very complex, keep changing the 
factors involved in computations, add random factors and whatever, you are 
not necessarily making it non-rational. You make it very difficult to 
understand the computer's rationality (and possibly extend our 
conception of rationality), but the system may still be basically 
rational, just as quantum particles are still, in all probability, 
basically deterministic.


As a side-issue, I don't believe that human reasoning, conscious and 
unconscious, is  remotely, even infinitesimally as complex as that of 
the AI systems you guys all seem to be building. The human brain surely 
never seizes up with the kind of complex, runaway calculations that 
y'all have been conjuring up in your arguments. That only happens when 
you have a rational system that obeys basically rigid (even if complex) 
rules.  The human brain is cleverer than that - it doesn't have any 
definite rules for any activities. In fact, you should be so lucky as to 
have a nice, convenient set of rules, even complex ones,  to guide you 
when you sit down to write your computer programs.










Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore

Mike Tintner wrote:

Richard: Mike,
I think you are going to have to be specific about what you mean by 
irrational because you mostly just say that all the processes that 
could possibly exist in computers are rational, and I am wondering 
what else is there that irrational could possibly mean.  I have 
named many processes that seem to me to fit the irrational 
definition, but without being too clear about it you have declared 
them all to be just rational, so now I have no idea what you can be 
meaning by the word.



Richard,

Er, it helps to read my posts. From my penultimate post to you:

If a system can change its approach and rules of reasoning at literally 
any step of

problem-solving, then it is truly crazy/ irrational (think of a crazy
path). And it will be capable of producing all the human irrationalities
that I listed previously - like not even defining or answering the problem.
It will by the same token have the capacity to be truly creative, 
because it

will ipso facto be capable of lateral thinking at any step of
problem-solving. Is your system capable of that? Or anything close? Somehow
I doubt it, or you'd already be claiming the solution to both AGI and
computational creativity.

A rational system follows a set of rules in solving a problem  (which 
can incl. rules that self-modify according to metarules) ;  a creative, 
irrational system can change/break/create any and all rules (incl. 
metarules) at any point of solving a problem  -  the ultimate, by 
definition, in adaptivity. (Much like you, and indeed all of us, change 
the rules of engagement much of the time in our discussions here).


Listen, no need to reply - because you're obviously not really 
interested. To me that's ironic, though, because this is absolutely the 
most central issue there is in AGI. But no matter.


No, I am interested, I was just confused, and I did indeed miss the 
above definition (got a lot I have to do right now, so am going very 
fast through my postings) -- sorry about that.


The fact is that the computational models I mentioned (those by 
Hofstadter etc.) are all just attempts to understand part of the problem 
of how a cognitive system works, and all of them are consistent with the 
design of a system that is irrational according to your above 
definition.  They may look rational, but that is just an illusion: 
every one of them is so small that it is completely neutral with respect 
to the rationality of a complete system.  They could be used by someone 
who wanted to build a rational system or an irrational system; it does 
not matter.


For my own system (and for Hofstadter too), the natural extension of the 
system to a full AGI design would involve


a system [that] can change its approach and rules of reasoning at 
literally any step of problem-solving  it will be capable of

producing all the human irrationalities that I listed previously -
like not even defining or answering the problem. It will by the same
token have the capacity to be truly creative, because it will ipso
facto be capable of lateral thinking at any step of problem-solving.


This is very VERY much part of the design.

I prefer not to use the term "irrational" to describe it (because that 
has other connotations), but using your definition, it would be irrational.


There is not any problem with doing all of this.

Does this clarify the question?

I think really I would reflect the question back at you and ask why you 
would think that this is a difficult thing to do?  It is not difficult 
to design a system this way:  some people like the trad-AI folks don't 
do it (yet), and appear not to be trying, but there is nothing in 
principle that makes it difficult to build a system of this sort.





Richard Loosemore





Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Mike Tintner

Richard: Mike,
I think you are going to have to be specific about what you mean by 
irrational because you mostly just say that all the processes that could 
possibly exist in computers are rational, and I am wondering what else is 
there that irrational could possibly mean.  I have named many processes 
that seem to me to fit the irrational definition, but without being too 
clear about it you have declared them all to be just rational, so now I 
have no idea what you can be meaning by the word.



Richard,

Er, it helps to read my posts. From my penultimate post to you:

If a system can change its approach and rules of reasoning at literally any 
step of

problem-solving, then it is truly crazy/ irrational (think of a crazy
path). And it will be capable of producing all the human irrationalities
that I listed previously - like not even defining or answering the problem.
It will by the same token have the capacity to be truly creative, because it
will ipso facto be capable of lateral thinking at any step of
problem-solving. Is your system capable of that? Or anything close? Somehow
I doubt it, or you'd already be claiming the solution to both AGI and
computational creativity.

A rational system follows a set of rules in solving a problem (which can 
incl. rules that self-modify according to metarules); a creative, 
irrational system can change/break/create any and all rules (incl. 
metarules) at any point of solving a problem - the ultimate, by 
definition, in adaptivity. (Much like you, and indeed all of us, change the 
rules of engagement much of the time in our discussions here).
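The rulebound/rulebreaking distinction above can be sketched in code. This is a hypothetical toy - not any poster's actual system - where both solvers, the function names, and the mutation probabilities are invented purely for illustration:

```python
import random

# Rules are plain functions from state to state; a rule set is a list.
double = lambda x: x * 2
inc = lambda x: x + 1

def rule_bound(state, rules, steps):
    """'Rational' solver: the same fixed rule set, chosen by a fixed
    selection policy, at every step of problem-solving."""
    for _ in range(steps):
        state = rules[state % len(rules)](state)   # deterministic choice
    return state

def rule_breaking(state, rules, steps, rng):
    """'Irrational' solver in Tintner's sense: at any step it may invent
    or discard rules -- no fixed metarule set sits above the process."""
    rules = list(rules)
    for _ in range(steps):
        if rng.random() < 0.3:                     # invent a brand-new rule
            k = rng.randrange(1, 10)
            rules.append(lambda x, k=k: x + k)
        if len(rules) > 1 and rng.random() < 0.2:  # break (drop) an old rule
            rules.pop(rng.randrange(len(rules)))
        state = rng.choice(rules)(state)           # no fixed selection policy
    return state

print(rule_bound(3, [double, inc], 5))             # always 64
print(rule_breaking(3, [double, inc], 5, random.Random(0)))
```

Note the sketch only illustrates the distinction, not a resolution: since the mutation policy itself is fixed (and seeded), the second solver is arguably still rule-following at a higher level - which is exactly the regress the thread is arguing about.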


Listen, no need to reply - because you're obviously not really interested. 
To me that's ironic, though, because this is absolutely the most central 
issue there is in AGI. But no matter.





Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-07 Thread Richard Loosemore

Mike Tintner wrote:
Richard:For my own system (and for Hofstadter too), the natural 
extension of the

system to a full AGI design would involve

a system [that] can change its approach and rules of reasoning at 
literally any step of problem-solving ... it will be capable of

producing all the human irrationalities that I listed previously -
like not even defining or answering the problem. It will by the same
token have the capacity to be truly creative, because it will ipso
facto be capable of lateral thinking at any step of problem-solving.


This is very VERY much part of the design.

There is not any problem with doing all of this.

Does this clarify the question?

I think really I would reflect the question back at you and ask why you
would think that this is a difficult thing to do?

Richard,

Fine. Sounds interesting. But you don't actually clarify or explain 
anything. Why don't you explain how you or anyone else can fundamentally 
change your approach/rules at any point of solving a problem?


Why don't you, just in plain English - in philosophical as opposed to 
programming form - set out the key rules or principles that allow you 
or anyone else to do this? I have never seen such key rules or
principles anywhere, nor indeed even adumbrated anywhere. (Fancy word, 
but it just came to mind). And since they are surely a central problem 
for AGI - and no one has solved AGI - how on earth could I not think 
this a difficult matter?


I have some v. rough ideas about this, which I can gladly set out.  But 
I'd like to hear yours -   you should be able to do it briefly. But 
please, no handwaving.


I will try to think about your question when I can but meanwhile think 
about this:  if we go back to the analogy of painting and whether or not 
it can be used to depict things that are abstract or 
non-representational, how would you respond to someone who wanted exact 
details of how come painting could allow that to be possible?


If someone asked that, I couldn't think of anything to say except ... 
why *wouldn't* it be possible?  It would strike me as just not a 
question that made any sense, to ask for the exact reasons why it is 
possible to paint things that are not representational.


I simply cannot understand why anyone would think it not possible to do 
that.  It is possible:  it is not easy to do it right, but that's not 
the point.  Computers can be used to program systems of any sort 
(including deeply irrational things like Microsoft Office), so why would 
anyone think that AGI systems must exhibit only a certain sort of design?


This isn't handwaving, it is just genuine bafflement.




Richard Loosemore



Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Mike Tintner

Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not rational in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are these programmers/systembuilders who try to create programs (and 
what are the programs/systems) that are either irrational or 
non-rational (and described as such)?


When I have proposed (in different threads) that the mind is not rationally, 
algorithmically programmed I have been met with uniform and often fierce 
resistance both on this and another AI forum. My argument re the philosophy 
of mind of cog sci & other sciences is of course not based on such
reactions, but they do confirm my argument. And the position you at first 
appear to be adopting is unique both in my experience and my reading.


2) How is your system not rational? Does it not use algorithms?

And could you give a specific example or two of the kind of problem that it 
deals with - non-rationally?  (BTW I don't think I've seen any problem 
examples for your system anywhere, period - for all I know, it could be 
designed to read children's stories, bomb Iraq, do syllogisms, work out your
domestic budget, or work out the meaning of life - or play and develop in 
virtual worlds).





Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Richard Loosemore

Mike Tintner wrote:

Richard: In the same way computer programs are completely
neutral and can be used to build systems that are either rational or
irrational.  My system is not rational in that sense at all.

Richard,

Out of interest, rather than pursuing the original argument:

1) Who are these programmers/systembuilders who try to create programs 
(and what are the programs/systems) that are either irrational or 
non-rational (and described as such)?


I'm a little partied out right now, so all I have time for is to 
suggest:  Hofstadter's group builds all kinds of programs that do things 
without logic.  Phil Johnson-Laird (and students) used to try to model 
reasoning ability using systems that did not do logic.  All kinds of 
language processing people use various kinds of neural nets:  see my 
earlier research papers with Gordon Brown et al, as well as folks like 
Mark Seidenberg, Kim Plunkett etc.  Marslen-Wilson and Tyler used 
something called a Cohort Model to describe some aspects of language.


I am just dragging up the name of anyone who has ever done any kind of 
computer modelling of some aspect of cognition:  all of these people do 
not use systems that do any kind of logical processing.  I could go on 
indefinitely.  There are probably hundreds of them.  They do not try to 
build complete systems, of course, just local models.



When I have proposed (in different threads) that the mind is not 
rationally, algorithmically programmed I have been met with uniform and 
often fierce resistance both on this and another AI forum. 


Hey, join the club!  You have read my little brouhaha with Yudkowsky 
last year I presume?  A lot of AI people have their heads up their 
asses, so yes, they believe that rationality is God.


It does depend how you put it though:  sometimes you use "rationality" to 
not mean what they mean, so that might explain the ferocity.



My argument
re the philosophy of mind of cog sci & other sciences is of course not 
based on such reactions, but they do confirm my argument. And the 
position you at first appear to be adopting is unique both in my 
experience and my reading.


2) How is your system not rational? Does it not use algorithms?


It uses dynamic relaxation in a generalized neural net.  Too much to 
explain in a hurry.
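For readers unfamiliar with the term: "dynamic relaxation" generally means letting the units of a network repeatedly update toward mutual consistency until the state settles, rather than executing an explicit chain of logical inferences. A generic Hopfield-style sketch follows - a standard textbook construction, with no claim about Loosemore's undisclosed design:

```python
import numpy as np

def relax(W, state, max_iters=100):
    """Asynchronously update +/-1 units under symmetric weights W until
    the network settles into a fixed point (a local energy minimum)."""
    state = state.copy()
    for _ in range(max_iters):
        changed = False
        for i in range(len(state)):
            s = 1 if W[i] @ state >= 0 else -1
            if s != state[i]:
                state[i] = s
                changed = True
        if not changed:       # no unit wants to flip: the net has relaxed
            break
    return state

# Store one pattern via a Hebbian outer product (zero diagonal), then
# let the net relax from a corrupted cue back to the stored pattern.
pattern = np.array([1, -1, 1, -1, 1])
W = np.outer(pattern, pattern) - np.eye(5)
cue = np.array([1, -1, -1, -1, 1])   # one unit flipped
print(relax(W, cue))                 # settles back onto the stored pattern
```

The point relevant to the thread: nothing in the settling process looks like a rule of inference being applied, yet the trajectory is fully determined by the weights - which is why the rational/irrational label for such systems is contested.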



And could you give a specific example or two of the kind of problem that 
it deals with - non-rationally?  (BTW I don't think I've seen any 
problem examples for your system anywhere, period - for all I know, it 
could be designed to read children's stories, bomb Iraq, do syllogisms, 
work out your domestic budget, or work out the meaning of life - or play 
and develop in virtual worlds).


I am playing this close, for the time being, but I have released a small 
amount of it in a forthcoming neuroscience paper.  I'll send it to you 
tomorrow if you like, but it does not go into a lot of detail.



Richard Loosemore



Re: Human Irrationality [WAS Re: [agi] None of you seem to be able ...]

2007-12-06 Thread Mike Tintner
Well, I'm not sure if "not doing logic" necessarily means a system is 
irrational, i.e. if rationality equates to logic.  Any system consistently 
followed can classify as rational. If, for example, a program consistently 
does Freudian free association and produces nothing but a chain of 
associations with some connection:


bird - - feathers - four..tops 

or on the contrary, a 'nonsense' chain where there is NO connection..

logic.. sex... ralph .. essence... pi... Loosemore...

then it is rational - it consistently follows a system with a set of rules. 
And the rules could, for argument's sake, specify that every step is 
illogical - as in breaking established rules of logic - or that steps are 
alternately logical and illogical.  That too would be rational. Neural nets 
from the little I know are also rational inasmuch as they follow rules. 
Ditto Hofstadter & Johnson-Laird from again the little I know also seem 
rational - Johnson-Laird's jazz improvisation program from my cursory 
reading seemed rational and not truly creative.
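Tintner's free-association point can be made concrete with a toy (the word lists and function names are hypothetical, chosen only to echo his two example chains): both programs below count as "rational" in his sense, because each consistently follows a fixed rule - even the rule "pick the next word with no semantic connection at all".

```python
import random

ASSOCIATIONS = {"bird": "feathers", "feathers": "four", "four": "tops"}
LEXICON = ["logic", "sex", "ralph", "essence", "pi", "loosemore"]

def associative_chain(seed, n):
    """Chain WITH connections: follow a stored association at each step."""
    chain = [seed]
    for _ in range(n):
        if chain[-1] not in ASSOCIATIONS:
            break
        chain.append(ASSOCIATIONS[chain[-1]])
    return chain

def nonsense_chain(n, rng):
    """Chain with NO connections: still a consistent rule
    (uniform random sampling from a fixed lexicon)."""
    return [rng.choice(LEXICON) for _ in range(n)]

print(associative_chain("bird", 5))  # ['bird', 'feathers', 'four', 'tops']
print(nonsense_chain(4, random.Random(0)))
```

Either way the system-plus-rules is fixed in advance, which is why, on this definition, neither program is irrational - only a program that could discard the chaining rule itself mid-run would be.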


I do not know enough to pass judgment on your system, but you do strike me 
as a rational kind of guy (although probably philosophically much closer to 
me than most here, as you seem to indicate).  Your attitude to emotions 
seems to me rational, and your belief that you can produce an AGI that will 
almost definitely be cooperative also bespeaks rationality.


In the final analysis, irrationality = creativity (although I'm using the 
word with a small c, rather than the social kind, where someone produces a 
new idea that no one in society has had or published before). If a system 
can change its approach and rules of reasoning at literally any step of 
problem-solving, then it is truly crazy/ irrational (think of a crazy 
path). And it will be capable of producing all the human irrationalities 
that I listed previously - like not even defining or answering the problem. 
It will by the same token have the capacity to be truly creative, because it 
will ipso facto be capable of lateral thinking at any step of 
problem-solving. Is your system capable of that? Or anything close? Somehow 
I doubt it, or you'd already be claiming the solution to both AGI and 
computational creativity.


But yes, please do send me your paper.

P.S. I hope you won't - I actually don't think you will - get all 
pedantic on me like so many AI-ers & say "ah, but we already have programs 
that can modify their rules." Yes, but they do that according to metarules - 
they are still basically rulebound. A crazy/ creative program is 
rulebreaking (and rulecreating) - can break ALL the rules, incl. metarules. 
Rulebound/rulebreaking is one of the most crucial differences between narrow 
AI/AGI.


