Mike Tintner wrote:
Richard: Mike,
I think you are going to have to be specific about what you mean by "irrational", because so far you mostly just say that all the processes that could possibly exist in computers are rational, and I am left wondering what else "irrational" could possibly mean. I have named many processes that seem to me to fit the "irrational" definition, but you have declared them all to be just rational without being very clear about why, so now I have no idea what you mean by the word.

Richard,

Er, it helps to read my posts. From my penultimate post to you:

"If a system can change its approach and rules of reasoning at literally any step of
problem-solving, then it is truly "crazy"/ irrational (think of a crazy
path). And it will be capable of producing all the human irrationalities
that I listed previously - like not even defining or answering the problem.
It will by the same token have the capacity to be truly creative, because it
will ipso facto be capable of lateral thinking at any step of
problem-solving. Is your system capable of that? Or anything close? Somehow
I doubt it, or you'd already be claiming the solution to both AGI and
computational creativity."

A rational system follows a set of rules in solving a problem (which can include rules that self-modify according to metarules); a creative, irrational system can change, break, or create any and all rules (including the metarules) at any point of solving a problem: the ultimate, by definition, in adaptivity. (Much as you, and indeed all of us, change the rules of engagement much of the time in our discussions here.)
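
To pin the distinction down, here is a toy sketch of the "rational" case (Python, purely illustrative; the rules and metarules are invented stand-ins, not anyone's actual system). The metarules may reorder or tune the rules, but the metarules themselves, and the control loop, are never open to revision:

    def double(state):
        return state * 2

    def add_one(state):
        return state + 1

    def prefer_double_when_even(rules, state):
        # A metarule: it may reorder the rule set, but it is itself fixed for good.
        return sorted(rules, key=lambda r: 0 if (state % 2 == 0 and r is double) else 1)

    def rational_solve(start, goal, rules, metarules, max_steps=50):
        state = start
        for _ in range(max_steps):
            if state >= goal:
                return state
            for meta in metarules:      # metarules adjust the rule set...
                rules = meta(rules, state)
            state = rules[0](state)     # ...but selection and the loop never change
        return state

    print(rational_solve(1, goal=100, rules=[add_one, double],
                         metarules=[prefer_double_when_even]))   # -> 128

However elaborate the metarules get, the framework itself stays fixed, and that is what makes the system rational in my sense.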

Listen, no need to reply, because you're obviously not really interested. To me that's ironic, though, because this is absolutely the most central issue there is in AGI. But no matter.

No, I am interested, I was just confused, and I did indeed miss the above definition (I have a lot to do right now, so I am going through my postings very fast); sorry about that.

The fact is that the computational models I mentioned (those by Hofstadter and others) are all just attempts to understand part of the problem of how a cognitive system works, and all of them are consistent with the design of a system that is irrational according to your above definition. They may look rational, but that is just an illusion: every one of them is so small that it is completely neutral with respect to the rationality of a complete system. They could be used by someone who wanted to build a rational system or an irrational system; it does not matter.

For my own system (and for Hofstadter's too), the natural extension to a full AGI design would involve

a system [that] can change its approach and rules of reasoning at literally any step of problem-solving ... it will be capable of producing all the human irrationalities that I listed previously, like not even defining or answering the problem. It will by the same token have the capacity to be truly creative, because it will ipso facto be capable of lateral thinking at any step of problem-solving.

This is very VERY much part of the design.

I prefer not to use the term "irrational" to describe it (because that has other connotations), but using your definition, it would be irrational.

There is no problem with doing any of this.

Does this clarify the question?

Really, I think I would reflect the question back at you and ask why you would expect this to be a difficult thing to do. It is not difficult to design a system this way: some people, like the trad-AI folks, do not do it (yet) and appear not to be trying, but there is nothing in principle that makes it difficult to build a system of this sort.
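
To show how little machinery it takes, here is a rough sketch (Python, purely illustrative; the rules and names are invented and this is not my actual design). The point is only that the entire strategy, meaning the rules, the policy that picks the next rule, and even the test for whether the problem is solved, can be ordinary mutable data, so any rule that fires can rewrite any part of it at any step:

    def double(state, strategy):
        return state * 2

    def add_one(state, strategy):
        return state + 1

    def go_lateral(state, strategy):
        # A rule that breaks the rules: mid-solution it swaps in new rules, a new
        # selection policy, and even a new definition of what counts as "solved".
        strategy["rules"] = [add_one, double]
        strategy["choose"] = lambda rules, state: rules[state % len(rules)]
        strategy["done"] = lambda state: state % 7 == 0
        return state

    def solve(start, max_steps=100):
        strategy = {
            "rules": [go_lateral, double],
            "choose": lambda rules, state: rules[0],     # initial selection policy
            "done": lambda state: state >= 100,          # initial goal
        }
        state = start
        for _ in range(max_steps):
            if strategy["done"](state):
                return state
            rule = strategy["choose"](strategy["rules"], state)
            state = rule(state, strategy)   # a rule may rewrite the strategy as it runs
        return state

    print(solve(3))   # go_lateral fires first, redefines everything; 7 counts as "solved"

Obviously a real AGI needs vastly more than this, but the freedom to change any rule, including the metarules, at any step is not where the difficulty lies.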




Richard Loosemore


