Mike Tintner wrote:
Richard: If someone asked that, I couldn't think of anything to say except ...
why *wouldn't* it be possible?  It would strike me as just not a
question that made any sense, to ask for the exact reasons why it is
possible to paint things that are not representational.

Jeez, Richard, of course, it's possible... we all agree that AGI is possible (well in my case, only with a body). The question is - how? !*? That's what we're here for - to have IDEAS.. rather than handwave... (see, I knew you would) ...in this case, about how a program can be maximally adaptive - change course at any point

Hold on a minute there.

What I have been addressing is just your initial statement:

"Cognitive science treats the human mind as basically a programmed computational machine much like actual programmed computers - and programs are normally conceived of as rational. - coherent sets of steps etc."

The *only* point I have been trying to establish is that when you said "and programs are normally conceived of as rational" this made no sense because programs can do anything at all, rational or irrational.

Now you say "Jeez, Richard, of course, it's possible [to build programs that are either rational or irrational] ..... The question is - how? !*?"

No, that is another question, one that I have not been addressing.

My only goal was to establish that you cannot say that programs built by cognitive scientists are *necessarily* "rational" (in your usage), or that they are "normally conceived of as rational".

Most of the theories/models/programs built by cognitive scientists are completely neutral on the question of "rational" issues of the sort you talk about, because they are about small aspects of cognition where those issues don't have any bearing.

There are an infinite number of ways to build a cognitive model in such a way that it fits your definition of "irrational", just as there are an infinite number of ways to use paint in such a way that the resulting picture is abstract rather than representational. Nothing would be proved by my producing an actual example of an "irrational" cognitive model, just as nothing would be proved by my painting an abstract painting just to prove that that is possible.

I think you have agreed that computers and computational models can in principle be used to produce systems that fit your definition of irrational, and since that is what I was trying to establish, I think we're done, no?

If you don't agree, then there is probably something wrong with your picture of what computers can do (how they can be programmed), and it would be helpful if you would say what exactly it is about them that makes you think this is not possible.

Looking at your suggestion below, I am guessing that you might see an AGI program as involving explicit steps of the sort "If x is true, then consider these factors and then proceed to the next step". That is an extraordinarily simplistic picture of what computer systems, in general, are able to do. So simplistic as to be not general at all.

For example, in my system, decisions about what to do next are the result of hundreds or thousands of "atoms" (basic units of knowledge, all of which are active processors) coming together in a very context-dependent way and trying to form coherent models of the situation. This cloud of knowledge atoms will cause an outcome to emerge, but they almost never go through a sequence of steps, like a linear computer program, to generate an outcome. As a result I cannot exactly predict what they will do on a particular occasion (they will have a general consistency in their behavior, but that consistency is not imposed by a sequence of machine instructions, it is emergent).
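To make that concrete, here is a toy sketch of the general idea (this is NOT my actual system; the atoms, contexts, and numbers are invented for illustration): each "atom" responds to the current context with its own degree of relevance, and the decision is whatever the whole cloud settles on, rather than the result of an explicit if-then-step sequence.

```python
# Toy illustration of context-dependent, emergent decision making.
# All names and weights here are hypothetical.

def atom(concept, relevances):
    """A tiny unit of knowledge: given a context, it reports how
    strongly it wants its concept to be part of the outcome."""
    def activate(context):
        return concept, relevances.get(context, 0.0)
    return activate

atoms = [
    atom("flee",  {"danger": 0.9, "food": 0.1}),
    atom("eat",   {"danger": 0.05, "food": 0.8}),
    atom("watch", {"danger": 0.4, "food": 0.3}),
]

def emergent_decision(context):
    # No "if x then go to step n": every atom responds to the
    # context at once, and the outcome is whatever the cloud of
    # responses adds up to.
    tallies = {}
    for a in atoms:
        concept, relevance = a(context)
        tallies[concept] = tallies.get(concept, 0.0) + relevance
    return max(tallies, key=tallies.get)

print(emergent_decision("danger"))
print(emergent_decision("food"))
```

Even in this three-atom toy, the "program" contains no sequence of decision steps; scale the cloud up to thousands of atoms with context-sensitive interactions and the behavior is consistent but not predictable from any linear instruction sequence.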

One of my problems is that it is so obvious to me that programs can do things that do not look "rule governed" that I can hardly imagine anyone would think otherwise. Perhaps that is the source of the misunderstanding here.


Richard Loosemore


Okay here's my v.v. rough idea - the core two lines or principles of a much more complex program - for engaging in any activity, solving any problem - with maximum adaptivity

1. Choose any reasonable path - and any reasonable way to move along it - to the goal. [and then move]

["reasonable" = "likely to be as or more profitable than any of the other paths you have time to consider"]

2. If you have not yet reached the goal, and if you have not any other superior goals ["anything better to do"], choose any other reasonable path - and way of moving - that will lead you closer to the goal.

This presupposes what the human brain clearly has - the hierarchical ability to recognize literally ANYTHING as a "thing", "path", "way of moving"/ "move" or "goal". It can perceive literally anything from these multifunctional perspectives. This presupposes that something like these concepts are fundamental to the brain's operation.

This also presupposes what you might say are - roughly - the basic principles of neuroeconomics and decision theory - that the brain does, and any adaptive brain must, continually assess every action for profitability - for its rewards, risks and costs.

[The big deal here is those two words "any" - and any path etc that is "as" profitable - those two words/ concepts give maximal freedom and adaptivity - and true freedom]

What we're talking about here BTW is, when you think about it, a truly "universal program" for solving, and learning how to solve, literally any problem.

[Oh, there has to be a third line or clause - and a lot more too of course - that says: 1a. If you can't see any reasonable paths etc - look for some.]
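The two clauses (plus clause 1a) can be sketched as a loop - a rough, hedged rendering only, since "path", "move" and "profitability" are left abstract in the proposal; the function names and the toy number-line problem below are my own illustration:

```python
import random

def solve(start, goal, neighbours, profitability, max_steps=1000):
    """Toy sketch of the two-clause adaptive program:
    1. choose any reasonable path toward the goal, and move;
    2. if the goal is not yet reached, choose any other
       reasonable path that leads closer to it.
    'Reasonable' = as or more profitable than the other options
    there is time to consider (here: the immediate neighbours)."""
    state = start
    for _ in range(max_steps):
        if state == goal:
            return state
        options = neighbours(state)
        if not options:          # clause 1a: no reasonable paths - look for some
            options = [start]    # (toy fallback: go back and re-search)
        best = max(profitability(o, goal) for o in options)
        # "any" path that is as profitable as the best is reasonable,
        # which is what gives the freedom to vary the route
        reasonable = [o for o in options if profitability(o, goal) >= best]
        state = random.choice(reasonable)
    return state

# Toy usage: walk the number line from 0 to 5.
result = solve(
    start=0,
    goal=5,
    neighbours=lambda s: [s - 1, s + 1],
    profitability=lambda s, g: -abs(s - g),  # closer = more profitable
)
```

The "superior goals" check and the hierarchical ability to see anything as a path or move are exactly the parts this sketch leaves out - they are the hard part, not the loop itself.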

So what are your ideas, Richard, here? Have you actually thought about it? Jeez, what do we pay you all this money for?

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?&;


