Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Jean-Paul Van Belle
You're mostly correct about the word symbols (barring onomatopoeic words such as bang, hum, clipclop, boom, hiss, howl, screech, fizz, murmur, clang, buzz, whine, tinkle, sizzle, twitter, as well as prefixes, suffixes and derived wordforms, which all allow one to derive some meaning). However, you are NOT correct

Re: [agi] rule-based NL system

2007-05-01 Thread rooftop8000
We already have programming languages. We want computers to understand natural language because we think: if you know the syntax, the semantics follow easily. You still need the code to process the objects the text is about. So it will always be a crippled NL understanding without general

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Mike Tintner
MD: What does warm look like? How about angry or happy? Can you draw a picture of abstract or indeterminate? I understand (I think) where you are coming from, and I agree wholeheartedly - up to the point where you seem to imply that a picture of something is the totality of its character. I

Re: [agi] The role of incertainty

2007-05-01 Thread Pei Wang
You can take NARS (http://nars.wang.googlepages.com/) as an example, starting at http://nars.wang.googlepages.com/wang.logic_intelligence.pdf Pei On 5/1/07, rooftop8000 [EMAIL PROTECTED] wrote: It seems a lot of posts on this list are about the properties an AGI should have. PLURALISTIC,

Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
Pei, Glad to see your input. I noticed NARS quite by accident many years ago and remembered it as pos. v. important. You certainly are implementing the principles we have just been discussing - which is exciting. However, reading your papers and Ben's, it's becoming clear that there may well be

Re: [agi] The role of incertainty

2007-05-01 Thread Pei Wang
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: Pei, Glad to see your input. I noticed NARS quite by accident many years ago and remembered it as pos. v. important. You certainly are implementing the principles we have just been discussing - which is exciting. However, reading your papers

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Derek Zahn
Mike Tintner writes: It goes ALL THE WAY. Language is backed by SENSORY images - the whole range. ALL your assumptions about how language can't be cashed out by images and graphics will be similarly illiterate - or, literally, UNIMAGINATIVE. I don't doubt that the visual and other sensory

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel
However, reading your papers and Ben's, it's becoming clear that there may well be an industry-wide bad practice going on here. You guys all focus on how your systems WORK... The first thing anyone trying to understand your or any other system must know is: what does it DO? What are the problems

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Mike Tintner
capitalism complement democracy - it took your brain 13-20 years to be able to understand the above sentence. Much, much more than it takes a child to understand blue and red look nice together... [blue complements red]. Your brain had to build up a vast relevant picture tree to understand that

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread DEREK ZAHN
Mike Tintner writes: And.. by now you should get the idea. And the all-important thing here is that if you want to TEST or question the above sentence, the only way to do it successfully is to go back and look at the reality. If you wanted to argue, well look at China, they're rocketing

Re: [agi] The role of incertainty

2007-05-01 Thread Pei Wang
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: Define the type of problems it addresses which might be [for all I know] * understanding and precis-ing a set of newspaper stories about politics or sewage * solving a crime of murder - starting with limited evidence * designing new types of

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Bob Mottram
On 01/05/07, Mike Tintner [EMAIL PROTECTED] wrote: There is no choice about all this. You do not have an option to have a pure language AGI - if you wish any brain to understand the world, and draw further connections about the world, it HAS to operate with graphics and images. Period. Plato's

[agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
From the Boston Globe (http://www.boston.com/news/education/higher/articles/2007/04/29/hearts__minds/?page=full) Antonio Damasio, a neuroscientist at USC, has played a pivotal role in challenging the old assumptions and establishing emotions as an important scientific subject. When Damasio

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Benjamin Goertzel
Well, this tells you something interesting about the human cognitive architecture, but not too much about intelligence in general... I think the dichotomy between feeling and thinking is a consequence of the limited reflective capabilities of the human brain... I wrote about this in The Hidden

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread DEREK ZAHN
Bob Mottram writes: When you're reading a book or an email I think what you're doing is tying your internal simulation processes to the stream of words. Then it would be crucial to understand these simulation processes. For some very visual things I think I can follow what I think you are

The Imagery Debate [WAS Re: [agi] MONISTIC .......]

2007-05-01 Thread Richard Loosemore
Bob Mottram wrote: On 01/05/07, Mike Tintner [EMAIL PROTECTED] wrote: There is no choice about all this. You do not have an option to have a pure language AGI - if you wish any brain to understand the world, and draw further connections about the world, it HAS to operate with graphics and

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread DEREK ZAHN
To elaborate a bit: It seems likely to me that our minds work with the mechanisms of perception when appropriate -- that is, when the concepts are not far from sensory modalities. This type of concept is basically all that animals have and is probably most of what we have. Somehow, though, we

Re: [agi] AGI project goals (was: The role of incertainty)

2007-05-01 Thread Pei Wang
On 5/1/07, Peter Voss [EMAIL PROTECTED] wrote: Pei does research (great stuff, I might add). I personally think it a pity that his approach is not part of any development project. Peter: thanks for the comment, though I do consider myself as doing development all the time --- as proof of

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: Well, that really frustrates me. You just can't produce a machine that's going to work, unless you start with its goal/function. I think you are making an error of projecting the methodologies that are appropriate for narrow-purpose-specific

Re: The Imagery Debate [WAS Re: [agi] MONISTIC .......]

2007-05-01 Thread Benjamin Goertzel
The conclusion of that debate was that (a) images definitely play a role in intelligence, and (b) non-imagistic (propositional) entities also definitely play a role in intelligence, and (c) it is difficult to be sure whether there are two separate kinds of representation or one kind that can

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
Well, this tells you something interesting about the human cognitive architecture, but not too much about intelligence in general... How do you know that it doesn't tell you much about intelligence in general? That was an incredibly dismissive statement. Can you justify it? I think the

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread Bob Mottram
On 01/05/07, DEREK ZAHN [EMAIL PROTECTED] wrote: what exactly do you think my internal simulation processes might be doing when I read the following sentence from your email? In short, imagery from visual, acoustic and other sensory modalities gives life through simulation to the basic skeletal

Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
In the final analysis, Ben, you're giving me excuses rather than solutions. Your pet control program is a start - at least I have a vague, still v. vague idea of what you might be doing. You could (I'm guessing) say: this AGI is designed to control a pet which will have to solve adaptive

Re: [agi] rule-based NL system

2007-05-01 Thread rooftop8000
I meant programs that reason about the code you give them. But never mind --- Mark Waser [EMAIL PROTECTED] wrote: we want computers to understand natural language because we think: if you know the syntax, the semantics follow easily Huh? We don't think anything of the sort. Syntax is

Re: [agi] rule-based NL system

2007-05-01 Thread Mark Waser
I meant programs that reason about the code you give them. I did too. If a program can reason like that, unless it only works in a very small domain, you've created AGI. - Original Message - From: rooftop8000 [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Tuesday, May 01, 2007

Re: [agi] rule-based NL system

2007-05-01 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: we want computers to understand natural language because we think: if you know the syntax, the semantics follow easily Huh? We don't think anything of the sort. Syntax is relatively easy. Semantics are AGI. Not really. Semantics is an easier

Re: [agi] MONISTIC, CLOSED-ENDED AI VS PLURALISTIC, OPEN-ENDED AGI

2007-05-01 Thread DEREK ZAHN
Bob Mottram writes: Some things can be not so long as others. ... Thanks for taking the time for such in-depth descriptions, but I am still not clear what you are getting at. Much of what you write is a context in which the meaning of a term might have been learned, sometimes with multiple

Re: [agi] rule-based NL system

2007-05-01 Thread Mark Waser
Not really. Semantics is an easier problem. If so, then why... When you write a compiler, you develop it in this order: lexical, syntax, semantics. Information retrieval and text classification systems work pretty well by ignoring word order. Semantics is defined as the study of meaning.

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel
P.S. This is a truly weird conversation. It's like you're saying... Hell, it's a box, why should I have to tell you what my box does? Only insiders care what's inside the box. The rest of the world wants to know what it does - and that's the only way they'll buy it and pay attention to it - and

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Benjamin Goertzel
On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote: Well, this tells you something interesting about the human cognitive architecture, but not too much about intelligence in general... How do you know that it doesn't tell you much about intelligence in general? That was an incredibly dismissive

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
My point, in that essay, is that the nature of human emotions is rooted in the human brain architecture... I'll agree that human emotions are rooted in human brain architecture but there is also the question -- is there something analogous to emotion which is generally necessary for

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Russell Wallace
On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote: I'll agree that human emotions are rooted in human brain architecture but there is also the question -- is there something analogous to emotion which is generally necessary for *effective* intelligence? My answer is a qualified but definite

Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
Nah, analogy doesn't quite work - though could be useful. An engine is used to move things... many different things - wheels, levers, etc. So if you've got an engine that is twenty times more powerful, sure you don't need to tell me what particular things it is going to move. It's generally

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel
Not much point in arguing further here - all I can say now is TRY it - try focussing your work the other way round - I'm confident you'll find it makes life vastly easier and more productive. Defining what it does is just as essential for the designer as for the consumer. Focusing on

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Benjamin Goertzel
In particular, emotions seem necessary (in humans) to a) provide goals, b) provide pre-programmed constraints (for when logical reasoning doesn't have enough information), and c) enforce urgency. Agreed. But I think that much of the particular flavor of emotions in humans comes from their

Re: [agi] The role of incertainty

2007-05-01 Thread Pei Wang
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: The difficulty here is that the problems to be solved by an AI or AGI machine are NOT accepted, well-defined. We cannot just take Pei's NARS, say, or Novamente, and say well obviously it will apply to all these different kinds of problems. No

Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
No, I keep saying - I'm not asking for the odd narrowly-defined task - but rather defining CLASSES of specific problems that your/an AGI will be able to tackle. Part of the definition task should be to explain how if you can solve one kind of problem, then you will be able to solve other

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Jiri Jelinek
emotions.. to a) provide goals.. b) provide pre-programmed constraints, and c) enforce urgency. Our AI = our tool = should work for us = will get high level goals (+ urgency info and constraints) from us. Allowing other sources of high level goals = potentially asking for conflicts. For

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: No, I keep saying - I'm not asking for the odd narrowly-defined task - but rather defining CLASSES of specific problems that your/an AGI will be able to tackle. Well, we have thought a lot about -- virtual agent control in simulation worlds

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel
On 5/1/07, Benjamin Goertzel [EMAIL PROTECTED] wrote: On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: No, I keep saying - I'm not asking for the odd narrowly-defined task - but rather defining CLASSES of specific problems that your/an AGI will be able to tackle. Well, we have thought

Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
I think if you look at the history of most industries, you'll find that it often takes a long time for them to move from being producer-centric to consumer-centric. [There are some established terms for this, which I've forgotten]. When making things, people are often first preoccupied with

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Mark Waser
emotions.. to a) provide goals.. b) provide pre-programmed constraints, and c) enforce urgency. Our AI = our tool = should work for us = will get high level goals (+ urgency info and constraints) from us. Allowing other sources of high level goals = potentially asking for conflicts. For

Re: [agi] The role of incertainty

2007-05-01 Thread Josh Treadwell
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: No, I keep saying - I'm not asking for the odd narrowly-defined task - but rather defining CLASSES of specific problems that your/an AGI will be able to tackle. Part of the definition task should be to explain how if you can solve one kind of

Re: [agi] The role of incertainty

2007-05-01 Thread Pei Wang
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: As I said to Ben, the crucial cultural background here is that intelligence and creativity have not been properly defined in any sphere. There is no consensus about types of problems, about the difference between AI and AGI, or, more crucially,

Re: [agi] Pure reason is a disease.

2007-05-01 Thread J. Storrs Hall, PhD.
On Tuesday 01 May 2007 14:06, Benjamin Goertzel wrote: In particular, emotions seem necessary (in humans) to a) provide goals, b) provide pre-programmed constraints (for when logical reasoning doesn't have enough information), and c) enforce urgency. ... So, IMO, it becomes a toss-up,

Re: [agi] rule-based NL system

2007-05-01 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: Not really. Semantics is an easier problem. If so, then why... When you write a compiler, you develop it in this order: lexical, syntax, semantics. To point out the difference from the way children learn language: lexical, semantics, syntax. This is
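
A minimal sketch of the lexical -> syntax -> semantics pipeline mentioned above, in Python; the toy grammar (NAME '=' NUM) and every identifier here are illustrative assumptions, not taken from any poster's actual system:

import re

TOKEN_RE = re.compile(r"\s*(?:(\d+)|([A-Za-z_]\w*)|(\S))")

def lex(source):
    """Lexical phase: raw characters -> (kind, value) tokens."""
    tokens = []
    for num, name, op in TOKEN_RE.findall(source):
        if num:
            tokens.append(("NUM", int(num)))
        elif name:
            tokens.append(("NAME", name))
        elif op:
            tokens.append(("OP", op))
    return tokens

def parse(tokens):
    """Syntax phase: check token order against the toy grammar NAME '=' NUM."""
    if (len(tokens) == 3 and tokens[0][0] == "NAME"
            and tokens[1] == ("OP", "=") and tokens[2][0] == "NUM"):
        return {"assign": tokens[0][1], "value": tokens[2][1]}
    raise SyntaxError("expected: NAME = NUM")

def analyze(tree, declared):
    """Semantic phase: meaning checks that syntax alone cannot make,
    e.g. whether the assigned variable was ever declared."""
    if tree["assign"] not in declared:
        raise NameError("undeclared variable: " + tree["assign"])
    return tree

# Usage: "x = 42" always passes the lexical and syntax phases, but the
# semantic phase only succeeds if x has been declared.
tree = parse(lex("x = 42"))
analyze(tree, declared={"x"})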

Re: [agi] rule-based NL system

2007-05-01 Thread Mark Waser
Hmmm. I think there's a problem with your use of the word semantics . . . . There is a huge difference between labelling an object, which young children do quite early, and dealing with concepts (even fairly concrete ones). There is an even larger difference between correlating

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel
I'm saying you do have to define what your AGI will do - but define it as a tree - 1) a general class of problems - supported by 2) examples of specific types of problem within that class. I'm calling for something different to the traditional alternatives here. I doubt that anyone is doing

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Russell Wallace
On 5/1/07, Jiri Jelinek [EMAIL PROTECTED] wrote: Our AI = our tool = should work for us = will get high level goals (+ urgency info and constraints) from us. Allowing other sources of high level goals = potentially asking for conflicts. For sub-goals, AI can go with reasoning. Yep.

Re: [agi] The role of incertainty

2007-05-01 Thread Mike Tintner
Well, you see I think only the virtual agent problems are truly generalisable. The others, it strikes me, haven't got a hope of producing AGI, and are actually narrow. But as I said, the first can probably be generalised in terms of agents seeking goals within problematic environments - and you

Re: [agi] The role of incertainty

2007-05-01 Thread Benjamin Goertzel
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote: Well, you see I think only the virtual agent problems are truly generalisable. The others, it strikes me, haven't got a hope of producing AGI, and are actually narrow. I think they are all generalizable in principle, but the virtual agents

Re: [agi] Pure reason is a disease.

2007-05-01 Thread Jiri Jelinek
Mark, I understand your point but have an emotional/ethical problem with it. I'll have to ponder that for a while. Try to view our AI as an extension of our intelligence rather than purely-its-own-kind. For humans - yes, for our artificial problem solvers - emotion is a disease. What if

Re: [agi] rule-based NL system

2007-05-01 Thread Matt Mahoney
--- Mark Waser [EMAIL PROTECTED] wrote: Hmmm. I think there's a problem with your use of the word semantics . . . . There is a huge difference between labelling an object, which young children do quite early, and dealing with concepts (even fairly concrete ones). There is an even