You're mostly correct about the word symbols (barring onomatopoeic words
such as bang, hum, clip-clop, boom, hiss, howl, screech, fizz, murmur, clang,
buzz, whine, tinkle, sizzle and twitter, as well as prefixes, suffixes and
derived wordforms, which all allow one to derive some meaning).
However, you are NOT correct
We already have programming languages. We want computers to
understand natural language because we think: if you know the syntax,
the semantics follow easily. But you still need the code to process
the objects the text is about, so it will always be a crippled NL understanding
without general
MD: What does warm look like? How about angry or happy? Can you
draw a picture of abstract or indeterminate? I understand (I
think) where you are coming from, and I agree wholeheartedly - up to
the point where you seem to imply that a picture of something is the
totality of its character. I
You can take NARS (http://nars.wang.googlepages.com/) as an example,
starting at http://nars.wang.googlepages.com/wang.logic_intelligence.pdf
Pei
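For readers who follow the links: NARS judgments carry an experience-grounded truth value rather than a binary one. The sketch below is only a paraphrase of the frequency/confidence scheme described in Wang's papers - the constant k, the function names and the sample numbers are illustrative assumptions, not NARS source code:

```python
# Rough sketch of NARS-style truth values, as read from the linked papers.
# A judgment carries a frequency (ratio of positive evidence) and a confidence
# derived from total evidence; K is the "evidential horizon" constant (often
# taken as 1). All names here are illustrative, not from the NARS codebase.

K = 1.0

def confidence(total_evidence: float) -> float:
    """Confidence grows toward 1 as evidence accumulates, but never reaches it."""
    return total_evidence / (total_evidence + K)

def revise(f1, c1, f2, c2):
    """Combine two judgments about the same statement by pooling their evidence."""
    w1 = K * c1 / (1.0 - c1)   # recover evidence weight from confidence
    w2 = K * c2 / (1.0 - c2)
    w = w1 + w2
    f = (w1 * f1 + w2 * f2) / w
    return f, confidence(w)

# Two weak, partly conflicting observations yield a moderate, more confident belief.
print(revise(0.9, 0.5, 0.6, 0.5))   # -> (0.75, 0.666...)
```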
On 5/1/07, rooftop8000 [EMAIL PROTECTED] wrote:
It seems a lot of posts on this list are about the properties an AGI
should have. PLURALISTIC,
Pei,
Glad to see your input. I noticed NARS quite by accident many years ago and
remembered it as possibly very important.
You certainly are implementing the principles we have just been discussing -
which is exciting.
However, reading your papers and Ben's, it's becoming clear that there may
well be
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
Pei,
Glad to see your input. I noticed NARS quite by accident many years ago and
remembered it as possibly very important.
You certainly are implementing the principles we have just been discussing -
which is exciting.
However, reading your papers
Mike Tintner writes:
It goes ALL THE WAY. Language is backed by SENSORY images - the whole
range.
ALL your assumptions about how language can't be cashed out by images and
graphics will be similarly illiterate - or, literally, UNIMAGINATIVE.
I don't doubt that the visual and other sensory
However, reading your papers and Ben's, it's becoming clear that there may
well be an industry-wide bad practice going on here. You guys all focus on
how your systems WORK... The first thing anyone trying to understand
your or any other system must know is: what does it DO? What are the problems
capitalism complement democracy - it took your brain 13-20 years to be able
to understand the above sentence. Much, much more than it takes a child to
understand that blue and red look nice together... [blue complements red]. Your
brain had to build up a vast relevant picture tree to understand that
Mike Tintner writes:
And... by now you should get the idea.
And the all-important thing here is that if you want to TEST or question
the above sentence, the only way to do it successfully is to go back and
look at the reality. If you wanted to argue, well look at China, they're
rocketing
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
Define the types of problems it addresses, which might be [for all I know]
*understanding and précising a set of newspaper stories about politics or
sewage
*solving a crime of murder - starting with limited evidence
*designing new types of
On 01/05/07, Mike Tintner [EMAIL PROTECTED] wrote:
There is no choice about all this. You do not have an option to have a pure
language AGI - if you wish any brain to understand the world, and draw
further connections about the world, it HAS to operate with graphics and
images. Period.
Plato's
From the Boston Globe
(http://www.boston.com/news/education/higher/articles/2007/04/29/hearts__minds/?page=full)
Antonio Damasio, a neuroscientist at USC, has played a pivotal role in
challenging the old assumptions and establishing emotions as an important
scientific subject. When Damasio
Well, this tells you something interesting about the human cognitive
architecture, but not too much about intelligence in general...
I think the dichotomy between feeling and thinking is a consequence of the
limited reflective capabilities of the human brain... I wrote about this in
The Hidden
Bob Mottram writes:
When you're reading a book or an email I think what you're doing is
tying your internal simulation processes to the stream of words
Then it would be crucial to understand these simulation processes.
For some very visual things I think I can follow what I think you
are
Bob Mottram wrote:
On 01/05/07, Mike Tintner [EMAIL PROTECTED] wrote:
There is no choice about all this. You do not have an option to have a
pure
language AGI - if you wish any brain to understand the world, and draw
further connections about the world, it HAS to operate with graphics and
To elaborate a bit:
It seems likely to me that our minds work with the
mechanisms of perception when appropriate -- that
is, when the concepts are not far from sensory
modalities. This type of concept is basically all
that animals have and is probably most of what
we have.
Somehow, though, we
On 5/1/07, Peter Voss [EMAIL PROTECTED] wrote:
Pei does research (great stuff, I might add). I personally think it a pity
that his approach is not part of any development project.
Peter: thanks for the comment, though I do consider myself as doing
development all the time --- as proof of
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
Well, that really frustrates me. You just can't produce a machine that's
going to work, unless you start with its goal/function.
I think you are making an error of projecting the methodologies that are
appropriate for narrow-purpose-specific
The conclusion of that debate was that (a) images definitely play a role
in intelligence, and (b) non-imagistic (propositional) entities also
definitely play a role in intelligence, and (c) it is difficult to be
sure whether there are two separate kinds of representation or one kind
that can
Well, this tells you something interesting about the human cognitive
architecture, but not too much about intelligence in general...
How do you know that it doesn't tell you much about intelligence in general?
That was an incredibly dismissive statement. Can you justify it?
I think the
On 01/05/07, DEREK ZAHN [EMAIL PROTECTED] wrote:
what exactly do you think my internal simulation processes might
be doing when I read the following sentence from your email?
In short, imagery from visual, acoustic and other sensory modalities
gives life through simulation to the basic skeletal
In the final analysis, Ben, you're giving me excuses rather than solutions.
Your pet control program is a start - at least I have a vague, still very vague,
idea of what you might be doing.
You could (I'm guessing) say: this AGI is designed to control a pet which will
have to solve adaptive
I meant programs that reason about the code you give them.
But never mind
--- Mark Waser [EMAIL PROTECTED] wrote:
we want computers to
understand natural language because we think: if you know the syntax,
the semantics follow easily
Huh? We don't think anything of the sort. Syntax is
I meant programs that reason about the code you give them.
I did too. If a program can reason like that, unless it only works in a
very small domain, you've created AGI.
- Original Message -
From: rooftop8000 [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Tuesday, May 01, 2007
--- Mark Waser [EMAIL PROTECTED] wrote:
we want computers to
understand natural language because we think: if you know the syntax,
the semantics follow easily
Huh? We don't think anything of the sort. Syntax is relatively easy.
Semantics are AGI.
Not really. Semantics is an easier
Bob Mottram writes:
Some things can be not so long as others.
...
Thanks for taking the time for such in-depth descriptions, but I am still
not clear what you are getting at. Much of what you write is a
context in which the meaning of a term might have been learned,
sometimes with multiple
Not really. Semantics is an easier problem.
If so, then why? When you write a compiler, you develop it in this order:
lexical, syntax, semantics.
Information retrieval and text
classification systems work pretty well by ignoring word order.
Semantics is defined as the study of meaning.
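The word-order point is the familiar bag-of-words observation. A minimal sketch of the idea in Python, with made-up toy data - nothing here is taken from any of the systems discussed in this thread:

```python
# Minimal bag-of-words classifier: word order is discarded entirely, yet the
# word counts alone are often enough to separate classes. Toy data and labels
# are invented for illustration only.
from collections import Counter

train = [
    ("the movie was great and the acting was great", "pos"),
    ("a wonderful, moving film", "pos"),
    ("the movie was terrible and boring", "neg"),
    ("a dull, boring waste of time", "neg"),
]

# Count how often each word appears under each label (naive-Bayes-like counts).
counts = {"pos": Counter(), "neg": Counter()}
for text, label in train:
    counts[label].update(text.lower().split())

def classify(text: str) -> str:
    words = text.lower().split()
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        # Multiply smoothed per-word frequencies; the position of each word
        # plays no role at all.
        score = 1.0
        for w in words:
            score *= (c[w] + 1) / (total + len(c))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("boring terrible acting"))   # -> neg
print(classify("great wonderful film"))     # -> pos
```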
P.S. This is a truly weird conversation. It's like you're saying: Hell,
it's a box, why should I have to tell you what my box does? Only insiders
care what's inside the box. The rest of the world wants to know what it does
- and that's the only way they'll buy it and pay attention to it - and
On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote:
Well, this tells you something interesting about the human cognitive
architecture, but not too much about intelligence in general...
How do you know that it doesn't tell you much about intelligence in
general? That was an incredibly dismissive
My point, in that essay, is that the nature of human emotions is rooted in
the human brain architecture,
I'll agree that human emotions are rooted in human brain architecture but
there is also the question -- is there something analogous to emotion which is
generally necessary for
On 5/1/07, Mark Waser [EMAIL PROTECTED] wrote:
I'll agree that human emotions are rooted in human brain architecture
but there is also the question -- is there something analogous to emotion
which is generally necessary for *effective* intelligence? My answer is a
qualified but definite
Nah, the analogy doesn't quite work - though it could be useful.
An engine is used to move things... many different things - wheels, levers,
etc. So if you've got an engine that is twenty times more powerful, sure, you
don't need to tell me what particular things it is going to move. It's
generally
Not much point in arguing further here - all I can say now is TRY it - try
focussing your work the other way round - I'm confident you'll find it makes
life vastly easier and more productive. Defining what it does is just as
essential for the designer as for the consumer.
Focusing on
In particular, emotions seem necessary (in humans) to a) provide goals,
b) provide pre-programmed constraints (for when logical reasoning doesn't
have enough information), and c) enforce urgency.
Agreed.
But I think that much of the particular flavor of emotions in humans comes
from their
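Read as an engineering checklist rather than as emotion per se, the three roles above can be made explicit in a design. The sketch below is purely hypothetical - every name in it is invented for illustration, and nothing is drawn from Novamente, NARS or any other project mentioned in this thread:

```python
# Hypothetical sketch of the three roles attributed to emotion above:
# (a) goals, (b) pre-programmed constraints for when reasoning lacks
# information, (c) urgency that decides what gets attention first.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Goal:
    priority: float                      # lower value = attended to sooner
    description: str = field(compare=False)

class Agent:
    def __init__(self, constraints):
        self.constraints = constraints   # (b) hard vetoes, checked before acting
        self.agenda = []                 # (c) urgency-ordered goal queue

    def adopt(self, description, importance, deadline_pressure):
        # (a) + (c): urgency raises effective priority as deadlines approach.
        priority = -(importance * (1.0 + deadline_pressure))
        heapq.heappush(self.agenda, Goal(priority, description))

    def next_action(self):
        while self.agenda:
            goal = heapq.heappop(self.agenda)
            if all(check(goal.description) for check in self.constraints):
                return goal.description
        return None

# Usage: a constraint vetoes any goal mentioning "harm", regardless of urgency.
agent = Agent(constraints=[lambda g: "harm" not in g])
agent.adopt("answer user's question", importance=0.6, deadline_pressure=0.1)
agent.adopt("harm a rival process", importance=0.9, deadline_pressure=0.9)
print(agent.next_action())   # -> "answer user's question"
```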
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
The difficulty here is that the problems to be solved by an AI or AGI
machine are NOT accepted, well-defined ones. We cannot just take Pei's NARS, say,
or Novamente, and say: well, obviously it will apply to all these different
kinds of problems. No
No, I keep saying - I'm not asking for the odd narrowly-defined task - but
rather defining CLASSES of specific problems that your/an AGI will be able to
tackle. Part of the definition task should be to explain how if you can solve
one kind of problem, then you will be able to solve other
emotions.. to a) provide goals.. b) provide pre-programmed constraints, and
c) enforce urgency.
Our AI = our tool = should work for us = will get high level goals (+
urgency info and constraints) from us. Allowing other sources of high level
goals = potentially asking for conflicts. For
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
No, I keep saying - I'm not asking for the odd narrowly-defined task -
but rather defining CLASSES of specific problems that your/an AGI will be
able to tackle.
Well, we have thought a lot about
-- virtual agent control in simulation worlds
On 5/1/07, Benjamin Goertzel [EMAIL PROTECTED] wrote:
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
No, I keep saying - I'm not asking for the odd narrowly-defined task -
but rather defining CLASSES of specific problems that your/an AGI will be
able to tackle.
Well, we have thought
I think if you look at the history of most industries, you'll find that it
often takes a long time for them to move from being producer-centric to
consumer-centric. [There are some established terms for this, which I've
forgotten].
When making things people are often first preoccupied with
emotions.. to a) provide goals.. b) provide pre-programmed constraints, and
c) enforce urgency.
Our AI = our tool = should work for us = will get high level goals (+ urgency
info and constraints) from us. Allowing other sources of high level goals =
potentially asking for conflicts. For
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
No, I keep saying - I'm not asking for the odd narrowly-defined task -
but rather defining CLASSES of specific problems that your/an AGI will be
able to tackle. Part of the definition task should be to explain how if you
can solve one kind of
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
As I said to Ben, the crucial cultural background here is that intelligence
and creativity have not been properly defined in any sphere. There is no
consensus about types of problems, about the difference between AI and AGI,
or, more crucially,
On Tuesday 01 May 2007 14:06, Benjamin Goertzel wrote:
In particular, emotions seem necessary (in humans) to a) provide goals,
b) provide pre-programmed constraints (for when logical reasoning doesn't
have enough information), and c) enforce urgency.
...
So, IMO, it becomes a toss-up,
--- Mark Waser [EMAIL PROTECTED] wrote:
Not really. Semantics is an easier problem.
If so, then why? When you write a compiler, you develop it in this order:
lexical, syntax, semantics.
To point out the difference from the way children learn language: lexical,
semantics, syntax.
This is
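For concreteness, the compiler ordering referred to above looks roughly like the toy pipeline below - lexical analysis, then parsing, then a semantic step where names finally have to mean something. It is an illustrative sketch only, not taken from any real compiler:

```python
# Toy pipeline for expressions like "2 + x * 3": the stages run in exactly
# the order mentioned above - lexical, then syntax, then semantics.
import re

def lex(src):
    # Lexical: carve the character stream into tokens.
    tokens = re.findall(r"\d+|[A-Za-z_]\w*|[+*]", src)
    if "".join(tokens) != src.replace(" ", ""):
        raise SyntaxError("unknown character in input")
    return tokens

def parse(tokens):
    # Syntax: build a tree; '*' binds tighter than '+'.
    pos = 0
    def atom():
        nonlocal pos
        tok = tokens[pos]; pos += 1
        return ("num", int(tok)) if tok.isdigit() else ("var", tok)
    def term():
        nonlocal pos
        node = atom()
        while pos < len(tokens) and tokens[pos] == "*":
            pos += 1
            node = ("*", node, atom())
        return node
    def expr():
        nonlocal pos
        node = term()
        while pos < len(tokens) and tokens[pos] == "+":
            pos += 1
            node = ("+", node, term())
        return node
    tree = expr()
    if pos != len(tokens):
        raise SyntaxError("trailing tokens")
    return tree

def evaluate(tree, env):
    # Semantics: only now does meaning enter - names must refer to something.
    kind = tree[0]
    if kind == "num":
        return tree[1]
    if kind == "var":
        if tree[1] not in env:
            raise NameError(f"undefined variable {tree[1]!r}")   # a semantic error
        return env[tree[1]]
    left, right = evaluate(tree[1], env), evaluate(tree[2], env)
    return left + right if kind == "+" else left * right

print(evaluate(parse(lex("2 + x * 3")), {"x": 4}))   # -> 14
```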
Hmmm. I think there's a problem with your use of the word 'semantics'...
There is a huge difference between labelling an object, which young
children do quite early, and dealing with concepts (even fairly concrete
ones). There is an even larger difference between correlating
I'm saying you do have to define what your AGI will do - but define it as
a
tree - 1) a general class of problems - supported by 2) examples of
specific types of problem within that class. I'm calling for something
different to the traditional alternatives here.
I doubt that anyone is doing
On 5/1/07, Jiri Jelinek [EMAIL PROTECTED] wrote:
Our AI = our tool = should work for us = will get high level goals (+
urgency info and constraints) from us. Allowing other sources of high level
goals = potentially asking for conflicts. For sub-goals, AI can go with
reasoning.
Yep.
Well, you see I think only the virtual agent problems are truly generalisable.
The others, it strikes me, haven't got a hope of producing AGI, and are actually
narrow.
But as I said, the first can probably be generalised in terms of agents seeking
goals within problematic environments - and you
On 5/1/07, Mike Tintner [EMAIL PROTECTED] wrote:
Well, you see I think only the virtual agent problems are truly
generalisable. The others, it strikes me, haven't got a hope of producing
AGI, and are actually narrow.
I think they are all generalizable in principle, but the virtual agents
Mark,
I understand your point but have an emotional/ethical problem with it. I'll
have to ponder that for a while.
Try to view our AI as an extension of our intelligence rather than
purely-its-own-kind.
For humans - yes, for our artificial problem solvers - emotion is a
disease.
What if
--- Mark Waser [EMAIL PROTECTED] wrote:
Hmmm. I think there's a problem with your use of the word 'semantics'...
There is a huge difference between labelling an object, which young
children do quite early, and dealing with concepts (even fairly concrete
ones). There is an even