Matthias,

I take the point that there is vastly more to language understanding than the surface processing of words, as opposed to concepts.

I agree that it is typically v. fast.

I don't think, though, that you can call any concept a "pattern". On the contrary, a defining property of concepts, IMO, is that they resist reduction to any pattern or structure - which is rather important, since my impression is that most AGI-ers live by patterns/structures. Even a concept like "triangle" cannot actually be reduced to a pattern. Try it, if you wish.

And the issue of conceptualisation - of what a concept consists of - is manifestly an unsolved problem for both cog sci and AI, and of central importance for AGI. We have to understand how the brain performs its feats here, because that, at a rough general level, is almost certainly how it will *have* to be done. (I can't resist being snide here and saying that since this is an unsolved problem, one can virtually guarantee that AGI-ers will therefore refuse to discuss it.)

Trying to work out what information the brain handles, for example, when it talks about

THE US IS THE HOME OF THE FINANCIAL CRISIS

- what passes - and has to pass - through a mind thinking specifically of "the financial crisis"? - is in some ways as great a challenge as working out what the brain's engrams consist of. Clearly it won't be the kind of mere symbolic, dictionary-style processing that some AGI-ers envisage.

It will perhaps be as complex as the conceptualisation of "party" in:

HOW WAS THE PARTY LAST NIGHT?

where a single word may be used to "touch upon", say, two hours or more of sensory, "movie" experience in the brain.

I partly disagree with you about how we should study all this - it is vital to look at how we understand, or rather how we fail to understand and get confused by, concepts and language - which happens all the time. This can tell us a great deal about what is going on underneath.


Matthias:
For this discussion the details of the pattern representation are not important at all. It is sufficient if you agree that a spoken sentence represents a certain set of patterns which are translated into the sentence. The receiving agent retranslates the sentence and matches the content against its own model by activating similar patterns.

The activation of patterns is extremely fast and happens in real time. The brain even predicts patterns if it hears just the first syllable of a word:

http://www.rochester.edu/news/show.php?id=3244

There is no creation of new patterns and there is no intelligent algorithm
which manipulates patterns. It is just translating, sending, receiving and
retranslating.
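
(A minimal Python sketch of that loop. Everything in it - the pattern IDs, the word-pattern lexicon, and the assumption that both agents share the same mapping - is an invented placeholder for illustration, not a claim about the actual representation:)

SPEAKER_LEXICON = {"cat": "P_CAT", "sat": "P_SIT_PAST", "mat": "P_MAT"}

def translate(patterns):
    # speaker side: turn active patterns into words ("send")
    inverse = {p: w for w, p in SPEAKER_LEXICON.items()}
    return [inverse[p] for p in patterns]

def retranslate(words):
    # listener side: reactivate the patterns the words name ("receive")
    return [SPEAKER_LEXICON[w] for w in words if w in SPEAKER_LEXICON]

def predict_from_prefix(prefix):
    # crude stand-in for predictive activation from a first syllable
    return [p for w, p in SPEAKER_LEXICON.items() if w.startswith(prefix)]

active = ["P_CAT", "P_SIT_PAST", "P_MAT"]
sentence = translate(active)
reactivated = retranslate(sentence)
assert reactivated == active
print(predict_from_prefix("ca"))   # ['P_CAT'], before the word is finished

(The shared lexicon is the idealisation doing all the work: no new patterns are created, and nothing more intelligent than a lookup happens on either side.)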

From the ambiguities of natural language you obtain some hints about the structure of the patterns. But you cannot expect to obtain all the details of these patterns by understanding the process of language understanding. There will probably be many details within these patterns which are only necessary for internal calculations. These details will not be visible from the linguistic point of view. Just think about communicating computers and you will know what I mean.
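
(The communicating-computers analogy can be sketched directly. The class and its fields below are invented for illustration; the point is only that the wire format exposes a subset of the internal representation:)

import json
from dataclasses import dataclass, field

@dataclass
class ConceptPattern:
    label: str                                 # the linguistically visible part
    activation: float = 0.0                    # internal bookkeeping only
    links: list = field(default_factory=list)  # internal bookkeeping only

    def to_message(self):
        # serialize only what crosses the "wire"; the rest stays internal
        return json.dumps({"label": self.label})

cat = ConceptPattern("cat", activation=0.93, links=["animal", "pet"])
print(cat.to_message())   # {"label": "cat"} - activation and links never show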


- Matthias

Mike Tintner [mailto:[EMAIL PROTECTED]] wrote:

Matthias,

You seem - correct me - to be going a long way round saying that words are different from concepts - they're just sound-and-letter labels for concepts, which have a very different form. And the processing of words/language is distinct from and relatively simple compared to the processing of the underlying concepts.

So take

THE CAT SAT ON THE MAT

or

THE MIND HAS ONLY CERTAIN PARTS WHICH ARE SENTIENT

or

THE US IS THE HOME OF THE FINANCIAL CRISIS

the words "c-a-t" or "m-i-n-d" or "U-S" or "f-i-n-a-n-c-i-a-l c-r-i-s-i-s"
are distinct from the underlying concepts. The question is: What form do
those concepts take? And what is happening in our minds (and what has to
happen in any mind) when we process those concepts?

You talk of "patterns". What patterns, do you think, form the concept of "mind" that is engaged in thinking about sentence 2? Do you think that concepts like "mind" or "the US" might involve something much more complex still? "Models"? Or is that still way too simple? "Spaces"?

Equally, of course, we can say that each *sentence* above is not just a "verbal composition" but a "conceptual composition" - and the question then is what form such a composition takes. Do sentences form, say, a "pattern of patterns", or something like a "picture"? Or a "blending of spaces"?

Or are concepts like *money*?

YOU CAN BUY A LOT WITH A MILLION DOLLARS

Does every concept function somewhat like money, e.g. "a million dollars" - something that we know can be cashed in, in an infinite variety of ways, but that we may not have to start "cashing in" (when processing) unless really called for - or only cash in so far?
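
(The money analogy maps rather neatly onto lazy evaluation. A rough Python sketch - the LazyConcept class and its expansion function are pure invention, standing in for whatever the brain actually does:)

class LazyConcept:
    # a concept held as an un-cashed "note": expanded only on demand,
    # and only as far as the current task requires
    def __init__(self, label, expand):
        self.label = label
        self._expand = expand   # how to "cash in" this concept
        self._cache = {}

    def cash_in(self, depth=1):
        # expand to the requested depth, memoizing what was already cashed
        if depth not in self._cache:
            self._cache[depth] = self._expand(self.label, depth)
        return self._cache[depth]

def expand(label, depth):
    # invented stand-in: a real system might retrieve sensory "movie" detail
    return [f"{label}: detail level {d}" for d in range(1, depth + 1)]

million = LazyConcept("a million dollars", expand)
print(million.cash_in(depth=2))   # cash in only as far as needed

(Most sentences would never call cash_in at all; the "note" circulates unredeemed.)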

P.S. BTW this is the sort of psycho-philosophical discussion that I would
see as central to AGI, but that most of you don't want to talk about?





Matthias: What the computer makes of the data it receives depends on the information in the transferred data, its internal algorithms and its internal data. It is the same with humans and natural language.
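
(A toy illustration of that point, with invented names: the same message produces different results in two receivers that differ only in their internal data:)

def respond(message, knowledge):
    # what is "made" of the data depends on internal state, not just input
    if message in knowledge:
        return "understood: " + knowledge[message]
    return "no matching internal pattern"

alice = {"financial crisis": "2008, banks, bailouts..."}
bob = {}

print(respond("financial crisis", alice))   # rich response
print(respond("financial crisis", bob))     # same data, different result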


Language understanding would be useful for teaching the AGI existing knowledge already represented in natural language. But natural language understanding suffers from the problem of ambiguities. These ambiguities can be resolved by having knowledge similar to what humans have. But then you have a recursive problem, because the problem of obtaining this knowledge has to be solved first.

Nature solves this problem with embodiment. Different people have similar experiences, since the laws of nature do not depend on space and time. Therefore we can all imagine a dog which is angry. Since we have experienced angry dogs but haven't experienced angry trees, we can resolve the linguistic ambiguity of my earlier example and answer the question: who was angry?
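
(A toy version of that disambiguation. The earlier example sentence isn't quoted in this thread, so a stand-in is used, and the invented "can_be_angry" table plays the role of embodied experience:)

EXPERIENCE = {"dog": {"can_be_angry": True},
              "tree": {"can_be_angry": False}}

def who_was_angry(candidates):
    # pick the referent our experience says can actually be angry
    plausible = [c for c in candidates
                 if EXPERIENCE.get(c, {}).get("can_be_angry")]
    return plausible[0] if len(plausible) == 1 else "ambiguous"

# "The dog barked at the tree because it was angry." - who is "it"?
print(who_was_angry(["dog", "tree"]))   # -> dog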

The way to obtain knowledge through embodiment is hard and long, even in virtual worlds. If the AGI is to understand natural language, it would need to have experiences similar to those humans have in the real world. But this would require a very, very sophisticated and rich virtual world. At the very least, there would have to be angry dogs in the virtual world ;-)

As I have already said, I do not think the ratio between the utility of this approach and its costs would be positive for a first AGI.




