Terren wrote:

>>>
Language understanding requires a sophisticated conceptual framework
complete with causal models, because, whatever "meaning" means, it must be
captured somehow in an AI's internal models of the world.
<<<

"Conceptual framework" is not well defined, so I can neither agree nor
disagree.
What do you mean by "causal model"?


>>>
The Piraha tribe in the Amazon basin has a very primitive language compared
to all modern languages - it has no past or future tenses, for example - and
as a people they exhibit barely any of the hallmarks of abstract reasoning
that are so common to the rest of humanity, such as story-telling, artwork,
religion... see http://en.wikipedia.org/wiki/Pirah%C3%A3_people. 


How do you explain that?
<<<

In this example we observe two phenomena:
1. a primitive language compared to all modern languages, and
2. as a people, barely any of the hallmarks of abstract reasoning.

From this we can conclude neither that 1 causes 2 nor that 2 causes 1.


>>>
I'm saying that if an AI understands & speaks natural language, you've
solved AGI - your Nobel will be arriving soon.  
<<<

This is just your opinion. I disagree that natural language understanding
necessarily implies AGI. For instance, I doubt that anyone can prove that
any system which understands natural language is necessarily able to solve
the simple equation x * 3 = y for a given y.
And as long as this is not proven, we should not assume that natural
language understanding, without further hidden assumptions, implies AGI.
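To make the example concrete: once the problem is represented symbolically,
solving x * 3 = y for a given y is trivial arithmetic. The open question is
whether understanding the natural-language request implies the ability to
build and use that representation. A minimal sketch (Python, purely for
illustration):

    def solve_for_x(y):
        # Solve x * 3 = y for a given y. The arithmetic itself is trivial;
        # the doubtful part is whether a language-understanding system can
        # map the request onto this symbolic step at all.
        return y / 3

    print(solve_for_x(12))  # -> 4.0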


>>>
The difference between AI1 that understands Einstein, and any AI currently
in existence, is much greater than the difference between AI1 and Einstein.
<<<

This might be true, but what does it show?



>>>
Sorry, I don't see that, can you explain the proof?  Are you saying that
sign language isn't natural language?  That would be patently false. (see
http://crl.ucsd.edu/signlanguage/)
<<<

Yes. In my opinion, sign language is not a natural language as the term is
usually understood.



>>>
So you're agreeing that language is necessary for self-reflectivity. In your
models, then, self-reflectivity is not important to AGI, since you say AGI
can be realized without language, correct?
<<<

No. Self-reflectivity needs just a feedback loop over the system's own
processes. I do not say that AGI can be realized without language. An AGI
must produce outputs and obtain inputs, and for these inputs and outputs
there must be protocols. These protocols are not fixed but depend on the
input and output devices. For instance, the AGI could use the Hubble
telescope, or a microscope, or both.
For the domain of mathematics, a formal language specified by humans would
be the best choice for input and output.
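To illustrate what I mean by "just a feedback loop": the system's own
outputs are fed back to it as an additional input, while the input and
output protocols stay interchangeable. A rough sketch (Python; all names
are hypothetical placeholders, not a claim about any real system):

    def agi_step(observation, self_trace):
        # Placeholder for whatever the system actually computes; it sees
        # both the external observation and a trace of its own past steps.
        return f"processed({observation}) after {len(self_trace)} own steps"

    def run(input_device, output_device, steps=3):
        self_trace = []                   # feedback loop over own processes
        for _ in range(steps):
            observation = input_device()  # protocol depends on input device
            action = agi_step(observation, self_trace)
            output_device(action)         # protocol depends on output device
            self_trace.append(action)     # own output fed back to later steps

    # The devices are interchangeable; only the protocol changes.
    run(input_device=lambda: "telescope image", output_device=print)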

>>>
I'm not saying that language is inherently involved in thinking, but it is
crucial for the development of *sophisticated* causal models of the world -
the kind of models that can support self-reflectivity. Word-concepts form
the basis of abstract symbol manipulation.

That gets the ball rolling for humans, but the conceptual framework that
emerges is not necessarily tied to linguistics, especially as humans get
feedback from the world in ways that are not linguistic (scientific
experimentation/tinkering, studying math, art, music, etc).
<<<

That is again just your opinion. I accept it, but I hold a different one.
The future will show which approach is successful.

- Matthias


