--- On Mon, 10/20/08, Dr. Matthias Heger <[EMAIL PROTECTED]> wrote:

> Conceptual framework is not well defined. Therefore I can't agree or
> disagree. What do you mean with causal model?
A conceptual framework starts with knowledge representation: a symbol S refers to a persistent pattern P which is, in some way or another, a reflection of the agent's environment and/or a composition of other symbols. Symbols are related to each other in various ways. These relations (such as "is a property of", "contains", "is associated with") are either given or emerge in some kind of self-organizing dynamic.

A causal model M is a set of symbols such that the activations of symbols S1...Sn are used to infer the future activation of a symbol S'. The rules of inference are either given or emerge in some kind of self-organizing dynamic. A conceptual framework refers to the whole set of symbols and their relations, which includes all causal models and rules of inference.

Such a framework is necessary for language comprehension because meaning is grounded in it. For example, the word "flies" has at least two totally distinct meanings, and each is unambiguously evoked only in the appropriate conceptual context, as in the classic example "time flies like an arrow; fruit flies like a banana." "Time" and "fruit" have very different sets of relations to other patterns, and those relations can in principle be employed to disambiguate the intended meanings of "flies" and "like". If you think language comprehension is possible with just statistical methods, perhaps you can show how they would disambiguate the above example.

> In this example we observe two phenomena:
> 1. primitive language compared to all modern languages
> 2. and as a people they exhibit barely any of the hallmarks of abstract
> reasoning
>
> From this we can neither conclude that 1 causes 2 nor that 2 causes 1.

OK, let's look at all three cases:

1. Primitive language *causes* reduced abstraction faculties
2. Reduced abstraction faculties *cause* primitive language
3. Primitive language and reduced abstraction faculties are merely correlated; neither strictly causes the other

I've been arguing for (1), saying that language and intelligence are inseparable (for social intelligences): the sophistication of one's language bounds the sophistication of one's conceptual framework. To argue for (2), one must say with respect to the Pirahã that they are cognitively deficient for some other reason, and that their language is primitive as a result of that deficiency. Professor Daniel Everett, the anthropological linguist who first described the Pirahã grammar, dismissed this possibility in his paper "Cultural Constraints on Grammar and Cognition in Pirahã" (see http://www.eva.mpg.de/psycho/pdf/Publications_2005_PDF/Commentary_on_D.Everett_05.pdf):

"... [the idea that] the Pirahã are substandard mentally—is easily disposed of. The source of this collective conceptual deficit could only be genetics, health, or culture. Genetics can be ruled out because the Pirahã people (according to my own observations and Nimuendajú's) have long intermarried with outsiders. In fact, they have intermarried to the extent that no well-defined phenotype other than stature can be identified. Pirahãs also enjoy a good and varied diet of fish, game, nuts, legumes, and fruits, so there seems to be no dietary basis for any inferiority. We are left, then, with culture, and here my argument is exactly that their grammatical differences derive from cultural values. I am not, however, making a claim about Pirahã conceptual abilities but about their expression of certain concepts linguistically, and this is a crucial difference."

This quote thus also addresses (3), that the language and the conceptual deficiency are merely correlated. Everett seems to be arguing for exactly this point: that their language and conceptual abilities are both held back by their culture. There are open questions about the dynamic between culture and language, but that's all speculative.
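To make the relational idea from earlier concrete, here is a toy, purely illustrative sketch in Python. Everything in it is invented for the example (the relation names, the sense labels, the lookup rule); it is not a proposed implementation, just a minimal demonstration that a symbol's relations to other patterns can select a word sense in the "time flies / fruit flies" case.

```python
# Toy sketch: each symbol carries a set of relations to other symbols,
# and those relations are consulted to disambiguate the word "flies".
# All relation names and senses below are hypothetical.

RELATIONS = {
    "time":   {("is_a", "abstract_quantity")},
    "fruit":  {("is_a", "food"), ("attracts", "insect")},
    "arrow":  {("is_a", "projectile"), ("moves", "fast")},
    "banana": {("is_a", "food")},
}

# Candidate senses for the ambiguous word, keyed by a relational cue.
FLIES_SENSES = {
    "verb_move":   "flies = moves through the air (verb)",
    "noun_insect": "flies = small winged insects (noun)",
}

def disambiguate_flies(subject: str) -> str:
    """Pick a sense of 'flies' from the preceding symbol's relations."""
    rels = RELATIONS.get(subject, set())
    # A symbol that attracts insects plausibly heads a noun compound
    # ("fruit flies"); otherwise the verb reading is preferred, even if
    # only metaphorically (an abstract quantity cannot literally fly).
    if ("attracts", "insect") in rels:
        return FLIES_SENSES["noun_insect"]
    return FLIES_SENSES["verb_move"]

print(disambiguate_flies("time"))   # verb reading
print(disambiguate_flies("fruit"))  # noun-compound reading
```

Obviously a real conceptual framework would have to learn both the relations and the inference rules rather than have them hard-coded; the sketch only shows where the disambiguating information lives.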
I realize this leaves the issue unresolved. I include it because I raised the Pirahã example, and it would be disingenuous of me not to mention Everett's interpretation.

> >>>
> I'm saying that if an AI understands & speaks natural language, you've
> solved AGI - your Nobel will be arriving soon.
> <<<
>
> This is just your opinion. I disagree that natural language understanding
> necessarily implies AGI. For instance, I doubt that anyone can prove that
> any system which understands natural language is necessarily able to solve
> the simple equation x * 3 = y for a given y. And if this is not proven
> then we shouldn't assume that natural language understanding without
> hidden further assumptions implies AGI.

Of course, but our opinions have consequences, and in debating those consequences we may arrive at a situation in which one of our positions appears absurd, contradictory, or totally improbable. That is why we debate what is ultimately speculative: sometimes we can show the falsehood of a position without empirical facts.

On to your example. The ability to do algebra is hardly a test of general intelligence, since software like Mathematica can do it. One could say that the ability to be *taught* how to do algebra reflects general intelligence, but again, that involves learning the *language* of mathematical formalism.

> >>>
> The difference between AI1 that understands Einstein, and any AI currently
> in existence, is much greater than the difference between AI1 and Einstein.
> <<<
>
> This might be true but what does this show?

Just that natural language is hard. Obviously we disagree on that.

> >>>
> Sorry, I don't see that, can you explain the proof? Are you saying that
> sign language isn't natural language? That would be patently false. (see
> http://crl.ucsd.edu/signlanguage/)
> <<<
>
> Yes. In my opinion, sign language is no natural language as it is usually
> understood.
So the documented emergence of a totally new sign language among an isolated deaf community is somehow not natural? See http://en.wikipedia.org/wiki/Nicaraguan_Sign_Language (a different link from the last one) for an example of a sign language that developed from a pidgin into a creole in exactly the same way spoken languages do.

> >>>
> So you're agreeing that language is necessary for self-reflectivity. In
> your models, then, self-reflectivity is not important to AGI, since you
> say AGI can be realized without language, correct?
> <<<
>
> No. Self-reflectivity needs just a feedback loop for own processes. I do
> not say that AGI can be realized without language. AGI must produce
> outputs and AGI must obtain inputs. For inputs and outputs there must be
> protocols. These protocols are not fixed but depend on the input devices
> and output devices. For instance the AGI could use the Hubble telescope
> or a microscope or both. For the domain of mathematics a formal language
> which is specified by humans would be the best for input and output.

Agreed.

> >>>
> I'm not saying that language is inherently involved in thinking, but it
> is crucial for the development of *sophisticated* causal models of the
> world - the kind of models that can support self-reflectivity.
> Word-concepts form the basis of abstract symbol manipulation.
>
> That gets the ball rolling for humans, but the conceptual framework that
> emerges is not necessarily tied to linguistics, especially as humans get
> feedback from the world in ways that are not linguistic (scientific
> experimentation/tinkering, studying math, art, music, etc).
> <<<
>
> That is just your opinion again. I tolerate your opinion. But I have a
> different opinion. The future will show which approach is successful.
>
> - Matthias

I think there is a lot more evidence for the idea that language and intelligence are integrated than for the idea that they're not.
I think all of the examples you've used to illustrate your points about language involve data transfers between dumb computers based on predetermined protocols. That is the narrowest domain you could possibly talk about: no intelligence whatsoever is required in that contrived situation. An AGI needs to be competent (i.e., to solve problems) in novel domains, which means learning new protocols for understanding and action. That is the crux of general intelligence, and I don't think you can ignore the *learning* of language, which you've admitted is hard.

Terren

-------------------------------------------
agi | Archives: https://www.listbox.com/member/archive/303/=now