Mark Waser wrote:
Interesting. I believe that we have a fundamental disagreement. I would argue that the semantics *don't* have to be distributed. My argument/proof would be that I believe that *anything* can be described in words -- and that previous narrow-AI systems are brittle because they lack both a) closure over the terms that they use and b) the ability to learn the meaning of *any* new term (traits that I believe humans have -- and I'm not at all sure that the "intelligent" part of humans has distributed semantics). Of course, I'm also pretty sure that my belief is in the minority on this list.

I believe that an English system with closure and learning *is* going to be a complex system, and that it can be grounded (via that closure and via interaction with the real world). And scalability looks less problematic to me with symbols than without.
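To make the closure idea concrete, here is a minimal toy sketch in Python (the lexicon and function names are invented purely for illustration, not taken from any actual system): a lexicon has closure when every term used in a definition is itself defined, and learning a new term means extending the lexicon so that property is kept.

    # Toy illustration only: "closure" means every term appearing in a
    # definition is itself an entry in the lexicon.
    def has_closure(lexicon):
        """lexicon maps each term to the set of terms in its definition."""
        defined = set(lexicon)
        return all(term in defined
                   for definition in lexicon.values()
                   for term in definition)

    # Learning a new term = adding an entry, so that closure is restored
    # whenever an undefined term shows up.
    def learn_term(lexicon, term, definition):
        lexicon[term] = set(definition)

    toy = {"chair": {"seat", "legs"}, "seat": {"surface"},
           "legs": {"surface"}, "surface": set()}
    print(has_closure(toy))  # True: every defining term is itself defined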

We may be different enough in our (hopefully educated) opinions that this e-mail allows for no response other than "We shall see," but I would be interested, if you're willing, in hearing more about why you believe that semantics *must* be distributed (though I will immediately concede that distribution would make them less hackable).

Trust you to ask a difficult question ;-).

I'll just say a few things (leaving more detail for some big fat technical paper in the future).

1) On the question of how *much* the semantics would be distributed: I don't want to overstate my case here. The extent to which they are distributed will be determined by how the system matures, using its learning mechanisms. What that means is that my chosen learning mechanisms, when fully refined, could just happen to create a system in which the atomic concepts were mostly localized, with only a soupçon of distributedness. Or it could go the other way, and the concept of "chair" (say) could end up distributed over a thousand pesky concept-fragments and their connections. I am to some extent agnostic about how that will turn out. (So it may turn out that we are not so far apart, in the end.)
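To make the localized/distributed contrast concrete, here is a minimal sketch of the two extremes (invented purely for illustration, not a description of my actual learning mechanisms): a localist scheme dedicates one unit to "chair", while a distributed scheme smears the concept across many shared fragments.

    import numpy as np

    vocab = ["chair", "table", "dog"]

    # Localist extreme: one dedicated unit per concept (a one-hot vector).
    localist = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

    # Distributed extreme: each concept is a pattern over many shared
    # fragments, so no single unit "is" the concept of chair.
    rng = np.random.default_rng(0)
    n_fragments = 1000
    distributed = {w: rng.normal(size=n_fragments) for w in vocab}

    # Deleting one localist unit destroys exactly one concept; deleting
    # one of the 1000 fragments barely perturbs any of them.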

2) But having said that, I think it would be surprising if a tangled system of atoms and learning mechanisms were to result in something that looked like it had the modular character of a natural language. To me, natural languages look like approximate packaging of something deeper... and if that 'something' deeper were actually modular as well, rather than having a distributed semantics, why doesn't it stop being shy, come up to the surface, be a proper language itself, and stop pestering me with the feeling that *it* is just an approximation to something deeper?! :-)

(Okay, I said that in a very abstract and roundabout way, but if you get what I am driving at, you might see where I am coming from.)

3) But my real, fundamental reason for believing in distributed semantics is that I am obliged (because of the complex systems problem) to follow a certain methodology, and that methodology will not allow me to commit to a particular semantics ahead of time: I just can't do it, because that would be the surest way to fall into the trap of restricting the space of possible complex systems I can consider. And given that, an entirely localist semantics just looks unnatural to me. Apart from anything else, semanticists can only resolve the problem of the correspondence between atomic terms and things in the world by invoking the most bizarre forms of possible-worlds functions, defined over infinite sets of worlds. I find that a stretch, and a weakness.
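For concreteness, the move I am objecting to is the standard Montague-style one, in which the meaning of an atomic term is its intension -- a function from possible worlds to extensions:

    \llbracket \mathrm{chair} \rrbracket : W \to \mathcal{P}(D), \qquad
    \llbracket \mathrm{chair} \rrbracket(w) = \{\, x \in D \mid x \text{ is a chair in world } w \,\}

where W is the (typically infinite) set of possible worlds and D is the domain of individuals. It is the quantification over all of W that strikes me as the stretch.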


Hope that makes sense.



Richard Loosemore