You talked mainly about how sentences require vast amounts of external knowledge to interpret, but that does not imply those sentences cannot be represented in (predicate) logical form. I think there should be a working memory in which sentences under attention "bring up" other sentences by association. For example, if "a person is being kicked" is in working memory, that fact would bring up other facts such as "being kicked causes a person to feel pain and possibly to get angry", etc. All this is orthogonal to *how* the facts are represented.
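To make the idea concrete, here is a minimal sketch in Python of such an associative working memory. The WorkingMemory class, its methods, and the toy association table are all hypothetical illustrations, not anyone's actual system; the facts are plain strings precisely because the mechanism is orthogonal to how the facts are represented.

from collections import deque

class WorkingMemory:
    """Hypothetical sketch: attended facts bring up associated facts."""

    def __init__(self, associations):
        # associations: dict mapping a fact to the facts it brings up
        self.associations = associations
        self.contents = set()

    def attend(self, fact):
        # Breadth-first: each newly added fact may bring up further facts.
        queue = deque([fact])
        while queue:
            current = queue.popleft()
            if current in self.contents:
                continue  # already in working memory
            self.contents.add(current)
            queue.extend(self.associations.get(current, []))

# Toy association table (invented for illustration)
associations = {
    "a person is being kicked": [
        "being kicked causes pain",
        "pain may cause anger",
    ],
    "pain may cause anger": ["an angry person may retaliate"],
}

wm = WorkingMemory(associations)
wm.attend("a person is being kicked")
print(wm.contents)  # the kick brings up pain, anger, and retaliation

Note that the unbounded breadth-first spread here is exactly what makes the control problem below hard: every brought-up fact can bring up more.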
What you have described is how facts in working memory invoke other facts to form a complex scenario. This is what classical AI calls "frames"; I call it working memory. As Ben pointed out, one of the major challenges in AGI is controlling the vast number of facts that follow from, or associate with, the current facts.

YY
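One common way to control that fan-out is spreading activation with decay: each brought-up fact inherits a weakened activation, and facts that fall below a threshold are never brought up at all. The sketch below is a minimal illustration of that idea; the decay factor, threshold, and association table are assumptions invented here, not a description of any system mentioned in the thread.

def spread_activation(associations, seed, decay=0.5, threshold=0.2):
    """Return facts whose activation stays above `threshold`."""
    activation = {seed: 1.0}
    frontier = [seed]
    while frontier:
        fact = frontier.pop()
        next_level = activation[fact] * decay
        if next_level < threshold:
            continue  # too weak to bring up further associations
        for assoc in associations.get(fact, []):
            if next_level > activation.get(assoc, 0.0):
                activation[assoc] = next_level
                frontier.append(assoc)
    return activation

associations = {
    "a person is being kicked": ["being kicked causes pain"],
    "being kicked causes pain": ["pain may cause anger"],
    "pain may cause anger": ["an angry person may retaliate"],
}

print(spread_activation(associations, "a person is being kicked"))
# {'a person is being kicked': 1.0, 'being kicked causes pain': 0.5,
#  'pain may cause anger': 0.25}  -- retaliation falls below threshold

The decay factor bounds how deep an association chain can reach, so the size of working memory stays manageable no matter how large the background knowledge base is.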