On Wed, Apr 3, 2013 at 6:43 PM, Jim Bromer <[email protected]> wrote:

> On Wed, Apr 3, 2013 at 7:31 AM, Andrew G. Babian <[email protected]> wrote:
>
>> ... it's very clear to me that language cannot be the bottom or basis of
>> representation.  A language system has to be a piece on top of the basic
>> system.  It may be the most important piece to us, because for interaction
>> with us, and ability to use our body of written knowledge and contribute to
>> it, a system will need to use language.  But, that need in no way implies
>> that you could ever get any intelligent behavior if you just start at the
>> level of language.
>>
>

> Our computers presently use symbolic language to do all sorts of
> intelligent things and as time passes they can do more and more.  Are you
> saying that language cannot be the basis of representation for AGI?
>
>
Isn't the internet a lovely place to repeat oneself! As I remarked in other
ramblings, language is brilliant for most things and should be very close
to the bottom of things: we do not unfairly or irresponsibly speak of a
film's, composer's, or painter's "vocabulary". Indeed, any reasonable
degree of mastery over any domain comes with its own linguistics, and we
shouldn't shy away from the fact that ordinary human languages represent a
victory of the species: a considerable mastery over space-time, psychology,
biology and zoology at least. In all these domains, and more, considerable
magic happens between the "cracks" of reality: the gaps and overlaps of
symbols, rules, interactions and so on. If it's dancing we are talking about,
then performing flawless jumps, turns and drops is not enough; the flow from
one to the next has to be managed as well, along with the rhythm. When trying
to learn by reading or listening, your mileage will vary with your ability
to ask questions and suggest rephrasings that appropriately generalize
or specialize the discourse. Of course I am a big fan of embodiment, where
the last bit of disambiguation happens: the reality check!

Now, what would "starting at the level of language" mean? Would it be some
kind of PureEnglish in which the state of the world is described, and then
continuously refined with observations and experiments? Let's tie this down
a bit: it would have to be the state of the microworld, the state of the 10,
50, or 1000 entities and the 10 people I know well, right? And it would have
to be probabilistic, because it would not be terribly intelligent to assume
I know what state of mind my long-lost friend is in, and it would be very
naive to believe that the bicycle I left in a corner some days ago will stay
in the state I left it for much longer. So immediately I am given an
opportunity to encode the world either in ProbabilisticEnglish or perhaps in
PureEnglish with probabilities added on top. Is this so impossibly clear and
clearly impossible? I find it a very plausible way to go about things. Some
other representations that come to mind, like a sparse 3D or 4D matrix
holding information about the materials in that space (or the molecules, or
the superstrings), sound terribly incompressible and uncomputable. OK, maybe
you can do smarter things, but language sounds terribly smart already.
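To make the idea a bit more concrete, here is a minimal sketch of what
"PureEnglish with probabilities added on top" might look like. Every name
here (Fact, decayed, the half-life value) is hypothetical, just an
illustration of English-like statements whose confidence decays while the
object goes unobserved, as with the bicycle:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    """An English-like statement about the microworld, with a probability."""
    statement: str       # e.g. "the bicycle is in the corner"
    probability: float   # confidence at the time of last observation
    age_days: float = 0  # time since the statement was last confirmed

def decayed(fact: Fact, half_life_days: float = 7.0) -> float:
    """Unobserved facts drift toward ignorance (exponential confidence decay)."""
    return fact.probability * 0.5 ** (fact.age_days / half_life_days)

world = [
    Fact("the bicycle is in the corner", 0.95, age_days=14),
    Fact("my long-lost friend is cheerful", 0.50, age_days=3650),
    Fact("the kettle is in the kitchen", 0.99, age_days=0),
]

for f in world:
    print(f"{f.statement}: p = {decayed(f):.2f}")
```

The point is not the particular decay curve; it is that the statements stay
readable as English while the probabilities ride on top of them.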

Now, I will try to recap my position: a learning or intelligent system can
be anything and everything, but a human-like intellect will need decent
modelling of the human mind, mammalian locomotion, everyday objects and
physics, and most importantly the transitions of these things. As much as I
am not a fan of OO programming, I don't see reasoning happening without
objects, object histories and person histories (and personalities, of
course). Ontologies should be built around these objects for extra
productivity, and everything has to be kept somewhat flexible: whether
fairies exist or not we know not, and the same goes for gravitons; our
sensors will not have unlimited reliability, and our interaction with
sentient beings will include the systemic risks of lies and inaccuracies.
Of course all this flexibility adds problem-space dimensionality, and the
real point of intelligence is to navigate it without ending up in loops and
dead ends. But will it be a problem if object histories are described in an
English subset? I don't think so. It would be more of a problem if the
proto-AGI misbehaved and we lacked linguistic clues to debug it, for example
if it contemplated the prisoner's dilemma and fell into infinite loops.
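A minimal sketch of the object-history idea, with every identifier
hypothetical: each entity accumulates time-stamped English-subset statements,
and the same record doubles as the human-readable trail you would want when
debugging a misbehaving proto-AGI.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """An object or person carrying a time-stamped history of statements."""
    name: str
    history: list = field(default_factory=list)  # (time, statement, probability)

    def record(self, t: float, statement: str, p: float = 1.0) -> None:
        self.history.append((t, statement, p))

    def trace(self) -> str:
        """Render the history as an English-subset debugging trail."""
        return "\n".join(f"t={t}: {self.name} {s} (p={p:.2f})"
                         for t, s, p in self.history)

bicycle = Entity("the bicycle")
bicycle.record(0, "was left in the corner")
bicycle.record(5, "is probably still in the corner", p=0.6)
print(bicycle.trace())
```

Nothing here requires OO machinery in particular; the same histories could
live in any store, as long as they stay legible.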

For all I know, the various "failed" linguistic systems never tried to enter
this high-dimensional probabilistic world. What is the point of story
understanding (or of CYC) if you can't factor in the probability that the
opening sentence in a book is actually a spoon talking, especially if
you've paid an engineer to code logical rules like "only humans talk"? So,
in my mind the problem was not language but dimensionality; more
specifically, dimensionality avoidance!

AT



-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424