Josh,

Again, a good reply.  So it appears the problem is that they don't have
good automatic learning of semantics.

But, of course, that's virtually impossible to do in small systems except,
perhaps, in trivial domains.  It becomes much easier on tera-machines.
So if my interpretation of what you are saying is correct, it bodes well
for overcoming this problem in the coming years with the coming hardware.

I look forward to reading Pei's article on this subject.  It may shed some
new light on my understanding, though it may take me some time: I read and
understand symbolic logic slowly.

Edward W. Porter
Porter & Associates
24 String Bridge S12
Exeter, NH 03833
(617) 494-1722
Fax (617) 494-1822
[EMAIL PROTECTED]



-----Original Message-----
From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]]
Sent: Thursday, October 04, 2007 4:30 PM
To: agi@v2.listbox.com
Subject: Re: [agi] breaking the small hardware mindset


Let me answer with an anecdote. I was just in the shop playing with some
small robot motors, and I needed a punch to remove a pin holding a gearbox
onto one of them. I didn't have a purpose-made punch, so I cast around in
the toolbox until, aha!, I found an object close enough to use. (It was a
small rat-tail file.)

Now the file and a true punch have many things in common, and many other
things different. Among the common things that were critical: the hardened
steel of the file wouldn't bend and wedge beside the pin, and I could
hammer on the other end of it. These semantic aspects of the file had to
match the same ones of the punch before I could see it as one.

Where did these semantic aspects come from? Somehow I've learned enough
about punches and files to know what a punch needs (i.e. which of its
properties are necessary for it to work) and what a file gives.
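
To make that concrete, here is a toy sketch in Python -- the property
names and sets are invented for illustration, not taken from any real
system:

    # Toy sketch: a task's needs and a tool's affordances as property
    # sets; a tool can substitute if its properties cover the needs.
    # All property names below are invented for illustration.

    PUNCH_NEEDS = {"rigid", "hammerable", "fits_beside_pin"}

    TOOLBOX = {
        "rattail_file": {"rigid", "hammerable", "fits_beside_pin",
                         "abrasive"},
        "screwdriver":  {"hammerable", "fits_beside_pin"},  # may bend
        "pencil":       {"fits_beside_pin"},
    }

    def substitutes(needs, toolbox):
        """Yield every tool whose properties cover the needed set."""
        for tool, gives in toolbox.items():
            if needs <= gives:     # set inclusion: every need supplied
                yield tool

    print(list(substitutes(PUNCH_NEEDS, TOOLBOX)))  # ['rattail_file']

Everything interesting is hidden in those hand-entered property sets --
which is exactly the point.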

In Copycat, the idea is to build up an interpretation of an object
(analogy as perception) under pressures from what it has to match. So
far, well and good -- that's what I was doing. But in Copycat (and
Tabletop and ...) the semantics is built in and ad hoc. And there isn't
really all that much of an analogy-net matching algorithm without the
semantics (codelets).
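
To caricature what "built in and ad hoc" means -- this is a toy of my
own, not Copycat's actual algorithm -- imagine each codelet as a tiny
hand-written rule that proposes one kind of slippage:

    import random

    # Toy codelet loop (NOT the real Copycat): each codelet is a small
    # hand-written rule proposing one slippage.  All of the semantic
    # knowledge lives in these rules, hard-wired.

    def slip_identity(a, b):
        return a == b

    def slip_successor(a, b):
        # hand-coded semantics: 'c' may slip to its successor 'd'
        return ord(b) - ord(a) == 1

    CODELETS = [slip_identity, slip_successor]

    def interpret(source, target, steps=200):
        """Run random codelets until each letter pair is explained."""
        mapping = {}
        for _ in range(steps):
            i = random.randrange(len(source))
            if random.choice(CODELETS)(source[i], target[i]):
                mapping[source[i]] = target[i]
            if len(mapping) == len(set(source)):
                break
        return mapping

    print(interpret("abc", "abd"))  # {'a': 'a', 'b': 'b', 'c': 'd'}

Swap in different hand-written codelets and you get a different
"semantics"; nothing in the loop itself knows anything about letters,
punches, or files.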

In my case, I have lots of experience misusing tools, so I have built up
an internal theory of which properties are likely to matter and which
aren't.
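
The kind of learning that might replace those hand-written rules could
look something like this -- purely as an illustration, with made-up
trial data -- weighting properties by how often they co-occurred with a
successful substitution:

    from collections import defaultdict

    # Purely illustrative: estimate which properties matter by how
    # often they co-occurred with a successful tool substitution.

    def learn_relevance(trials):
        """trials: a list of (property_set, succeeded) pairs."""
        stats = defaultdict(lambda: [0, 0])  # prop -> [successes, uses]
        for props, ok in trials:
            for p in props:
                stats[p][0] += ok
                stats[p][1] += 1
        return {p: wins / uses for p, (wins, uses) in stats.items()}

    trials = [
        ({"rigid", "hammerable"}, True),   # the file worked as a punch
        ({"hammerable"}, False),           # a soft rod bent and wedged
        ({"rigid", "abrasive"}, True),
    ]
    print(learn_relevance(trials))
    # values: rigid 1.0, hammerable 0.5, abrasive 1.0
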
I think this most closely matches your even-numbered points below :-)

Perhaps more succinctly: they have a general-purpose representation, but
it's snippets of hand-written Lisp code, with no way to automatically
generate more like it.

Josh

On Thursday 04 October 2007 02:59:38 pm, Edward W. Porter wrote:
> Josh,
>
> (Talking of “breaking the small hardware mindset,” thank god for the
> company with the largest hardware mindset -- or at least the largest
> physical embodiment of one-- Google.  Without them I wouldn’t have
> known what “FARG” meant, and would have had to either (1) read your
> valuable response with less than the understanding it deserves or (2)
> embarrassed myself by admitting ignorance and asking for a
> clarification.)
>
> With regard to your answer, copied below, I thought the answer would
> be something like that.
>
> So which of the below types of “representational problems” are the
> reasons why their basic approach is not automatically extendable?
>
>               1. They have no general purpose representation that can represent
> almost anything in a sufficiently uniform representational scheme to
> let their analogy net matching algorithm be universally applied
> without requiring custom patches for each new type of thing to be
> represented.
>
>               2. They have no general purpose mechanism for determining what are
> relevant similarities and generalities across which to allow slippage
> for purposes of analogy.
>
>               3. They have no general purpose mechanism for
> automatically finding which compositional patterns map to which lower
> level representations, and which of those compositional patterns are
> similar to each other in a way appropriate for slippages.
>
>               4. They have no general purpose mechanism for
> automatically determining what would be appropriately coordinated
> slippages in semantic hyperspace.
>
>               5. Some reason not listed above.
>
> I don’t know the answer.  There is no reason why you should.  But if
> you -- or any other interested reader -- do, or if you have any good
> thoughts on the subject, please tell me.
>
> I may be naïve.  I may be overly big-hardware optimistic.  But based
> on the architecture I have in mind, I think a Novamente-type system,
> if it is not already architected to do so, could be modified to handle
> all of these problems (except perhaps 5, if there is a 5) and, thus,
> provide powerful analogy drawing across virtually all domains.
>
> Edward W. Porter
> Porter & Associates
> 24 String Bridge S12
> Exeter, NH 03833
> (617) 494-1722
> Fax (617) 494-1822
> [EMAIL PROTECTED]
>

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50129778-7b4bfa
