My mistake --- the previous email was meant to be private, though I
was too tired to remember that I shouldn't use "reply". :-(

Anyway, I don't mind sharing this paper, but please don't post it on the Web.

Pei

On 10/4/07, Pei Wang <[EMAIL PROTECTED]> wrote:
> Mike,
>
> Attached is the paper (for your personal use only). Comments are welcome.
>
> Pei
>
> On 10/4/07, mike ramsey <[EMAIL PROTECTED]> wrote:
> > If permissible, I too would be interested in the JoETAI version of your
> > paper.
> >
> > Thanks,
> >  Mike Ramsey
> >
> >
> > On 10/4/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> > >
> > > In response to Pei Wang's post of 10/4/2007 3:13 PM
> > >
> > > Thanks for giving us a pointer to such inside info.
> > >
> > > Googling for the article you listed, I found
> > >
> > >
> > > 1. The Logic of Categorization, by Pei Wang, at
> > > http://nars.wang.googlepages.com/wang.categorization.pdf
> > > FOR FREE; and
> > >
> > > 2. A logic of categorization. Authors: Wang, Pei; Hofstadter, Douglas;
> > > Source: Journal of Experimental & Theoretical Artificial Intelligence,
> > > Volume 18, Number 2, June 2006, pp. 193-213(21) FOR $46.92
> > >
> > > Is the free one roughly as good as the $46.92 one, and, if not, are you
> > > allowed to send me a copy of the better one for free?
> > >
> > > Edward W. Porter
> > > Porter & Associates
> > > 24 String Bridge S12
> > > Exeter, NH 03833
> > > (617) 494-1722
> > > Fax (617) 494-1822
> > > [EMAIL PROTECTED]
> > >
> > > -----Original Message-----
> > > From: Pei Wang [mailto:[EMAIL PROTECTED]
> > > Sent: Thursday, October 04, 2007 3:13 PM
> > > To: agi@v2.listbox.com
> > > Subject: Re: [agi] breaking the small hardware mindset
> > >
> > >
> > > On 10/4/07, Edward W. Porter <[EMAIL PROTECTED]> wrote:
> > > >
> > > > Josh,
> > > >
> > > > (Talking of "breaking the small hardware mindset," thank god for the
> > > > company with the largest hardware mindset -- or at least the largest
> > > > physical embodiment of one -- Google.  Without them I wouldn't have
> > > > known what "FARG" meant, and would have had to either (1) read your
> > > > valuable response with less than the understanding it deserves or (2)
> > > > embarrass myself by admitting ignorance and asking for a
> > > > clarification.)
> > > >
> > > > With regard to your answer, copied below, I thought the answer would
> > > > be something like that.
> > > >
> > > > So which of the following types of "representational problems" are the
> > > > reasons why their basic approach is not automatically extendable?
> > > >
> > > >
> > > > 1. They have no general purpose representation that can represent
> > > > almost anything in a sufficiently uniform representational scheme to
> > > > let their analogy net matching algorithm be universally applied
> > > > without requiring custom patches for each new type of thing to be
> > > > represented.
> > > >
> > > > 2. They have no general purpose mechanism for determining what are
> > > > relevant similarities and generalities across which to allow slippage
> > > > for purposes of analogy.
> > > >
> > > > 3. They have no general purpose mechanism for automatically finding
> > > > which compositional patterns map to which lower level representations,
> > > > and which of those compositional patterns are similar to each other in
> > > > a way appropriate for slippages.
> > > >
> > > > 4. They have no general purpose mechanism for automatically
> > > > determining what would be appropriately coordinated slippages in
> > > > semantic hyperspace.
> > > >
> > > > 5. Some reason not listed above.
> > > >
> > > > I don't know the answer.  There is no reason why you should.  But if
> > > > you -- or any other interested reader -- do, or if you have any good
> > > > thoughts on the subject, please tell me.
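> > > >
> > > > To make 1 and 2 concrete, here is a minimal sketch of the kind of
> > > > machinery I have in mind (the triple format, the solar-system/atom
> > > > toy domains, and every name below are hypothetical and mine alone,
> > > > not code from FARG or Novamente): a domain is a bag of
> > > > (relation, subject, object) triples, and a brute-force structure
> > > > mapper scores each object correspondence by how many relations it
> > > > preserves.
> > > >
> > > >     from itertools import permutations
> > > >
> > > >     # Toy "uniform representation": a domain is a set of
> > > >     # (relation, subject, object) triples over atomic symbols.
> > > >     SOLAR = {
> > > >         ("revolves_around", "planet", "sun"),
> > > >         ("more_massive", "sun", "planet"),
> > > >         ("attracts", "sun", "planet"),
> > > >     }
> > > >     ATOM = {
> > > >         ("revolves_around", "electron", "nucleus"),
> > > >         ("more_massive", "nucleus", "electron"),
> > > >     }
> > > >
> > > >     def objects(domain):
> > > >         # All atomic symbols mentioned in a domain's triples.
> > > >         return {x for (_, a, b) in domain for x in (a, b)}
> > > >
> > > >     def best_mapping(base, target):
> > > >         # Brute force: try every object-to-object correspondence
> > > >         # and keep the one preserving the most base relations.
> > > >         base_objs = sorted(objects(base))
> > > >         tgt_objs = sorted(objects(target))
> > > >         best, best_score = None, -1
> > > >         for perm in permutations(tgt_objs, len(base_objs)):
> > > >             m = dict(zip(base_objs, perm))
> > > >             score = sum((r, m[a], m[b]) in target
> > > >                         for (r, a, b) in base)
> > > >             if score > best_score:
> > > >                 best, best_score = m, score
> > > >         return best, best_score
> > > >
> > > >     print(best_mapping(SOLAR, ATOM))
> > > >     # -> ({'planet': 'electron', 'sun': 'nucleus'}, 2)
> > > >
> > > > Of course this sidesteps the hard part -- deciding which relations
> > > > may "slip" to near-synonyms instead of matching exactly -- which is
> > > > just what 2 through 4 are asking for.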
> > >
> > > I guess I do know more on this topic, but it is a long story that I
> > > don't have time to tell. Hopefully the following paper can answer some
> > > of the questions:
> > >
> > > A logic of categorization
> > > Pei Wang and Douglas Hofstadter
> > > Journal of Experimental & Theoretical Artificial Intelligence, Vol.18,
> > > No.2, Pages 193-213, 2006
> > >
> > > Pei
> > >
> > > > I may be naïve.  I may be overly big-hardware optimistic.  But based
> > > > on the architecture I have in mind, I think a Novamente-type system,
> > > > if it is not already architected to do so, could be modified to handle
> > > > all of these problems (except perhaps 5, if there is a 5) and, thus,
> > > > provide powerful analogy drawing across virtually all domains.
> > > >
> > > > Edward W. Porter
> > > > Porter & Associates
> > > > 24 String Bridge S12
> > > > Exeter, NH 03833
> > > > (617) 494-1722
> > > > Fax (617) 494-1822
> > > > [EMAIL PROTECTED]
> > > >
> > > > -----Original Message-----
> > > > From: J Storrs Hall, PhD [mailto:[EMAIL PROTECTED]
> > > > Sent: Thursday, October 04, 2007 1:44 PM
> > > > To: agi@v2.listbox.com
> > > > Subject: Re: [agi] breaking the small hardware mindset
> > > >
> > > > On Thursday 04 October 2007 10:56:59 am, Edward W. Porter wrote:
> > > > > You appear to know more on the subject of current analogy drawing
> > > > > research than I do. So could you please explain to me what are the
> > > > > major current problems people are having in trying to figure out how to
> > > > > draw analogies using a structure mapping approach that has a
> > > > > mechanism for coordinating similarity slippage, an approach somewhat
> > > > > similar to Hofstadter's approach in Copycat?
> > > >
> > > > > Let's say we want a system that could draw analogies in real time
> > > > > when generating natural language output at the level people can,
> > > > > assuming there is some roughly semantic-net like representation of
> > > > > world knowledge, and let's say we have roughly brain-level hardware,
> > > > > whatever that is.  What are the current major problems?
> > > >
> > > > The big problem is that structure mapping is brittlely dependent on
> > > > representation, as Hofstadter complains; but the FARG school
> > > > hasn't really come up with a generative theory (every Copycat-like
> > > > analogizer requires a pile of human-written Codelets that grows
> > > > linearly with the knowledge base -- and thus there is a real problem
> > > > building a Copycat that can learn its concepts).
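> > > >
> > > > To make that scaling complaint concrete, here is a toy sketch (every
> > > > name below is hypothetical and mine, not actual Copycat code): each
> > > > concept the program can perceive is one hand-written codelet, so the
> > > > registry grows entry by entry with the concept vocabulary, and
> > > > nothing in it can learn a new concept from data.
> > > >
> > > >     # One hand-authored predicate per perceptual concept.
> > > >     def successor_codelet(s):
> > > >         # Fires on runs of alphabetic successors, e.g. "abc".
> > > >         return all(ord(b) - ord(a) == 1 for a, b in zip(s, s[1:]))
> > > >
> > > >     def sameness_codelet(s):
> > > >         # Fires on strings repeating one letter, e.g. "kkk".
> > > >         return len(set(s)) == 1
> > > >
> > > >     CODELETS = {
> > > >         "successor": successor_codelet,
> > > >         "sameness": sameness_codelet,
> > > >         # "predecessor": ...  every new concept means another
> > > >         # "mirror": ...       hand-written entry in this table.
> > > >     }
> > > >
> > > >     def describe(s):
> > > >         # Report every concept whose codelet fires on the string.
> > > >         return [n for n, fires in CODELETS.items() if fires(s)]
> > > >
> > > >     print(describe("abc"))  # ['successor']
> > > >     print(describe("kkk"))  # ['sameness']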
> > > >
> > > > In my humble opinion, of course.
> > > >
> > > > Josh
> > > >
> > >
> > >
> >
>
>

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=8660244&id_secret=50188520-590fc9
