On Sat, Sep 20, 2008 at 6:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote:

> --- On Sat, 9/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >If formal reasoning were a solved problem in AI, then we would have
> theorem-provers that could prove deep, complex theorems unassisted.   We
> don't.  This indicates that formal reasoning is NOT a solved problem,
> because no one has yet gotten "history guided adaptive inference control" to
> really work well.  Which is IMHO because formal reasoning guidance
> ultimately requires the same kind of analogical, contextual commonsense
> reasoning as guidance of reasoning about everyday life...
>
> I mean that formal reasoning is solved in the sense of executing
> algorithms, once we can state the problems in that form. I know that some
> problems in CS are hard. I think that the intuition that mathematicians use
> to prove theorems is a language modeling problem.



It seems a big stretch to me to call theorem-proving guidance a "language
modeling problem" ... one may be able to make sense of this statement, but
only by treating the concept of language VERY abstractly, differently from
the commonsense use of the word...

Lakoff and Núñez have made strong arguments that mathematical reasoning is
guided by embodiment-related intuition.

Of course, one can model all of physical reality using formal language
theory, in which case all of intelligence becomes language modeling ... but
it's not clear to me what is gained by adopting this terminology and
perspective.


>
>
> >Also, you did not address my prior point that Hebbian learning at the
> neural level is strikingly similar to formal logic...
>
> I agree that neural networks can model formal logic. However, I don't think
> that formal logic is a good way to model neural networks.
>

I'm not talking about either of those.  Of course logic and NN's can be used
to model each other (as both are Turing-complete formalisms), but that's not
the point I was making.

The point I was making is that certain NN's and certain logic systems are
highly analogous to each other in the kinds of operations they carry out and
how they organize these operations.  Both implement very similar cognitive
processes.
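To make the analogy concrete, here is a toy sketch of my own (not something from this thread, and all names in it are hypothetical): a Hebbian weight update and a frequency-counted implication link both strengthen the connection between A and B from the same co-occurrence evidence, which is the kind of operational similarity I have in mind.

```python
# Toy illustration: the same stream of (A fired, B fired) observations
# drives both a Hebbian synapse and a frequentist implication strength.

def hebbian_update(w, pre, post, lr=0.1):
    """Hebb's rule: strengthen the weight when pre and post fire together."""
    return w + lr * pre * post

def implication_strength(count_a_and_b, count_a):
    """A frequency-based truth value for 'A implies B', as in probabilistic logics."""
    return count_a_and_b / count_a if count_a else 0.0

observations = [(1, 1), (1, 1), (1, 0), (0, 1)]  # (A fired, B fired)

w = 0.0
count_a = count_ab = 0
for a, b in observations:
    w = hebbian_update(w, a, b)   # grows only on joint firings
    count_a += a
    count_ab += a * b

# w == 0.2 after two joint firings; implication strength == 2/3
```

Both quantities are monotone in the joint-firing count, which is the sense in which the two systems carry out analogous operations rather than merely simulating each other.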


>
> Language learning consists of learning associations between concepts
> (possibly time-delayed, enabling prediction) and learning new concepts by
> clustering in context space. Both of these operations can be done
> efficiently and in parallel with neural networks. They can't be done
> efficiently with logic.
>

I disagree that association-learning and clustering cannot be done
efficiently in a logic system.

I also disagree that these are the hard parts of cognition, though I do
think they are necessary parts.

>
> There is experimental evidence to back up this view. The top two
> compressors in my large text benchmark use dictionaries in which
> semantically related words are grouped together and the groups are used as
> context. In the second place program (paq8hp12any), the grouping was done
> mostly manually. In the top program (durilca4linux), the grouping was done
> by clustering in context space.
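For readers unfamiliar with the phrase, "clustering in context space" can be illustrated with a deliberately tiny sketch of my own (it is emphatically NOT the actual paq8hp12any or durilca4linux pipeline): represent each word by counts of its immediate neighbours, and words with similar context vectors fall into the same group.

```python
# Hypothetical miniature of clustering in context space: words that occur
# in similar neighbourhoods ("the cat sat", "the dog sat") get similar vectors.
from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()

def context_vector(word):
    """Counts of words appearing within a +/-1 window around `word`."""
    ctx = Counter()
    for i, w in enumerate(corpus):
        if w == word:
            if i > 0:
                ctx[corpus[i - 1]] += 1
            if i < len(corpus) - 1:
                ctx[corpus[i + 1]] += 1
    return ctx

def cosine(u, v):
    """Cosine similarity of two sparse count vectors (Counters)."""
    dot = sum(u[k] * v[k] for k in u)  # Counter returns 0 for missing keys
    norm = lambda x: sum(c * c for c in x.values()) ** 0.5
    return dot / (norm(u) * norm(v))

# 'cat' and 'dog' share their contexts exactly (the _ sat), so they cluster
# together; 'mat' is less similar to 'cat' and would land elsewhere.
sim_cat_dog = cosine(context_vector("cat"), context_vector("dog"))
sim_cat_mat = cosine(context_vector("cat"), context_vector("mat"))
```

A real system would run an actual clustering algorithm (e.g. k-means) over such vectors for a large vocabulary; the point of the sketch is only what "context space" means.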


In my view, current text compression algorithms, which are essentially based
on word statistics, have fairly little to do with AGI ... so looking at
which techniques are best for statistical text compression is not very
interesting to me.

I understand that

1)
there is conceptual similarity between text compression and AGI, in that both
involve recognition of probabilistic patterns

2)
ultimately, an AGI will be able to compress text way better than our current
compression algorithms

But nevertheless, I don't think that the current best-of-breed text
processing approaches have much to teach us about AGI.

To pursue an overused metaphor, to me that's sort of like trying to
understand flight by carefully studying the most effective high-jumpers.
OK, you might learn something, but you're not getting at the crux of the
problem...

-- Ben G



-------------------------------------------
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/