On 11/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Google uses a cluster of 10^6 CPUs, enough to keep a copy of the searchable
> part of the Internet in RAM.
And a list of millions of hits is the ideal way to represent the
results, right? Ask.com is publicly mocking this fact in an effort to
m
--- Mike Dougherty <[EMAIL PROTECTED]> wrote:
> On 11/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> > Well, one alternative is to deduce that aluminum is a mass noun by the low
> > frequency of phrases like "an aluminum is" from a large corpus of text (or
> > count Google hits). You could also
On 11/2/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Well, one alternative is to deduce that aluminum is a mass noun by the low
> frequency of phrases like "an aluminum is" from a large corpus of text (or
> count Google hits). You could also deduce that aluminum is an adjective from
> phrases lik
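[Editorial sketch] The frequency heuristic Matt describes — mass nouns rarely follow an indefinite article — can be approximated in a few lines. The toy corpus, function name, and threshold interpretation below are illustrative assumptions, not anything from the thread:

```python
import re

def mass_noun_score(word, corpus):
    """Heuristic from the thread: mass nouns rarely follow 'a'/'an'.
    Returns the fraction of occurrences preceded by the indefinite article;
    a score near 0 suggests a mass noun."""
    article = "an" if word[0].lower() in "aeiou" else "a"
    total = len(re.findall(r"\b%s\b" % re.escape(word), corpus, re.I))
    with_article = len(re.findall(r"\b%s %s\b" % (article, re.escape(word)),
                                  corpus, re.I))
    return with_article / total if total else 0.0

# Toy corpus standing in for "a large corpus of text (or Google hits)".
corpus = ("Aluminum is a metal. The plant produces aluminum. "
          "A dog barked. I saw a dog. The dog slept.")
print(mass_noun_score("aluminum", corpus))  # 0.0 -> likely a mass noun
print(mass_noun_score("dog", corpus))       # ~0.67 -> likely a count noun
```

In practice one would use n-gram counts from a web-scale corpus rather than regex scans, but the decision rule is the same ratio.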
Linas,
I mainly tried to show that you are in fact not moving your system
forward learning-wise by attaching a chatbot facade to it. That "My
scaffolding learns" is an overstatement in this context.
You should probably move in the direction of NARS; it seems
fundamental enough to be near the mark
--- Linas Vepstas <[EMAIL PROTECTED]> wrote:
> So, after asserting "aluminum is a mass noun", it might plausibly deduce
> "most minerals are mass nouns" -- one could call this "data mining".
> This would use the same algo as deducing that many of the things called
> "lincoln" are "counties".
>
>
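[Editorial sketch] The generalization step Linas calls "data mining" — noticing that most members of a category share a property and proposing a rule — can be sketched over a toy triple store. The knowledge base, predicate names, and 0.6 threshold below are invented for illustration:

```python
from collections import Counter

# Toy triple store: (subject, predicate, object) assertions.
kb = [
    ("aluminum", "isa", "mineral"), ("aluminum", "isa", "mass_noun"),
    ("quartz",   "isa", "mineral"), ("quartz",   "isa", "mass_noun"),
    ("feldspar", "isa", "mineral"), ("feldspar", "isa", "mass_noun"),
    ("diamond",  "isa", "mineral"), ("diamond",  "isa", "count_noun"),
]

def generalize(kb, category, threshold=0.6):
    """If most members of `category` share a property, propose a
    (category, property, confidence) rule -- e.g. 'most minerals
    are mass nouns'."""
    members = {s for s, p, o in kb if p == "isa" and o == category}
    props = Counter(o for s, p, o in kb
                    if p == "isa" and s in members and o != category)
    return [(category, prop, n / len(members))
            for prop, n in props.items() if n / len(members) >= threshold]

print(generalize(kb, "mineral"))
# mass_noun holds for 3 of 4 minerals -> rule proposed with confidence 0.75
```

The same loop would propose "many things called 'lincoln' are counties" given the corresponding assertions; only the category argument changes.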
On Sat, Nov 03, 2007 at 12:06:48AM +0300, Vladimir Nesov wrote:
> On 11/2/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> > On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
> > > On 11/2/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> > > > On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vla
On 11/2/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
> > On 11/2/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> > > On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
> > > > But the learning problem isn't changed by it. And
On Fri, Nov 02, 2007 at 10:34:26PM +0300, Vladimir Nesov wrote:
> On 11/2/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> > On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
> > > But the learning problem isn't changed by it. And if you solve the
> > > learning problem, you don't need any
On 11/2/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
> > But the learning problem isn't changed by it. And if you solve the
> > learning problem, you don't need any scaffolding.
>
> But you won't know how to solve the learning problem un
On Fri, Nov 02, 2007 at 08:51:43PM +0300, Vladimir Nesov wrote:
> But the learning problem isn't changed by it. And if you solve the
> learning problem, you don't need any scaffolding.
But you won't know how to solve the learning problem until you try.
--linas
-
This list is sponsored by AGIRI:
On 11/2/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> On Fri, Nov 02, 2007 at 09:01:42AM -0700, Charles D Hixson wrote:
> > To me this point seems only partially valid. 1M hand coded rules seems
> > excessive, but there should be some number (100? 1000?) of hand-coded
> > rules (not unchangeable!
On Fri, Nov 02, 2007 at 09:01:42AM -0700, Charles D Hixson wrote:
> To me this point seems only partially valid. 1M hand coded rules seems
> excessive, but there should be some number (100? 1000?) of hand-coded
> rules (not unchangeable!) that it can start from. An absolute minimum
> would see
On Fri, Nov 02, 2007 at 11:27:08AM +0300, Vladimir Nesov wrote:
> Linas,
>
> Yes, you probably can code all the patterns you need. But it's only
> the tip of the iceberg: the problem is that for those 1M rules there are
> also thousands that are being constantly generated, assessed and
> discarded. Kn
To me this point seems only partially valid. 1M hand-coded rules seems
excessive, but there should be some number (100? 1000?) of hand-coded
rules (not unchangeable!) that it can start from. An absolute minimum
would seem to be "everything in 'Fun with Dick and Jane' through 'My
Little White
Linas,
Yes, you probably can code all the patterns you need. But it's only
the tip of the iceberg: the problem is that for those 1M rules there are
also thousands that are being constantly generated, assessed and
discarded. Knowledge formation happens all the time and adapts those
1M rules to gazillio
On 11/1/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> Yes, I figured as much. I haven't yet seen a cogent explanation of
> why folks gave up. For shrdlu, sure .. compute power was limited.
IIRC, the reason SHRDLU wasn't taken any further wasn't to do with
computing power; it was because the progr
--- Linas Vepstas <[EMAIL PROTECTED]> wrote:
> On Thu, Nov 01, 2007 at 02:58:07PM -0700, Matt Mahoney wrote:
> > There is a great temptation to insert knowledge directly,
> > but the result is always the same. Natural language is a complicated
> beast.
> > You cannot hand code all the language
On Thu, Nov 01, 2007 at 06:58:14PM -0400, Pei Wang wrote:
> On 11/1/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
> >
> > More importantly, I've started struggling with representing
> > conversational state. i.e. "what are we talking about?" "what
> > has been said so far?" I've got some inkling on
On 11/1/07, Linas Vepstas <[EMAIL PROTECTED]> wrote:
>
> More importantly, I've started struggling with representing
> conversational state. i.e. "what are we talking about?" "what
> has been said so far?" I've got some inkling on how to expand
> conversational state, but it's ad hoc so far.
>
> Thu
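[Editorial sketch] One minimal way to represent the conversational state Linas is struggling with — enough to answer "what are we talking about?" and "what has been said so far?" — is a current-topic slot plus a bounded utterance history. The class below is a hypothetical illustration, not anything from his system:

```python
from collections import deque

class ConversationState:
    """Toy discourse state: a current topic plus the last few
    utterances, the two things Linas asks about."""
    def __init__(self, history_size=10):
        self.topic = None
        self.history = deque(maxlen=history_size)  # drops oldest turns

    def hear(self, speaker, utterance, topic=None):
        """Record an utterance; optionally shift the topic."""
        self.history.append((speaker, utterance))
        if topic is not None:
            self.topic = topic

    def what_are_we_talking_about(self):
        return self.topic

    def what_has_been_said(self):
        return list(self.history)

state = ConversationState()
state.hear("user", "Is aluminum a mass noun?", topic="aluminum")
state.hear("bot", "'An aluminum' is rare in corpora, so probably yes.")
print(state.what_are_we_talking_about())  # aluminum
```

A real system would need referent resolution and topic stacks rather than a single slot, which is presumably where the "ad hoc" part begins.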
On Thu, Nov 01, 2007 at 02:58:07PM -0700, Matt Mahoney wrote:
> --- Linas Vepstas <[EMAIL PROTECTED]> wrote:
>
> > Thus, I find that my interests are now turning to representing
> > conversational state. How does novamente deal with it? What
> > about Pei Wang's NARS? It seems that NARS is a reaso
--- Linas Vepstas <[EMAIL PROTECTED]> wrote:
> Thus, I find that my interests are now turning to representing
> conversational state. How does novamente deal with it? What
> about Pei Wang's NARS? It seems that NARS is a reasoning system;
> great; but what is holding me back right now is not an ab
On Wed, Oct 31, 2007 at 05:53:48PM -0700, Matt Mahoney wrote:
> --- Linas Vepstas <[EMAIL PROTECTED]> wrote:
> > Aside from Novamente and CYC, who else has attempted to staple
> > NLP to a reasoning engine?
>
> Many have tried, such as BASEBALL in 1961 [1] and SHRDLU in 1968-70 [2]. But
Thanks,