Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
ill be a smaller place. > > Posted by Chad Hurley, CEO and Co-Founder, YouTube

Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
On Fri, Sep 19, 2008 at 10:46 AM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Fri, 9/19/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > Mike, Google has had basically no impact on the AGI thinking of myself or > 95% of the other serious AGI researchers I know... >

Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel

Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
> > That's the main reason why you think logic, maths and language are all you > really need for intelligence - paper. > Just for clarity: while I think that in principle one could make a maths-only AGI, my present focus is on building an AGI that is embodied in virtual robots and potentially real

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-19 Thread Ben Goertzel
Matt wrote, > There seems to be a lot of effort to implement reasoning in knowledge > representation systems, even though it has little to do with how we actually > think. Please note that not all of us in the AGI field are trying to closely emulate human thought. Human-level thought does not

Re: [agi] Where the Future of AGI Lies

2008-09-19 Thread Ben Goertzel
per that > demo? And do either virtual/real robots use vision?

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
entioned several times on this list that NARS has no > proper probabilistic interpretation. But, I think I have found one > that works OK. Not perfectly. There are some differences, but the > similarity is striking (at least to me). > > I imagine that what I have come up with is not

[agi] Convergence08 future technology conference...

2008-09-20 Thread Ben Goertzel
savvy audience. If you are unsure about your subject matter, please feel free to run the idea by co-organizer James Clement<[EMAIL PROTECTED]>

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 4:44 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Fri, 9/19/08, Jan Klauck <[EMAIL PROTECTED]> wrote: > > > Formal logic doesn't scale up very well in humans. That's why this > > kind of reasoning is so unpopular. Our capacities are that > > small and we connect to

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
> > I haven't read the PLN book yet (though I downloaded a copy, thanks!), > but at present I don't see why term probabilities are needed... unless > inheritance relations "A inh B" are interpreted as conditional > probabilities "A given B". I am not interpreting them that way-- I am > just treatin

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
> And the definition 3.7 that you mentioned *does* match up, perfectly, > when the {w+, w} truth-value is interpreted as a way of representing > the likelihood density function of the prob_inh. Easy! The challenge > is section 4.4 in the paper you reference: syllogisms. The way > evidence is spread
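
The {w+, w} bookkeeping referred to here follows the standard NARS definitions: frequency f = w+/w and confidence c = w/(w + k), with k the evidential horizon (commonly k = 1). A minimal Python sketch of that mapping, with illustrative helper names not taken from the thread:

```python
# Sketch of the standard NARS mapping between evidence counts {w+, w}
# and a <frequency; confidence> truth value, with evidential horizon k.

def nars_truth_from_evidence(w_plus: float, w: float, k: float = 1.0):
    """Return (frequency, confidence) for w_plus positive pieces of evidence out of w total."""
    frequency = w_plus / w if w > 0 else 0.5   # f = w+ / w (0.5 as a neutral default)
    confidence = w / (w + k)                   # c = w / (w + k)
    return frequency, confidence

def nars_evidence_from_truth(frequency: float, confidence: float, k: float = 1.0):
    """Invert the mapping: recover {w+, w} from <f; c>."""
    w = k * confidence / (1.0 - confidence)    # from c = w / (w + k)
    return frequency * w, w                    # w+ = f * w

# Example: 9 positive observations out of 9 total give <1.0; 0.9> when k = 1,
# matching the <1.0;0.9> input judgments used later in these threads.
print(nars_truth_from_evidence(9, 9))          # (1.0, 0.9)
```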

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 6:24 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Sat, 9/20/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > >If formal reasoning were a solved problem in AI, then we would have > theorem-provers that could prove deep, complex theorems unassis

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
> Beside the problem you mentioned, there are other issues. Let me start > at the basic ones: > > (1) In probability theory, an event E has a constant probability P(E) > (which can be unknown). Given the assumption of insufficient knowledge > and resources, in NARS P(A-->B) would change over time,

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
> > > >To pursue an overused metaphor, to me that's sort of like trying to > understand flight by carefully studying the most effective high-jumpers. > OK, you might learn something, but you're not getting at the crux of the > problem... > > A more appropriate metaphor is that text compression is t

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
> > (2) For the same reason, in NARS a statement might get different > > "probability" attached, when derived from different evidence. > > Probability theory does not have a general rule to handle > > inconsistency within a probability distribution. > > The same statement holds for PLN, right? PL

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
> > > Think about a concrete example: if from one source the system gets > P(A-->B) = 0.9, and P(P(A-->B) = 0.9) = 0.5, while from another source > P(A-->B) = 0.2, and P(P(A-->B) = 0.2) = 0.7, then what will be the > conclusion when the two sources are considered together? There are many approac
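
One standard answer to the question posed here is NARS's own revision rule, which pools the evidence behind the two sources. The sketch below is illustrative only, and reads the quoted second-order probabilities as NARS confidences purely as a stated assumption:

```python
# Illustrative sketch (not from the thread): combining the two sources with the
# NARS revision rule.  Treating the quoted meta-probabilities (0.5 and 0.7) as
# NARS confidences is an assumption made here purely for illustration.

K = 1.0  # evidential horizon

def revise(f1, c1, f2, c2, k=K):
    """Pool two <frequency; confidence> judgments by adding the evidence behind them."""
    w1 = k * c1 / (1.0 - c1)        # total evidence behind source 1
    w2 = k * c2 / (1.0 - c2)        # total evidence behind source 2
    w = w1 + w2
    f = (f1 * w1 + f2 * w2) / w     # evidence-weighted frequency
    c = w / (w + k)
    return f, c

# The example from the post: (0.9, 0.5) from one source, (0.2, 0.7) from the other.
print(revise(0.9, 0.5, 0.2, 0.7))   # roughly (0.41, 0.77)
```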

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
> > (And can you provide an example of a single surprising metaphor or analogy > that have ever been derived logically? Jiri said he could - but didn't.) It's a bad question -- one could derive surprising metaphors or analogies by random search, and that wouldn't prove anything useful about the

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
> principle of maximum/optimum entropy. They usually require much more > information (or assumptions) than what is given in the following > example. > > I'd be interested to know what the solution they will suggest for such > a situation. > > Pei

Re: [agi] NARS probability

2008-09-20 Thread Ben Goertzel
On Sat, Sep 20, 2008 at 10:32 PM, Pei Wang <[EMAIL PROTECTED]> wrote: > I found the paper. > > As I guessed, their update operator is defined on the whole > probability distribution function, rather than on a single probability > value of an event. I don't think it is practical for AGI --- we cann

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
on > ILLOGICALITY. Take the classic Jewish joke about the woman who, told that > her friend's son has the psychological problem of an Oedipus Complex, says: > "Oedipus Schmoedipus, what does it matter as long as he loves his mother?" > And your logical explanation is..?

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
and not to forget... SATAN GUIDES US TELEPATHICLY THROUGH RECTAL THERMOMETERS. WHY DO YOU THINK ABOUT META-REASONING? On Sat, Sep 20, 2008 at 11:38 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > Mike, > > I understand that "my task" is to create an AGI sys

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-20 Thread Ben Goertzel
g >> this matter urgently, not evading it.. >> >> P.P.S. You should also bear in mind that a vast amount of jokes (which >> involve the surprising crossing of domains) explicitly depend on >> ILLOGICALITY. Take the classic Jewish joke about the woman who, told that >&

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
> Now if you want to compare gzip, a chimpanzee, and a 2 year old child using > language prediction as your IQ test, then I would say that gzip falls in the > middle. A chimpanzee has no language model, so it is lowest. A 2 year old > child can identify word boundaries in continuous speech, can sem

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
> > > I'm not building AGI. (That is a $1 quadrillion problem). I'm studying > algorithms for learning language. Text compression is a useful tool for > measuring progress (although not for vision). OK, but the focus of this list is supposed to be AGI, right ... so I suppose I should be forgiven

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Ben Goertzel

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Ben Goertzel
yes, but your cost estimate is based on some very odd and specialized assumptions regarding AGI architecture!!! On Sun, Sep 21, 2008 at 8:12 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > >That seems a

Re: The brain does not implement formal logic (was Re: [agi] Where the Future of AGI Lies)

2008-09-21 Thread Ben Goertzel
On Sun, Sep 21, 2008 at 8:08 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > >Text compression is IMHO a terrible way of measuring incremental progress > toward AGI. Of course it may be very valuable for other

Re: [agi] re: NARS probability

2008-09-21 Thread Ben Goertzel
S. > > --Abram

Re: [agi] Re: AGI for a quadrillion

2008-09-21 Thread Ben Goertzel
alds' are refit ... ;-) ben g On Sun, Sep 21, 2008 at 9:54 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Sun, 9/21/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > >yes, but your cost estimate is based on some very odd and specialized > assumptions regarding

Re: [agi] re: NARS probability

2008-09-21 Thread Ben Goertzel
On Sun, Sep 21, 2008 at 10:43 PM, Abram Demski <[EMAIL PROTECTED]>wrote: > The calculation in which I sum up a bunch of pairs is equivalent to > doing NARS induction + abduction with a final big revision at the end > to combine all the accumulated evidence. But, like I said, I need to > provide a

Re: [agi] re: NARS probability

2008-09-22 Thread Ben Goertzel
e other. So, it has a 'par' > value just like inheritance statements do. If there was evidence for a > low par, there would be an effect in the direction you want. (It might > be way too small, though?) > > --Abram > > On Sun, Sep 21, 2008 at 10:46 PM, Ben Goertz

Re: [agi] re: NARS probability

2008-09-22 Thread Ben Goertzel
22, 2008 at 12:18 PM, Abram Demski <[EMAIL PROTECTED]>wrote: > Sure, but it is a consistent extension; {A}-statements have a strongly > NARS-like semantics, so we know they won't just mess everything up. > > On Mon, Sep 22, 2008 at 11:31 AM, Ben Goertzel <[EMAIL PROTECTED]&

Re: [agi] re: NARS probability

2008-09-22 Thread Ben Goertzel
> > -Abram > > On Mon, Sep 22, 2008 at 1:28 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > > The {A} statements are consistent with NARS, but the existing NARS > inference > > rules don't use these statements... > > > > A related train of th

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-22 Thread Ben Goertzel
Hi Pei, > Assuming 4 input judgments, with the same default confidence value (0.9): > > (1) {Ben} --> AGI-author <1.0;0.9> > (2) {dude-101} --> AGI-author <1.0;0.9> > (3) {Ben} --> odd-people <1.0;0.9> > (4) {dude-102} --> odd-people <1.0;0.9> > > From (1) and (2), by abduction, NARS derives (5)

[agi] Intelligence testing for AGI systems aimed at human-level, roughly human-like AGI

2008-09-22 Thread Ben Goertzel
See http://goertzel.org/agiq.pdf for an essay I just wrote on this topic... Comments actively solicited!! ben g

Re: [agi] uncertain logic criteria

2008-09-23 Thread Ben Goertzel
inference. Guesses, systematically managed, may help on the way from definite premises to definite conclusions... ben g On Tue, Sep 23, 2008 at 3:31 AM, YKY (Yan King Yin) < [EMAIL PROTECTED]> wrote: > On Thu, Sep 18, 2008 at 3:06 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > >

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-23 Thread Ben Goertzel

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-23 Thread Ben Goertzel
> > Yes. One of my biggest practical complaints with NARS is that the > induction > > and abduction truth value formulas don't make that much sense to me. > > I guess since you are trained as a mathematician, your "sense" has > been formalized by probability theory to some extent. ;-) > Actually,

Re: [agi] Call yourself mathematicians? [O/T]

2008-09-23 Thread Ben Goertzel
." > > > > > --- > agi > Archives: https://www.listbox.com/member/archive/303/=now > RSS Feed: https://www.listbox.com/member/archive/rss/303/ > Modify Your Subscription: > https://www.listbox.com/member/?&; > Powered by L

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-23 Thread Ben Goertzel
> > PLN needs to make assumptions about node probability in this case; but > NARS > > also makes assumptions, it's just that NARS's assumptions are more deeply > > hidden in the formalism... > > If you means assumptions like "insufficient knowledge and resources", > you are right, but that is not a

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-23 Thread Ben Goertzel
On Tue, Sep 23, 2008 at 9:28 PM, Pei Wang <[EMAIL PROTECTED]> wrote: > On Tue, Sep 23, 2008 at 7:26 PM, Abram Demski <[EMAIL PROTECTED]> > wrote: > > Wow! I did not mean to stir up such an argument between you two!! > > Abram: This argument has been going on for about 10 years, with some > "on" pe

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-23 Thread Ben Goertzel
> I think it's mathematically and conceptually clear that for a system with > unbounded > resources probability theory is the right way to reason. However if you > look > at Cox's axioms > > http://en.wikipedia.org/wiki/Cox%27s_theorem > > you'll see that the third one (consistency) cannot reason

Re: [agi] Call yourself mathematicians? [O/T]

2008-09-23 Thread Ben Goertzel

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Ben Goertzel
> > > > > I mean assumptions like "symmetric treatment of intension and extension", > > which are technical mathematical assumptions... > > But they are still not assumptions about domain knowledge, like node > probability. > Well, in PLN the balance between intensional and extensional knowledge

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Ben Goertzel
On Wed, Sep 24, 2008 at 11:43 AM, Pei Wang <[EMAIL PROTECTED]> wrote: > The distinction between object-level and meta-level knowledge is very > clear in NARS, though I won't push this issue any further. yes, but some of the things you push into the meta-level knowledge in NARS, seem more like th

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Ben Goertzel
> > >> I guess my previous question was not clear enough: if the only domain > >> knowledge PLN has is > >> > >> > Ben is an author of a book on AGI > >> > This dude is an author of a book on AGI > >> > >> and > >> > >> > Ben is odd > >> > This dude is odd > >> > >> Will the system derives anyt

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Ben Goertzel
OK, we're done with AGI, time to move on to discussion of psychic powers 8-D On Wed, Sep 24, 2008 at 12:17 PM, Pei Wang <[EMAIL PROTECTED]> wrote: > Thanks for the detailed answer. Now I'm happy, and we can turn to > something else. ;-) > > Pei > > On Wed, Sep

Re: [agi] NARS vs. PLN [Was: NARS probability]

2008-09-24 Thread Ben Goertzel
> > If we have > > Ben ==> AGI-author > Dude ==> AGI-author > |- > Dude ==> Ben > > the PLN abduction rule would yield > > s3 = s1 s2 + w (1-s1)(1-s2) > But ... before we move on to psychic powers, let me note that this PLN abduction strength rule (simplified for the case of equal node proba
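
The quoted strength rule is easy to evaluate directly. The sketch below simply computes it as written in the post, s3 = s1*s2 + w*(1-s1)*(1-s2), leaving the weight w as a free parameter since the post only notes that it comes from the (assumed equal) node probabilities:

```python
# Sketch of the PLN abduction strength rule exactly as quoted above:
#   s3 = s1*s2 + w*(1 - s1)*(1 - s2)
# s1, s2 are the strengths of the two premises (Ben ==> AGI-author,
# Dude ==> AGI-author); s3 is the derived strength of Dude ==> Ben.
# The weight w is treated as a parameter here; per the post it depends on
# the (assumed equal) node probabilities.

def pln_abduction_strength(s1: float, s2: float, w: float) -> float:
    return s1 * s2 + w * (1.0 - s1) * (1.0 - s2)

# With the thread's two fully confident premises (s1 = s2 = 1.0) the second
# term vanishes and the conclusion gets strength 1.0 regardless of w.
print(pln_abduction_strength(1.0, 1.0, w=0.5))   # 1.0
```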

Re: [agi] universal logical form for natural language

2008-09-27 Thread Ben Goertzel
> > IMO Cyc's problem is due to: > 1. the lack of a well-developed probabilistic/fuzzy logic (thus > brittleness) Cyc has local Bayes nets within their knowledge base... > > 2. the emphasis on ontology (plain facts) rather than "production rules" > While I agree that formulating knowledge

Re: [agi] universal logical form for natural language

2008-09-27 Thread Ben Goertzel
YKY, > > Example of a commonsense fact: "apples are red" > > Example of a commonsense rule: "if X is female X has an above-average > chance of having long hair" > > Cyc already has loads of these rules. If you have a problem with Cyc's format, I **strongly** suggest that you first play around

Re: [agi] universal logical form for natural language

2008-09-28 Thread Ben Goertzel
sible, hierarchical connectionist models, although they lacked > the computing power to implement them. > > -- Matt Mahoney, [EMAIL PROTECTED]

Re: [agi] universal logical form for natural language

2008-09-28 Thread Ben Goertzel
On Sun, Sep 28, 2008 at 3:09 AM, YKY (Yan King Yin) < [EMAIL PROTECTED]> wrote: > On Sun, Sep 28, 2008 at 1:52 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > Cyc already has loads of these rules. > > I wasn't aware of those, but I'll check it out. My

Re: [agi] universal logical form for natural language

2008-09-28 Thread Ben Goertzel
g out that they *are* explicitly devoting a lot of resources to the problem ... ben g On Sun, Sep 28, 2008 at 9:38 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Sun, 9/28/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > >FYI, Cyc has a natural language front end

Re: [agi] universal logical form for natural language

2008-09-28 Thread Ben Goertzel
On Sun, Sep 28, 2008 at 10:00 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Sun, 9/28/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > >Yes, the big weakness of the whole Cyc framework is learning. Their logic > engine seems to be pretty poor at incremental, e

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
On Mon, Sep 29, 2008 at 4:23 AM, YKY (Yan King Yin) < [EMAIL PROTECTED]> wrote: > On Mon, Sep 29, 2008 at 4:10 AM, Abram Demski <[EMAIL PROTECTED]> > wrote: > > > > How much will you focus on natural language? It sounds like you want > > that to be fairly minimal at first. My opinion is that chatb

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
rse context after spreading activation has quiesced. > > -Steve > > Stephen L. Reed > Artificial Intelligence Researcher > http://texai.org/blog > http://texai.org > 3008 Oak Crest Ave. > Austin, Texas, USA 78704 > 512.791.7860 > > - Original Message > From:

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
On Mon, Sep 29, 2008 at 6:28 PM, Lukasz Stafiniak <[EMAIL PROTECTED]>wrote: > On Mon, Sep 29, 2008 at 11:33 PM, Eric Burton <[EMAIL PROTECTED]> wrote: > > > > It uses something called MontyLingua. Does anyone know anything about > > this? There's a site at > > http://web.media.mit.edu/~hugo/monty

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
On Mon, Sep 29, 2008 at 6:03 PM, YKY (Yan King Yin) < [EMAIL PROTECTED]> wrote: > On Mon, Sep 29, 2008 at 9:18 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > Parsing English sentences into sets of formal-logic relationships is not > > extremely hard given curre

Re: [agi] Dangerous Knowledge

2008-09-29 Thread Ben Goertzel
> > > I mean that a more productive approach would be to try to understand why > the problem is so hard. IMO Richard Loosemore is half-right ... the reason AGI is so hard has to do with Santa Fe Institute style complexity ... Intelligence is not fundamentally grounded in any particular mechanis

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
>> Dynamics (CIMDynamics) requires a MindOntology page explaining it >>> conceptually, in addition to the existing nuts-and-bolts entry in the >>> OpenCogPrime section. >>> >>> -dave

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
> > Cognitive linguistics also lacks a true developmental model of language > acquisition that goes beyond the first few years of life, and can embrace > all those several - and, I'm quite sure, absolutely necessary - stages of > mastering language and building a world picture. > Tomasello's theo

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
d >> >> Texai considers all interpretations simultaneously, in a transient >> spreading activation network whose nodes are the semantic propositions >> contained within the elaborated discourse context and whose links are formed >> when propositions share an argument

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel

Re: [agi] universal logical form for natural language

2008-09-29 Thread Ben Goertzel
> My guess is that Schank and AI generally start from a technological POV, > conceiving of *particular* approaches to texts that they can implement, > rather than first attempting a *general* overview. I can't speak for Schank, who was however working a long time ago when cognitive science was l

Re: [agi] universal logical form for natural language

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 2:58 AM, YKY (Yan King Yin) < [EMAIL PROTECTED]> wrote: > On Tue, Sep 30, 2008 at 6:43 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > >> We are talking about 2 things: > >> 1. Using an "ad hoc" parser to transl

Re: [agi] universal logical form for natural language

2008-09-30 Thread Ben Goertzel
Markov chains are one way of doing the math for spreading activation, but e.g. neural nets are another... On Tue, Sep 30, 2008 at 1:23 AM, Linas Vepstas <[EMAIL PROTECTED]>wrote: > 2008/9/29 Ben Goertzel <[EMAIL PROTECTED]>: > > > > Stephen, > > > >
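
To make the Markov-chain reading concrete: if the link weights out of each node are normalized, one step of spreading activation is just multiplication by a transition matrix plus a decay factor. A toy sketch, with node names and weights invented for illustration:

```python
# Toy sketch (nodes and weights are invented): spreading activation where each
# step pushes activation along row-normalized link weights -- i.e. the
# transition probabilities of a Markov chain -- with a decay factor.

def spread_activation(links, activation, decay=0.8, steps=5):
    """links: {node: {neighbor: weight}}; activation: {node: initial level}."""
    for _ in range(steps):
        new_act = {node: 0.0 for node in activation}
        for node, out_links in links.items():
            total = sum(out_links.values())
            if total == 0:
                continue
            for neighbor, weight in out_links.items():
                # normalized weight acts as a transition probability
                new_act[neighbor] += decay * activation.get(node, 0.0) * (weight / total)
        activation = new_act
    return activation

graph = {"apple": {"red": 1.0, "fruit": 2.0},
         "fruit": {"apple": 1.0},
         "red": {}}
print(spread_activation(graph, {"apple": 1.0, "fruit": 0.0, "red": 0.0}))
```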

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
asn't been publicized yet ... but it does already address this particular issue...) ben On Tue, Sep 30, 2008 at 12:23 PM, Terren Suydam <[EMAIL PROTECTED]> wrote: > > Hi Ben, > > If Richard Loosemore is half-right, how is he half-wrong? > > Terren > > --- On *Mon, 9/2

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 12:45 PM, Mike Tintner <[EMAIL PROTECTED]>wrote: > Ben: the reason AGI is so hard has to do with Santa Fe Institute style > complexity ... > > Intelligence is not fundamentally grounded in any particular mechanism but > rather in emergent structures > and dynamics that ari

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 2:08 PM, Jim Bromer <[EMAIL PROTECTED]> wrote: > From: "Ben Goertzel" <[EMAIL PROTECTED]> > To give a brief answer to one of your questions: analogy is > mathematically a matter of finding mappings that match certain > constraints. T

Re: [agi] universal logical form for natural language

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 2:43 PM, Lukasz Stafiniak <[EMAIL PROTECTED]>wrote: > On Tue, Sep 30, 2008 at 3:38 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > > Markov chains are one way of doing the math for spreading activation, but > > e.g. > > neural ne

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
> And if you look at your "brief answer" para, you will find that while you > talk of mappings and constraints, (which are not necessarily AGI at all), > you make no mention in any form of how complexity applies to the crossing of > hitherto unconnected "domains" [or matrices, frames etc], which, o

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
you to specify as much "appropriate data" as you like > - any data, of course, *currently* available).

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
On Tue, Sep 30, 2008 at 4:18 PM, Mike Tintner <[EMAIL PROTECTED]>wrote: > Ben, > > Well, funny perhaps to some. But nothing to do with AGI - which has > nothing to with "well-defined problems." > > I wonder if you are misunderstanding his use of terminology. How about the problem of gathering

[agi] This NSF solicitation might be interesting to some of you...

2008-09-30 Thread Ben Goertzel
Encouraging Submission of Proposals involving Complexity and Interacting Systems to Programs in the Social, Behavioral and Economic Sciences : http://www.nsf.gov/pubs/2008/nsf08014/nsf08014.jsp

Re: [agi] Dangerous Knowledge

2008-09-30 Thread Ben Goertzel
> You have already provided one very suitable example of a general AGI > problem - how is your pet having learnt one domain - to play "fetch", - to > use that knowledge to cross into another domain - to learn/discover the > game of "hide-and-seek."? But I have repeatedly asked you to give me you

[agi] OpenCogPrime for Dummies [NOT]

2008-09-30 Thread Ben Goertzel
mies" ... I just don't have time to write it right now I'm more motivated personally to spend time writing new technical stuff than writing better expositions of stuff I already wrote down ;-) ben g -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Resea

Re: [agi] Dangerous Knowledge

2008-10-01 Thread Ben Goertzel
I was saying that most > people don't have any idea what I mean when I talk about things like > interrelated ideological structures in an ambiguous environment, and > that this issue was central to the contemporary problem, Maybe the reason people don't know what you mean, is that your manner

Re: [agi] universal logical form for natural language

2008-10-01 Thread Ben Goertzel
> > > No, the mainstream method of extracting knowledge from text (other than > manually) is to ignore word order. In artificial languages, you have to > parse a sentence before you can understand it. In natural language, you have > to understand the sentence before you can parse it. More exactl

Re: [agi] OpenCogPrime for Dummies [NOT]

2008-10-01 Thread Ben Goertzel
On Wed, Oct 1, 2008 at 2:07 PM, Steve Richfield <[EMAIL PROTECTED]>wrote: > Ben, > > I have been eagerly awaiting such a document. However, the Grand Technical > Guru (i.e. you) is usually NOT the person to write such a thing. Usually, an > associate, user, author, or some such person who is on th

Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Ben Goertzel
sually recognise the age of a person I'd tell them > > that they're > > probably wasting their time, and that indicators other than > > visual > > ones would be more likely to give a reliable result.

Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Ben Goertzel
On Thu, Oct 2, 2008 at 2:02 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Thu, 10/2/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > >I hope not to sound like a broken record here ... but ... not every > >narrow AI advance is actually a step toward AGI ... &

Re: [agi] Let's face it, this is just dumb.

2008-10-02 Thread Ben Goertzel
lex organizations quickly. > > -- Matt Mahoney, [EMAIL PROTECTED] > > --- On *Thu, 10/2/08, Ben Goertzel <[EMAIL PROTECTED]>* wrote: > > From: Ben Goertzel <[EMAIL PROTECTED]> > Subject: Re: [agi] Let's face it, this is just dumb. > To: agi@v2.listbox.com > Dat

Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-03 Thread Ben Goertzel
Hi, > CMR (my proposal) has no centralized control (global brain). It is a > competitive market in which information has negative value. The environment > is a peer-to-peer network where peers receive messages in natural language, > cache a copy, and route them to appropriate experts based on con

Re: [agi] Testing, and a question....

2008-10-03 Thread Ben Goertzel
3 informally when I am > there if there's any interest so...which of these (any) is of > interest?...I'm not sure of the kinds of things you folk want to hear about. > All comments are appreciated. > > regards to all, > > Colin Hales

Re: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Ben Goertzel

Re: Risks of competitive message routing (was Re: [agi] Let's face it, this is just dumb.)

2008-10-04 Thread Ben Goertzel
On Fri, Oct 3, 2008 at 9:57 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: > --- On Fri, 10/3/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > You seem to misunderstand the notion of a Global Brain, see > > > > http://pespmc1.vub.ac.be/GBRAIFAQ.html > > > &

Re: [agi] COMP = false

2008-10-04 Thread Ben Goertzel
the eventual >> knowledge of the creature...they have already failed. I don't know >> whether the community has internalised this yet. >> >> Colin, >> >> I'm sure Ben is right, but I'd be interested to hear the essence of your >> empirical refu

Re: [agi] COMP = false

2008-10-04 Thread Ben Goertzel
On Sat, Oct 4, 2008 at 8:37 PM, Mike Tintner <[EMAIL PROTECTED]>wrote: > Matt:The problem you describe is to reconstruct this image given the highly > filtered and compressed signals that make it through your visual perceptual > system, like when an artist paints a scene from memory. Are you sayin

Re: [agi] COMP = false

2008-10-04 Thread Ben Goertzel

Re: [agi] OpenCogPrime for Dummies [NOT]

2008-10-05 Thread Ben Goertzel
ting > rather than the back and forth banter on this forum. > > With luck, it would help wring your ideas out and disarm your detractors, > and provide more than a mere writeup - a piece to help sell your concept on > a wider scale. > > Steve Richfield

Re: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Ben Goertzel
> > 3. I think it is extremely important, that we give an AGI no bias about > space and time as we seem to have. Our intuitive understanding of space and > time is useful for our life on earth but it is completely wrong as we know > from theory of relativity and quantum physics. > > -Matthias Heger

Re: [agi] OpenCogPrime for Dummies [NOT]

2008-10-05 Thread Ben Goertzel
> > > On 10/5/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > >> >> Hmmm ... I doubt that a quick and dirty nontechnical >> > > I would think that it should be technical, e.g. targeted for someone with a > CS degree, but written as though the reader had

Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
rk, > 2006 > > I am realising that I may have a contribution to make to AGI by helping > strengthen its science base. I've run out of Sunday, so I'd like to leave > the discussion there... to be continued sometime. > > Meanwhile I'd encourage everyone to get used to

Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
Abram, thx for restating his argument > > Your argument appears to assume computationalism. Here is a numbered > restatement: > > 1. We have a visual experience of the world. > 2. Science says that the information from the retina is insufficient > to compute one. I do not understand his argume

Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
On Sun, Oct 5, 2008 at 7:41 PM, Abram Demski <[EMAIL PROTECTED]> wrote: > Ben, > > I have heard the argument for point 2 before, in the book by Pinker, > "How the Mind Works". It is the inverse-optics problem: physics can > predict what image will be formed on the retina from material > arrangemen

Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
On Sun, Oct 5, 2008 at 7:59 PM, Abram Demski <[EMAIL PROTECTED]> wrote: > Agreed. Colin would need to show the inadequacy of both inborn and > learned bias to show the need for extra input. But I think the more > essential objection is that extra input is still consistent with > computationalism.

Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
cool ... if so, I'd be curious for the references... I'm not totally up on that area... ben On Sun, Oct 5, 2008 at 8:20 PM, Trent Waddington <[EMAIL PROTECTED] > wrote: > On Mon, Oct 6, 2008 at 10:03 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > Arguably, f

Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
current >> issue. >> >> --Abram

Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel

Re: [agi] COMP = false

2008-10-05 Thread Ben Goertzel
On Sun, Oct 5, 2008 at 11:16 PM, Abram Demski <[EMAIL PROTECTED]> wrote: > Ben, > > I think the entanglement possibility is precisely what Colin believes. > That is speculation on my part of course. But it is something like > that. Also, it is possible that quantum computers can do more than > nor
