Re: [agi] The Necessity of Embodiment

2008-08-30 Thread Ben Goertzel
About Friendly AI.. > > Let me put it this way: I would think anyone in a position to offer funding > for this kind of work would require good answers to the above. > > Terren My view is a little different. I think these answers are going to come out of a combination of theoretical advances w

Fwd: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Ben Goertzel
*** So it could be a specific set of states? To specify long term growth as a goal, wouldn't you need to be able to do an abstract evaluation of how the state *changes* rather than just the current state? *** yes, and of course a GroundedPredicateNode could do that too ... the system can recall i
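The idea in this exchange -- a goal predicate that evaluates how the state *changes* rather than the current state alone -- can be sketched as follows. This is a hypothetical illustration, not OpenCog's actual GroundedPredicateNode API; the growth metric and names are invented for the example.

```python
# Hypothetical sketch of a "long-term growth" goal predicate: it scores a
# *transition* between states, not a single state. States are modeled here
# as sets of capabilities; the metric is the fraction of new capabilities
# relative to the union of both states. All names are illustrative.

def growth_predicate(previous_state, current_state):
    """Return a truth value in [0, 1] reflecting growth between two states."""
    gained = current_state - previous_state
    union_size = len(previous_state | current_state)
    return len(gained) / max(union_size, 1)  # guard against empty states

before = {"walk", "fetch"}
after_ = {"walk", "fetch", "count", "speak"}
print(growth_predicate(before, after_))  # 0.5
```

The point of the sketch is simply that such a predicate needs (at least) a remembered prior state as input, which is why the thread notes the system must "recall" earlier states.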

Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Ben Goertzel
> > > Have you implemented a long term growth goal atom yet? Nope, right now we're just playing with virtual puppies, who aren't really explicitly concerned with long-term growth (plus of course various narrow-AI-ish applications of OpenCog components...) > Don't they have > to specify a speci

Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Ben Goertzel
> > > Isn't it an evolutionarily stable strategy for the modification system > module to change to a state where it does not change itself? Not if the top-level goals are weighted toward long-term growth > Let me > give you a just so story and you can tell me whether you think it > likely. I'd

Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Ben Goertzel
On Sat, Aug 30, 2008 at 10:06 AM, William Pearson <[EMAIL PROTECTED]>wrote: > 2008/8/29 Ben Goertzel <[EMAIL PROTECTED]>: > > > > About recursive self-improvement ... yes, I have thought a lot about it, > but > > don't have time to write a huge discours

Re: [agi] Re: Goedel machines ..PS

2008-08-30 Thread Ben Goertzel

Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Ben Goertzel
On Fri, Aug 29, 2008 at 6:53 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: Ben, It looks like what you've thought about is aspects of the information processing side of RSI but not the knowledge side. IOW you have thought about the technical side but not about how you progress from one domain of

Re: [agi] AGI-09 - Preliminary Call for Papers

2008-08-29 Thread Ben Goertzel

Re: [agi] Re: Goedel machines ..PS

2008-08-29 Thread Ben Goertzel
...and interdependence of all systems.

[agi] Talk on OpenCogPrime in the San Fran area

2008-08-28 Thread Ben Goertzel
All are welcome... -- Forwarded message -- From: Monica <[EMAIL PROTECTED]> Date: Thu, Aug 28, 2008 at 9:51 PM Subject: [ai-94] New Extraordinary Meetup: Ben Goertzel, Novamente To: [EMAIL PROTECTED] Announcing a new Meetup for Bay Area Artificial Intelligence Meetup

Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Ben Goertzel
hat value growth and spontaneity ... including growth of their goal systems in unpredictable, adaptive ways " -- Ben G On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > About play... I would argue that it emerges in any sufficiently > generally

Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Ben Goertzel
individual level. ben g On Tue, Aug 26, 2008 at 9:49 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > Examples of the kind of similarity I'm thinking of: > > -- The analogy btw chess or go and military strategy > > -- The analogy btw "roughhousing" and a

Re: [agi] How Would You Design a Play Machine?

2008-08-27 Thread Ben Goertzel
> > > If I do my job right, my AGI will have no "sense of self." I have doubts that is possible, though I'm sure you can make an AGI with a very different "sense of self" than any human has. My reasoning: 1) To get to a high level of intelligence likely requires some serious self-analysis and

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Ben Goertzel
...maybe you could give an example of what you mean by similarity

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Ben Goertzel

Re: [agi] How Would You Design a Play Machine?

2008-08-26 Thread Ben Goertzel
Note that in this view play has nothing to do with having a body. An AGi concerned solely with mathematical theorem proving would also be able to play... On Tue, Aug 26, 2008 at 9:07 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > About play... I would argue that it emerges in any

Re: [agi] The Necessity of Embodiment

2008-08-14 Thread Ben Goertzel
...lot to discuss here - it hasn't all been covered.

Re: [agi] The Necessity of Embodiment

2008-08-14 Thread Ben Goertzel
On Thu, Aug 14, 2008 at 6:59 AM, Mike Tintner <[EMAIL PROTECTED]>wrote: > Jim:I know that > there are no solid reasons to believe that some kind of embodiment is > absolutely necessary for the advancement of agi. > > I want to concentrate on one dimension of this: precisely the "solid" > dimension

Re: [agi] PLN and Bayes net comparison

2008-08-13 Thread Ben Goertzel
...extension distinction is getting swept behind the scenes here, into the definition of InheritanceLink... ben On Wed, Aug 13, 2008 at 8:13 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > YKY asked: > > >> I'm interested in how the rules are "fetche

Re: [agi] PLN and Bayes net comparison

2008-08-13 Thread Ben Goertzel
YKY asked: > I'm interested in how the rules are "fetched" from memory, and how the > variables get instantiated, etc... > > How would you represent the given facts: >"John is male" > "John is unmarried" > and then perform the inference to get > "John is a bachelor"? > > Sorry if
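The bachelor inference asked about above can be sketched as a tiny forward-chaining step over probabilistic facts. This is an illustrative toy, not PLN itself: the fact representation, the rule, and the multiplicative combination of strengths (a naive independence assumption) are all invented for the example.

```python
# Hypothetical minimal sketch of the "John is a bachelor" inference.
# Facts are (predicate, argument) pairs mapped to a probability-like
# strength; the rule combines "male" and "unmarried" by multiplying
# strengths -- a naive independence assumption, NOT PLN's actual math.

facts = {
    ("male", "John"): 1.0,
    ("unmarried", "John"): 1.0,
}

def infer_bachelor(facts, who):
    """If X is male and X is unmarried, conclude X is a bachelor."""
    p_male = facts.get(("male", who), 0.0)
    p_unmarried = facts.get(("unmarried", who), 0.0)
    return p_male * p_unmarried

facts[("bachelor", "John")] = infer_bachelor(facts, "John")
print(facts[("bachelor", "John")])  # 1.0
```

A real system would additionally track how the rule itself is fetched and how variables are bound, which is exactly what the quoted question is probing.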

Re: [agi] PLN and Bayes net comparison

2008-08-12 Thread Ben Goertzel

Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Ben Goertzel
brain-based AGI system could of course be built within the OpenCog Framework, which would be good fun... ben G On 8/10/08, John LaMuth <[EMAIL PROTECTED]> wrote: > > - Original Message - > > *From:* Ben Goertzel <[EMAIL PROTECTED]> > *To:* agi@v2.listbox.com > *

Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Ben Goertzel
> Or even simpler problems, like : how were you to handle the angry Richard > recently? Your response, and I quote: "Aaargh!" (as in "how on earth do I > calculate my probabilities and Bayes?" and "which school of psychological > thought is relevant here?") Now you're talking AGI. There is no ratio

Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Ben Goertzel
On Sun, Aug 10, 2008 at 5:52 PM, Mike Tintner <[EMAIL PROTECTED]>wrote: > Will, > > Maybe I should have explained the distinction more fully. A totalitarian > system is one with an integrated system of decisionmaking, and unified > goals. A "democratic", "conflict system is one that takes decision

Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Ben Goertzel
...ised and unhappy when it happens. > Funding and support questions and all. > andi

Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Ben Goertzel
> > And I've said it before, but it bears repeating in this context. Real > intelligence requires that mistakes be made. And that's at odds with > regular programming, because you are trying to write programs that don't > make mistakes, so I have to wonder how serious people really would be > abo

Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Ben Goertzel

Re: [agi] The Necessity of Embodiment

2008-08-10 Thread Ben Goertzel
On Sun, Aug 10, 2008 at 9:02 AM, Mike Tintner <[EMAIL PROTECTED]>wrote: > Ben:but, from a practical perspective, it seems more useful to think > about minds that are rougly similar to human minds, yet better adapted to > existing computer hardware, and lacking humans' most severe ethical and > mo

[agi] Announcement: OpenCogPrime Tutorial Chat Sessions, Sep 08 - Jan 09

2008-08-09 Thread Ben Goertzel
fact this would be quite desirable. -- Ben G -- Ben Goertzel, PhD CEO, Novamente LLC and Biomind LLC Director of Research, SIAI [EMAIL PROTECTED] "Nothing will ever be attempted if all possible objections must be first overcome " - Dr Samuel Johnson ---

Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Ben Goertzel
...intelligence indeed.

Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Ben Goertzel

Re: [agi] brief post on possible path to agi

2008-08-09 Thread Ben Goertzel
a. But again, brains are not just soups of heterogeneous processes -- the right high-level cognitive architecture is required. -- Ben Goertzel ... novamente.net agiri.org singinst.org goertzel.org opencog.org On Sat, Aug 9, 2008 at 1:01 PM, rick the ponderer <[EMAIL PROTECTED]> wrote:

Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Ben Goertzel
On Sat, Aug 9, 2008 at 9:30 AM, Mike Tintner <[EMAIL PROTECTED]>wrote: > Ben, > > I clearly understood/understand this. My point is: are you guys' notions > of non-human intelligence anything more than sci-fi fantasy as opposed to > serious invention? To be the latter, you must have some half-co

Re: [agi] The Necessity of Embodiment

2008-08-09 Thread Ben Goertzel
On Sat, Aug 9, 2008 at 7:35 AM, Mike Tintner <[EMAIL PROTECTED]>wrote: > Brad: > Sigh. Your point of view is heavily biased by the unspoken assumption that > AGI > must be Turing-indistinguishable from humans. That it must be AGHI. > > Brad, > > Literally: "what on earth are you talking about?"

[agi] PLN and default inference

2008-08-08 Thread Ben Goertzel
much, but I thought a bit about how they might fit into the PLN framework ... thoughts are in attached document This is technical stuff and the attached doc is written for someone who knows both PLN and default/epistemic logic, so if you're baffled, no worries ;-) ben -- Ben Go

Re: [agi] The Necessity of Embodiment

2008-08-08 Thread Ben Goertzel

Re: [agi] Human experience

2008-08-08 Thread Ben Goertzel
>> >> -- I see that simple embodiment is not anywhere near enough >> to put human social contact into the reach of direct experience. >> Embodiment will help AGI understand "chair" and "table"; >> it will not help it understand vindictiveness, slander. >> > > True > Well, maybe I spoke too

Re: [agi] Human experience

2008-08-08 Thread Ben Goertzel
> > > -- I don't see what benefit embodiment brings to the creation > of an agi scientist/engineer, whereas reading is critical. >Mechanical awareness -- not so much -- AGI could have >"immediate" mechanical awareness of not just 3D, but also >4D, 5D, etc. spaces. I feel that simple

Re: [agi] The Necessity of Embodiment

2008-08-08 Thread Ben Goertzel

Re: [agi] Probabilistic Inductive Logic Programming and PLN

2008-08-07 Thread Ben Goertzel

Re: [agi] Probabilistic Inductive Logic Programming and PLN

2008-08-05 Thread Ben Goertzel
On Tue, Aug 5, 2008 at 7:45 AM, YKY (Yan King Yin) < [EMAIL PROTECTED]> wrote: > > On 8/5/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > Yes, but in PLN/ OpenCogPrime backward chaining *can* create hypothetical > logical relationships and then seek to estimate their tr

Re: [agi] aversion to philosophy

2008-08-05 Thread Ben Goertzel

Re: [agi] Groundless reasoning

2008-08-04 Thread Ben Goertzel
> When do you think Novamente will be ready to "go out" and effectively > learn from (/interract with) environments not fully controlled by the > dev team? > > I wish I could say tomorrow, but realistically it looks like it's gonna be 2009 ... hopefully earlier rather than later in the year but I'

Re: [agi] Groundless reasoning

2008-08-04 Thread Ben Goertzel

Re: [agi] Groundless reasoning

2008-08-04 Thread Ben Goertzel

Re: [agi] Probabilistic Inductive Logic Programming and PLN

2008-08-04 Thread Ben Goertzel
ata, and asked it to guess what > the next item in the series would be, what sort of process would it > employ? > > Thanks, > --Abram Demski > > On 8/4/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > On Mon, Aug 4, 2008 at 6:10 PM, YKY (Yan King Yin) < > > [EM

Re: [agi] Groundless reasoning

2008-08-04 Thread Ben Goertzel

Re: [agi] Probabilistic Inductive Logic Programming and PLN

2008-08-04 Thread Ben Goertzel
On Mon, Aug 4, 2008 at 6:10 PM, YKY (Yan King Yin) < [EMAIL PROTECTED]> wrote: > On 8/5/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > > As noted there, my impression is that PILP could be implemented within > OpenCog's PLN backward chainer (currently bei

[agi] Probabilistic Inductive Logic Programming and PLN

2008-08-04 Thread Ben Goertzel
OpenCog by Joel Pitt, from the Novamente internal codebase) via writing a special scoring function ... -- Ben G ---------- Forwarded message ---------- From: Ben Goertzel <[EMAIL PROTECTED]> Date: Fri, Jun 6, 2008 at 9:27 AM Subject: Re: logical implication (was: modus ponens) To: "Y

Re: [agi] META Killing threads that are implicitly critical of the list owner

2008-08-04 Thread Ben Goertzel

KILLTHREAD ... Re: Some statistics on Loosemore's rudeness [WAS Re: [agi] EVIDENCE RICHARD ...]

2008-08-04 Thread Ben Goertzel
riously analyzed every one of >>>> them. >>>> >>> This is off the cuff but, Richard, if 1/8 of your messages included >>> the word 'stupid', this could explain why the general consensus is >>> that you've been insulting people

Re: Some statistics on Loosemore's rudeness [WAS Re: [agi] EVIDENCE RICHARD ...]

2008-08-03 Thread Ben Goertzel
ething like "That is > just one example of how he pulls conclusions out of thin air. The first time > I read this paper I found the whole thing too ridiculous to read after the > first few times this happened", this behavior of mine is just as disgraceful > as comments directed str

Re: A Complexity Challange! [WAS Re: [agi] The exact locus of the supposed 'complexity']

2008-08-03 Thread Ben Goertzel
So there you go. As you say, the challenge is to do this and then give > reasons why the rules were picked, and also to do a comparison with choosing > rules at random. > > If people find it really too difficult to get the frequency ratio, I'd be > happy enough to see j

Re: [agi] Any further comments from lurkers??? [WAS do we need a stronger "politeness code" on this list?]

2008-08-03 Thread Ben Goertzel

Re: FORA versus EMAIL LISTS ... was [agi] META: do we need a stronger "politeness code" on this list?

2008-08-03 Thread Ben Goertzel
On Sun, Aug 3, 2008 at 11:05 AM, Derek Zahn <[EMAIL PROTECTED]> wrote: > I personally think that mailing lists were a decent medium for > conversations in 1990, but forums are much better -- easily available > historical context for a conversation, searchable topic history, and so on. > I think yo

Re: [agi] META: do we need a stronger "politeness code" on this list?

2008-08-03 Thread Ben Goertzel

[agi] Any further comments from lurkers??? [WAS do we need a stronger "politeness code" on this list?]

2008-08-03 Thread Ben Goertzel
pointers to some of the > theories that get repeated endlessly, together with encouragement to the > posters to just post the FAQ's URL rather than repeating the entire theory, > might reduce the repetition. (Wasn't there a wiki area exactly for that > started a while ago?) >

Re: [agi] Conway's Game of Life

2008-08-03 Thread Ben Goertzel
On Sun, Aug 3, 2008 at 7:21 AM, Mike Tintner <[EMAIL PROTECTED]>wrote: > BenI think that an engineering based approach will succeed first, just as > we succeeded in building airplanes first, rather than evolving a birdlike > flying machine out of a prebiotic molecular soup... > Ben, > > You've go

Re: [agi] Definition of Pattern?

2008-08-03 Thread Ben Goertzel

FORA versus EMAIL LISTS ... was [agi] META: do we need a stronger "politeness code" on this list?

2008-08-03 Thread Ben Goertzel
oid "me too" posts -- but for those who felt my last >> e-mail was too long, this is the essence of my argument (and very well >> expressed). >> >> - Original Message - From: "Vladimir Nesov" <[EMAIL PROTECTED]> >> To: >> Sent:

Re: [agi] META: do we need a stronger "politeness code" on this list?

2008-08-03 Thread Ben Goertzel
s -- but for those who felt my last > e-mail was too long, this is the essence of my argument (and very well > expressed). > > - Original Message - From: "Vladimir Nesov" <[EMAIL PROTECTED]> > To: > Sent: Sunday, August 03, 2008 8:25 AM > Subject: Re: [agi]

Re: [agi] META: do we need a stronger "politeness code" on this list?

2008-08-03 Thread Ben Goertzel
46 AM, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Ben Goertzel wrote: > >> >> I think Ed's email was a bit harsh, but not as harsh as many of Richard's >> (which are frequently full of language like "fools", "rubbish" and so forth

[agi] META: do we need a stronger "politeness code" on this list?

2008-08-02 Thread Ben Goertzel
he > started except to shout down Richard's criticisms. Personally, I have given > up on posting content to this list. Some moderation is strongly suggested. > If it includes banning me -- so be it. > > Mark > > - Original Message - > *From:* Ben Goertzel <[EMA

Re: [agi] Conway's Game of Life

2008-08-02 Thread Ben Goertzel
Hector you say In other words, there is nothing to do about AI or AGI but to look at the > systems we have already around. I do think that any of those simple systems > such as CA can achieve AGI of the kind we expect without having to do > anything else! From my point of view it is just a m

Re: [agi] EVIDENCE RICHARD DOES NOT UNDERSTAND COMPLEX SYSTEM ISSUES THAT WELL

2008-08-02 Thread Ben Goertzel
> > > I would be perfectly happy if you simply finished our discussions with a > statement that your scientific intuition tells you that the problem is not > as serious as I think. You have sometimes done this, and I have gracefully > subsided. OK. My scientific intuition tells me that the "com

Re: [agi] Conway's Game of Life

2008-08-02 Thread Ben Goertzel
I is a terribly similar problem to discovering the Game of Life... Good night... Ben On Sat, Aug 2, 2008 at 10:54 PM, Richard Loosemore <[EMAIL PROTECTED]>wrote: > Ben Goertzel wrote: > >> >> Well, there may have been a lot of trial and error in figuring out which >>

Re: [agi] EVIDENCE RICHARD DOES NOT UNDERSTAND COMPLEX SYSTEM ISSUES THAT WELL

2008-08-02 Thread Ben Goertzel
d pretty much as planned and whose >>> behavior is reasonably well understood (although not totally understood, as >>> is nothing that is truly complex in the non-Richard sense), and whose >>> overall behavior has been as chosen by design (with a little experimentation

Re: [agi] Conway's Game of Life

2008-08-02 Thread Ben Goertzel
...ed the evidence. > > Richard Loosemore

Re: [agi] OpenCog Prime & complex systems [was MOVETHREAD ... wikibook and roadmap ...

2008-08-02 Thread Ben Goertzel

[agi] MOVETHREAD [ was Re: [OpenCog] Re: OpenCog Prime & complex systems [was wikibook and roadmap...]

2008-08-01 Thread Ben Goertzel

[agi] OpenCog Prime wikibook and roadmap posted (moderately detailed design for an OpenCog-based thinking machine)

2008-07-30 Thread Ben Goertzel
sign up for that list and let's chat there... I'm fine to do more general discussions on this list though. thx Ben ---------- Forwarded message ---------- From: Ben Goertzel <[EMAIL PROTECTED]> Date: Wed, Jul 30, 2008 at 10:11 PM Subject: OpenCog Prime wikibook and roadmap posted

Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Ben Goertzel
On Mon, Jul 28, 2008 at 12:14 PM, YKY (Yan King Yin) < [EMAIL PROTECTED]> wrote: > On 7/28/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > > PLN uses confidence values within its truth values, with a different > underlying semantics and math than NARS; but t

Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Ben Goertzel
On Mon, Jul 28, 2008 at 11:10 AM, YKY (Yan King Yin) < [EMAIL PROTECTED]> wrote: > On 7/28/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > > > > Your inference trajectory assumes that "cybersex" and "STD" are > probabilistically independent withi
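The independence point raised above -- that an inference trajectory silently assumes two events are probabilistically independent -- can be made concrete with a toy calculation. The numbers below are hypothetical, chosen only to show how the assumption changes the conclusion; they are not from the thread.

```python
# Illustrative sketch: the joint probability P(A and B) computed under an
# independence assumption vs. using the actual conditional probability.
# All probabilities are hypothetical.

def joint_assuming_independence(p_a, p_b):
    """P(A and B) if A and B are (wrongly) treated as independent."""
    return p_a * p_b

def joint_with_dependence(p_a, p_b_given_a):
    """P(A and B) via the chain rule, using the true conditional."""
    return p_a * p_b_given_a

p_a = 0.3          # P(A), e.g. engaging in the first activity
p_b = 0.1          # P(B), e.g. the outcome's base rate
p_b_given_a = 0.0  # the true conditional: A carries no risk of B

print(joint_assuming_independence(p_a, p_b))      # ~0.03
print(joint_with_dependence(p_a, p_b_given_a))    # 0.0
```

The independence assumption yields a nonzero joint probability where the true dependence structure gives zero, which is the kind of error the quoted reply is pointing out.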

Re: [agi] a fuzzy reasoning problem

2008-07-28 Thread Ben Goertzel
Y > > > --- > agi > Archives: https://www.listbox.com/member/archive/303/=now > RSS Feed: https://www.listbox.com/member/archive/rss/303/ > Modify Your Subscription: > https://www.listbox.com/member/?&; > Powered by Listbox: http:

Re: [agi] need some help with loopy Bayes net

2008-07-04 Thread Ben Goertzel
monotonicity using > probabilistic networks? > > YKY

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Ben Goertzel
...criticality is explained by the late Per Bak in _How Nature Works_, a short, excellent read > and a brilliant example of scientific and mathematical progress in the realm > of complexity. > > --- On Mon, 6/30/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > >> I agree that

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-30 Thread Ben Goertzel
m infinite ... the evolutionary process itself may be endlessly creative, but in that sense so may be the self-modifying process of an engineered AGI ... -- Ben G On Mon, Jun 30, 2008 at 3:17 AM, Terren Suydam <[EMAIL PROTECTED]> wrote: > > --- On Mon, 6/30/08, Ben Goertzel <[EMAIL PR

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ben Goertzel
on without doing an awful lot of > computation. > > And what is our mind but the weather in our brains? > > Terren > > --- On Sun, 6/29/08, Ben Goertzel <[EMAIL PROTECTED]> wrote: > >> From: Ben Goertzel <[EMAIL PROTECTED]> >> Subject: Re: [agi

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ben Goertzel
Richard, I think that it would be possible to formalize your "complex systems argument" mathematically, but I don't have time to do so right now. > Or, then again . perhaps I am wrong: maybe you really *cannot* > understand anything except math? It's not the case that I can only understand

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-29 Thread Ben Goertzel
> The argument itself is extremely rigorous: on all the occasions on which > someone has disputed the rigorousness of the argument, they have either > addressed some other issue entirely or they have just waved their hands > without showing any sign of understanding the argument, and then said "..

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Ben Goertzel
Richard, > So long as the general response to the complex systems problem is not "This > could be a serious issue, let's put our heads together to investigate it", > but "My gut feeling is that this is just not going to be a problem", or > "Quit rocking the boat!", you can bet that nobody really w

Re: [agi] WHAT SORT OF HARDWARE $33K AND $850K BUYS TODAY FOR USE IN AGI

2008-06-28 Thread Ben Goertzel
On Sat, Jun 28, 2008 at 4:13 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote: > Ed Porter wrote: >> >> I do not claim the software architecture for AGI has been totally solved. >> But I believe that enough good AGI approaches exist (and I think Novamente >> is one) that when powerful hardware avail

Re: [agi] Ideological Interactions Need to be Studied

2008-06-08 Thread Ben Goertzel
> > But enough of that, let's get to the meat of it: Are you arguing that the > function that is a neuron is not an elementary operator for whatever > computational model describes the brain? > We don't know which "function that describes a neuron" we need to use -- are Izhikevich's nonlinear dyn
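Izhikevich's model, mentioned above as one candidate "function that describes a neuron," is simple enough to sketch in a few lines. This is a minimal Euler-integration sketch using the standard regular-spiking parameters from the published model; it is offered only to illustrate what such a neuron function looks like, not as anything from the thread.

```python
# Minimal Euler-integration sketch of the Izhikevich spiking-neuron model:
#   v' = 0.04*v^2 + 5*v + 140 - u + I
#   u' = a*(b*v - u)
#   if v >= 30 mV: v <- c, u <- u + d   (spike and reset)
# Parameters a, b, c, d below are the standard "regular spiking" values.

def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, T=200.0, dt=0.25):
    """Simulate T ms of a single neuron with constant input current I;
    return the number of spikes fired."""
    v = -65.0        # membrane potential (mV)
    u = b * v        # recovery variable
    spikes = 0
    for _ in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:  # spike detected: reset
            v, u = c, u + d
            spikes += 1
    return spikes

print(izhikevich(I=10.0) > 0)   # True: sufficient current drives spiking
print(izhikevich(I=0.0))        # 0: the neuron stays at rest
```

Whether this two-variable model, Izhikevich's richer nonlinear dynamics, or something else entirely is the right "elementary operator" is exactly the open question the quoted exchange raises.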

Re: [agi] Ideological Interactions Need to be Studied

2008-06-08 Thread Ben Goertzel
should be possible to engineer them away on computational substrate > when we have a high-level model of what they are actually for. > > -- > Vladimir Nesov

Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
> Instead of talking about what you would do, do it. > > I mean, work out your ideal way to solve the questions of the mind and share > it with us after you have found some interesting results. > > Jim Bromer

Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
> While the details vary widely, Mike and I were addressing the very concept > of writing code to perform functions (e.g. "thinking") that apparently > develop on their own as emergent properties, and in the process foreclosing > on many opportunities, e.g. developing in variant ways to address pro

Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
> The truth is, one of the big problems in > the field is that nearly everyone working on a concrete AI system has > **their own** particular idea of how to do it, and wants to proceed > independently rather than compromising with others on various design > points. It's hardly a herd mentality --

Re: [agi] Pearls Before Swine...

2008-06-08 Thread Ben Goertzel
e been around as long as I have > been, and hence they certainly should know better since they have doubtless > seen many other exuberant rookies fall into similar swamps of programming > complex systems without adequate analysis. > > Hey you guys with some gray hair and/or bald spots

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
> TL: ? > FOL: woman(X) -> long_hair(X) > > I know your term logic is slightly different from Fred Sommers'. Can > you fill in the TL parts and also attach indefinite probabilities? > > On 6/3/08, Ben Goertzel <[EMAIL PROTECTED]> wrote:

Re: [agi] modus ponens

2008-06-03 Thread Ben Goertzel

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
> First of all, the *tractability* of your algorithm depends on > heuristics that you design, which are separable from the underlying > probabilistic logic calculus. In your mind, these 2 things may be > mixed up. > > Indefinite probabilities DO NOT imply faster inference. > Domain-specific heuris

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
> > You have done something new, but not so new as to be in a totally > different dimension. > > YKY I have some ideas more like that too but I've postponed trying to sell them to others, for the moment ;-) ... it's hard enough to sell fairly basic stuff like PLN ... Look for some stuff on the ap

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
> > As we have discussed a while back on the OpenCog mail list, I would like to > see a RDF interface to some level of the OpenCog Atom Table. I think that > would suit both YKY and myself. Our discussion went so far as to consider > ways to assign URI's to appropriate atoms. Yes, I still think

Re: [agi] More brain scanning and language

2008-06-03 Thread Ben Goertzel

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
s now). I wonder why you don't join Stephen Reed on the texai project? Is it because you don't like the open-source nature of his project? ben On Tue, Jun 3, 2008 at 3:58 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote: > One thing I don't get, YKY, is why you think you are goi

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel
On Tue, Jun 3, 2008 at 3:46 PM, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote: > Hi Ben, > > Note that I did not pick FOL as my starting point because I wanted to > go against you, or be a troublemaker. I chose it because that's what > the textbooks I read were using. There is nothing personal her

Re: [agi] OpenCog's logic compared to FOL?

2008-06-03 Thread Ben Goertzel

Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread Ben Goertzel
> More likely though, is that your algorithm is incomplete wrt FOL, ie, > there may be some things that FOL can infer but PLN can't. Either > that, or your algorithm may be actually slower than FOL. FOL is not an algorithm, it's a representational formalism... As compared to standard logical the

Re: [agi] Uncertainty

2008-06-02 Thread Ben Goertzel
I would imagine so, but I haven't thought about the details. I am traveling now but will think about this when I get home and can refresh my memory by rereading the appropriate sections of Probabilistic Robotics ... ben On 6/2/08, Bob Mottram <[EMAIL PROTECTED]> wrote: > 2008/6/2 Ben

Re: [agi] OpenCog's logic compared to FOL?

2008-06-02 Thread Ben Goertzel
> I think it's fine that you use the term "atom" in your own way. The > important thing is, whatever the objects that you attach probabilities > to, that class of objects should correspond to *propositions* in FOL. > From there it would be easier for me to understand your ideas. Well, no, we atta
