Re: [agi] Chaitin randomness

2007-01-19 Thread Benjamin Goertzel
Hi, > * Chaitin randomness "almost always" implies exchangeability This one comes from a paper I referenced earlier, in a post to extropy-chat > * Exchangeability "almost always" implies Chaitin randomness I'm not sure exchangeability implies Chaitin randomness. Yeah, you're right, this s
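
For reference, the two notions being compared are usually defined as follows (standard formulations; the paper referenced above may state them differently). A binary sequence x is Chaitin (algorithmically) random if its prefixes are incompressible up to an additive constant: \exists c \, \forall n: K(x_1 \dots x_n) \ge n - c, where K is prefix Kolmogorov complexity. A sequence of random variables X_1, X_2, \dots is exchangeable if, for every n and every permutation \pi of \{1, \dots, n\}, the joint distribution of (X_{\pi(1)}, \dots, X_{\pi(n)}) equals that of (X_1, \dots, X_n). The first is a property of an individual sequence and the second a property of a distribution, which is part of why any implication between them can only be stated "almost always" with respect to some measure.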

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Pei Wang
On 1/19/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote: The bottom line is that the knowledge acquisition project is *separable* from specific inference methods. What is your argument supporting this strong claim? I guess every book on knowledge representation includes a statement saying tha

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Mike Dougherty
On 1/19/07, Joel Pitt <[EMAIL PROTECTED]> wrote: It's been a while since I looked at Lojban or your Lojban++, so I was wondering if English sentences translate well into Lojban without the sentence ordering changing? I.e. given two English sentences, are there any situations where in Lojban the sen

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Joel Pitt
On 1/20/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: Regarding Mindpixel 2, FWIW, one kind of knowledge base that would be most interesting to me as an AGI developer would be a set of pairs of the form (Simple English sentence, formal representation) For instance, a [nonrepresentatively sim

[agi] Chaitin randomness

2007-01-19 Thread gts
On Fri, 19 Jan 2007 18:32:54 -0500, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: I think this topic is more appropriate for agi@v2.listbox.com Sorry, I thought that was where I was! :) Sending there now... Anyway, to respond to your point: Yep, I agree that exchangeability is different from

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/20/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: A) This is just not true, many commonsense inferences require significantly more than 5 applications of rules OK, I concur. Long inference chains are built upon short inference steps. We need a mechanism to recognize the "interestingnes

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Benjamin Goertzel
Hi, Do you think Cyc has a rule/fact like "wet things can usually conduct electricity" (or "if X is wet then X may conduct electricity")? Yes, it does... I'll also contact some Cyc folks to see if they're interested in collaborating... IMO, to have any chance of interesting them, you will

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/20/07, Stephen Reed <[EMAIL PROTECTED]> wrote: I've been using OpenCyc as the standard ontology for my texai project. OpenCyc contains only the very few rules needed to enable the OpenCyc deductive inference engine to operate on its OpenCyc content. On the other hand ResearchCyc, whose licen

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Bob Mottram
On 19/01/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: B) Even if there are only 5 applications of rules, the combinatorial explosion still exists. If there are 10 rules and 1 billion knowledge items, then there may be up to 10 billion possibilities to consider in each inference step. So t
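
As a back-of-the-envelope check on those figures (a naive estimate that assumes every rule is tried against every knowledge item at every step, which no real engine would do unpruned):

rules = 10
items = 1_000_000_000              # 1 billion knowledge items
per_step = rules * items           # ~10 billion candidate rule applications per step
depth = 5                          # the "shallow chain" bound discussed in this thread
print(f"{per_step:.0e} candidates per step, {float(per_step) ** depth:.0e} for a depth-{depth} chain")

Even a depth-5 chain over these numbers is astronomically large without pruning, which is why the rest of the thread turns to importance levels and other heuristics rather than exhaustive chaining.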

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Benjamin Goertzel
> And, importance levels need to be context-dependent, so that assigning > them requires sophisticated inference in itself... The problem may not be so serious. Common sense reasoning may require only *shallow* inference chains, e.g. < 5 applications of rules. So I'm very optimistic =) Your wor

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/20/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: Backward chaining is just as susceptible to combinatorial explosions as forward chaining... And, importance levels need to be context-dependent, so that assigning them requires sophisticated inference in itself... The problem may not be

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/20/07, David Clark <[EMAIL PROTECTED]> wrote: ... Do we divine the rules/laws/algorithms from a mass of data or do we generate the appropriate conclusions when we need them because we understand how it actually works? Just as chemistry is reducible to physics, in theory, while in reality

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Benjamin Goertzel
There will come a point when integrating Cyc-type assertions into Novamente will make sense for us, and I'll be curious how useful they turn out to be at that point. However, my impression is that OpenCyc's rules are not extensive enough to really add a lot to Novamente. ResearchCyc has more

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Stephen Reed
I've been using OpenCyc as the standard ontology for my texai project. OpenCyc contains only the very few rules needed to enable the OpenCyc deductive inference engine to operate on its OpenCyc content. On the other hand ResearchCyc, whose licenses are available without fees for research purpose

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Benjamin Goertzel
> "More knowledge, higher intelligence" is an intuitively attractive > slogan, but has many problems in it. For example, more knowledge will > easily lead to combinatorial explosion, and the reasoning system will > derive many "true" but useless conclusions. How do you deal with that? That's the

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread David Clark
Your thermostat example can be used to show what I am talking about. The thermostat has an algorithm that says when the temperature gets below some X amount, turn on the burner and fan until the temperature rises to at least some X+N amount. You get to set the X amount. It doesn't have a table t
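
A minimal sketch of that control rule in Python (variable names are mine; X and N are the user-set values from the description above):

def thermostat_step(temp, x, n, heating):
    """Turn the burner and fan on when temp drops below X, and keep them on
    until temp has risen to at least X + N (simple hysteresis)."""
    if temp < x:
        return True                 # too cold: start heating
    if heating and temp < x + n:
        return True                 # keep heating until X + N is reached
    return False                    # warm enough: burner and fan off

The point of the example survives the simplification: the behaviour comes from a small algorithm plus a user-set threshold, not from a stored table of temperatures.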

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/19/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: You have not explained how you will overcome the issues that plagued GOFAI, such as -- the need for massive amounts of highly uncertain background knowledge to make real-world commonsense inferences Precisely, we need to amass millions of

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Charles D Hixson
David Clark wrote: I agree with Ben's post that this kind of system has been tried many times and produced very little. How can a collection of statements like "Cats have claws; Kitty is a cat; therefore Kitty has claws" relate cat and kitty, or capture that kitty is slang normally used for a young cat? A databa
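
To make the question concrete, here is a hedged sketch (predicate and term names are illustrative) of the minimum a flat assertion store needs before the Kitty inference even runs, and of where the word-sense knowledge would have to live:

facts = {("isa", "Kitty", "cat"), ("has", "cat", "claws")}
lexicon = {"kitty": ("cat", "informal; usually a young cat")}   # lexical knowledge, asserted separately

def has_property(individual, prop):
    """Kitty has claws <= Kitty isa cat, cat has claws."""
    categories = [c for (rel, ind, c) in facts if rel == "isa" and ind == individual]
    return any(("has", c, prop) in facts for c in categories)

print(has_property("Kitty", "claws"))   # True, but only because both assertions were entered by hand

Nothing in the triple store itself connects the proper name "Kitty" with the informal noun "kitty"; that relationship has to be asserted as yet more data, which is the gap the question points at.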

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Pei Wang
On 1/19/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote: > "More knowledge, higher intelligence" is an intuitively attractive > slogan, but has many problems in it. For example, more knowledge will > easily lead to combinatorial explosion, and the reasoning system will > derive many "true" but

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Charles D Hixson
YKY (Yan King Yin) wrote: ... I think a project like this one requires substantial efforts, so people would need to be paid to do some of the work (programming, interface design, etc), especially if we want to build a high quality knowledgebase. If we make it free then a likely outcome is t

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Bob Mottram
On 19/01/07, YKY (Yan King Yin) <[EMAIL PROTECTED]> wrote: How about this: the database would be open for anyone to download, for experimentation or whatever purpose. Only when someone wants to incorporate the data in an AGI, would a license fee be needed. Also I would make the inference eng

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/19/07, Pei Wang <[EMAIL PROTECTED]> wrote: For example, what you called "rule" in your postings has two different meanings: (1) A declarative implication statement, "X ==> Y"; (2) A procedure that produces conclusions from premises, "{X} |- Y". These two are related, but not the same thi
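
One way to see the distinction in code (a hedged sketch using the "wet things may conduct electricity" example from elsewhere in this thread; this is not NARS or Novamente machinery):

# (1) A declarative implication is data the system believes: "X ==> Y".
implication = ("wet(x)", "may_conduct_electricity(x)")

# (2) An inference procedure is code that derives conclusions from premises: "{X} |- Y".
def apply_implication(premise, rule):
    antecedent, consequent = rule
    return consequent if premise == antecedent else None

print(apply_implication("wet(x)", implication))   # may_conduct_electricity(x)

On this reading, only statements of kind (1) are the sort of thing a MindPixel-style collection effort could gather from volunteers; procedures of kind (2) belong to the inference engine itself.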

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/19/07, Bob Mottram <[EMAIL PROTECTED]> wrote: My feeling is that this probably isn't a great business idea. I think collecting common sense data and building that into a general reasoner should really be thought of as a long term effort, which is unlikely to appeal to business investors e

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Benjamin Goertzel
Regarding Mindpixel 2, FWIW, one kind of knowledge base that would be most interesting to me as an AGI developer would be a set of pairs of the form (Simple English sentence, formal representation) For instance, a [nonrepresentatively simple] piece of knowledge might be (Cats often chase mice,
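
Ben's example pair is cut off by the archive, so the formal side below is a hypothetical stand-in (a generic logic-style rendering invented for illustration, not Novamente's actual notation), reusing content that appears elsewhere in this thread:

pairs = [
    ("Cats often chase mice",
     "often( chase(x, y) <- cat(x) & mouse(y) )"),
    ("Wet things can usually conduct electricity",
     "usually( conducts_electricity(x) <- wet(x) )"),
]

Presumably the value of such a corpus is that the same content exists in both a human-checkable and a machine-usable form.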

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Benjamin Goertzel
YKY, Pei's attitude is pretty similar to mine on these matters, although we differ on other more detailed issues regarding AGI. And, please note that compared to most AI researchers, Pei and I would be among the folks most likely to be sympathetic to your ideas, given that -- we are both explic

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Pei Wang
YKY, Frankly, I still see many conceptual confusions in your description. Of course, some of them come from other people's mistakes, but they will hurt your work anyway. For example, what you called "rule" in your postings has two different meanings: (1) A declarative implication statement, "X

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread Bob Mottram
My feeling is that this probably isn't a great business idea. I think collecting common sense data and building that into a general reasoner should really be thought of as a long term effort, which is unlikely to appeal to business investors expecting to see a return within a few years. If any a

Re: [agi] Project proposal: MindPixel 2

2007-01-19 Thread YKY (Yan King Yin)
On 1/19/07, Benjamin Goertzel <[EMAIL PROTECTED]> wrote: Well YKY, I don't feel like rehashing these ancient arguments on this list!! Others are welcome to do so, if they wish... ;-) You are welcome to repeat the mistakes of the past if you like, but I frankly consider it a waste of effort.