Re: [agi] NARS: definition of intelligence

2007-05-23 Thread J. Andrew Rogers
On May 23, 2007, at 4:17 PM, Pei Wang wrote: I continued to look for a publisher with tough peer-review procedure, even after the manuscript had been rejected by more than a dozen of them. Though the price excludes most individual buyers, it may be more likely for a research library to buy a

Re: [agi] Pure reason is a disease.

2007-05-23 Thread J Storrs Hall, PhD
On Wednesday 23 May 2007 06:34:29 pm Mike Tintner wrote: > My underlying argument, though, is that your (or any) computational model > of emotions, if it does not also include a body, will be fundamentally > flawed both physically AND computationally. Does everyone here know what an ICE is in

Re: [agi] NARS: definition of intelligence

2007-05-23 Thread Pei Wang
Shane, Well, I actually considered Lulu and similar publishers, though as the last option. It is much easier to publish with them, but given the nature of NARS, such a publisher will make the book even more likely to be classified as the work of a crackpot. :( I continued to look for a publisher with tough peer-review procedure, even after the manuscript had been rejected by more than a dozen of them.

Re: [agi] NARS: definition of intelligence

2007-05-23 Thread Shane Legg
Pei, Yes, the book is the best source for most of the topics. Sorry for the absurd price, which I have no way to influence. It's $190. Somebody is making a lot of money on each copy and I'm sure it's not you. To get a 400 page hard cover published at lulu.com is more like $25. Shane -

Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mike Tintner
Eric, The point is simply that you can only fully simulate emotions with a body as well as a brain. And emotions, while identified by the conscious brain, are felt with the body. I don't find it at all hard to understand - I fully agree - that emotions are generated as a result of computations

Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mike Tintner
P.S. Eric, I haven't forgotten your question to me, & will try to address it in time - the answer is complex. - This list is sponsored by AGIRI: http://www.agiri.org/email To unsubscribe or change your options, please go to: http://v2.listbox.com/member/?member_id=231415&user_secret=e9e40a

[agi] Computer explains your error by showing how it should have been done

2007-05-23 Thread Lukasz Stafiniak
For those of you interested in type-driven program synthesis: http://www.cs.washington.edu/homes/blerner/seminal.html (quick link: http://www.cs.washington.edu/homes/blerner/files/seminal-visitdays.ppt)

Re: [agi] NARS: definition of intelligence

2007-05-23 Thread Pei Wang
On 5/23/07, Derek Zahn <[EMAIL PROTECTED]> wrote: I'm planning over the course of the rest of the year to look in-depth at all of the AGI projects that include a significant implementation component (that is, those that are not just books musing about the nature of intelligence -- I am also read

Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum
Mike> Eric Baum: What is Thought [claims that] feelings are explainable by a computational model. Feelings/emotions are generated by the brain's computations, certainly. But they are physical/body events. Does your Turing machine have a body other than that of some kind

Re: [agi] Pure reason is a disease.

2007-05-23 Thread J. Andrew Rogers
On May 23, 2007, at 3:02 PM, Mike Tintner wrote: Feelings/ emotions are generated by the brain's computations, certainly. But they are physical/ body events. Does your Turing machine have a body other than that of some kind of computer box? And does it want to dance when it hears emotionall

Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum
>> AGIs (at least those that could run on current computers) cannot really get excited about anything. It's like when you represent the pain intensity with a number. No matter how high the number goes, it doesn't really hurt. Real feelings - that's the key difference between us and the

Re: [agi] Pure reason is a disease.

2007-05-23 Thread Eric Baum
Richard> Mark Waser wrote: >> AGIs (at least those that could run on current computers) cannot really get excited about anything. It's like when you represent the pain intensity with a number. No matter how high the number goes, it doesn't really hurt. Real feelings - th

Re: [agi] Parsing theories

2007-05-23 Thread Lukasz Stafiniak
On 5/23/07, Mark Waser <[EMAIL PROTECTED]> wrote: systems in that there has been success in processing huge amounts (corpuses, corpi? :-) of data and producing results -- but it's *clearly* not the way corpora

Re: [agi] Parsing theories

2007-05-23 Thread Mark Waser
> As I think about it, one problem is, depending on how it's parametrized, it's not going to build much of a world model. Say for example it uses trigrams. The average high-school grad knows something like 50,000 words. So there are something like 10^14 trigrams. It will never see enough data to build
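The sparsity argument quoted above can be checked with a few lines. The 50,000-word vocabulary comes from the message; the trillion-token corpus size is an illustrative assumption, not a figure from the thread:

```python
# Back-of-envelope check of the trigram data-sparsity argument.
# Assumption (from the message): a vocabulary of roughly 50,000 words.
vocab = 50_000

# Number of distinct word trigrams over that vocabulary.
possible_trigrams = vocab ** 3
print(f"{possible_trigrams:.2e}")  # 1.25e+14

# Illustrative corpus size (assumed, not from the thread): even a
# trillion-token corpus contains at most one trigram occurrence per
# token position, so most possible trigrams are never observed.
corpus_tokens = 10 ** 12
print(possible_trigrams / corpus_tokens)  # 125.0
```

Whatever corpus size one assumes, the space of possible trigrams dwarfs it, which is the point being made about world-model coverage.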

Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-23 Thread A. T. Murray
The scholar and gentleman Jean-Paul Van Belle wrote: > Universal compassion and tolerance are the ultimate > consequences of enlightenment which one Matt on the > list equated IMHO erroneously to high-orbit intelligence > methinx subtle humour is a much better proxy for intelligence > > Jean-Pau

Re: [agi] Parsing theories

2007-05-23 Thread Eric Baum
>> Also, I don't see how you can call a model "semantic" when it makes no reference to the world. Mark> Ah, but this is where it gets tricky. While the model makes no reference to the world, it is certainly influenced by the fact that 100% of its data comes from the world -- whi

RE: [agi] NARS: definition of intelligence

2007-05-23 Thread Derek Zahn
Pei Wang writes: > Thanks for the interest. I'll do my best to help, though since I'm on vacation in China, I may not be able to process my emails as usual. Thank you for your response. I'm planning over the course of the rest of the year to look in-depth at all of the AGI projects that i

Re: [agi] Parsing theories

2007-05-23 Thread Benjamin Goertzel
Also, I don't see how you can call a model "semantic" when it makes no reference to the world. The model as described by Wikipedia could have the capability of telling me whether a sentence is natural or highly unlikely, but unless I misunderstand something, there is no possibility it could tel

Re: [agi] Pure reason is a disease.

2007-05-23 Thread Mark Waser
A meta-question here with some prefatory information . . . . The reason why I top-post (and when I do so, I *never* put content inside) is that I frequently find it *really* convenient to have the entire text of the previous message or two (no more) immediately available for reference. On

Re: [agi] Parsing theories

2007-05-23 Thread Mark Waser
I'll take a shot at answering some of your questions as someone who has done some work and research but is certainly not claiming to be an expert . . . . Wikipedia says that various quantities are "learnable" because they can in principle be determined by data. What is known about whether they

Re: [agi] Parsing theories

2007-05-23 Thread Eric Baum
A google search on "operator grammar" + trigram yields nada. A google search on "operator grammar" + bigram yields nothing interesting. I've seen papers on statistical language parsing before, including trigrams etc. Not so clear to me the extent to which they've been merged with Harris's work.

Re: [agi] Write a doctoral dissertation, trigger a Singularity

2007-05-23 Thread Jean-Paul Van Belle
Universal compassion and tolerance are the ultimate consequences of enlightenment, which one Matt on the list equated, IMHO erroneously, to high-orbit intelligence. Methinx subtle humour is a much better proxy for intelligence. Jean-Paul, member of the 'let Murray stay' advocacy group aka 'the write 2

Re: [agi] Parsing theories

2007-05-23 Thread Jean-Paul Van Belle
Check "bigrams" (or, more interestingly, "trigrams") in computational linguistics. Department of Information Systems Email: [EMAIL PROTECTED] Phone: (+27)-(0)21-6504256 Fax: (+27)-(0)21-6502280 Office: Leslie Commerce 4.21 >>> Eric Baum <[EMAIL PROTECTED]> 2007/05/23 15:36:20 >>> One way to
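For readers following this pointer: extracting bigram and trigram counts from tokenized text is a few lines with standard-library tools (the sample sentence is invented for illustration):

```python
from collections import Counter

# Toy tokenized text (illustrative only).
tokens = "one way to model language is to count n-grams".split()

# n-gram counts via zip over shifted views of the token list.
bigrams  = Counter(zip(tokens, tokens[1:]))
trigrams = Counter(zip(tokens, tokens[1:], tokens[2:]))

print(bigrams[("to", "model")])          # 1
print(trigrams[("way", "to", "model")])  # 1
```

These counts are the raw material for the statistical parsing work being discussed; real systems differ mainly in scale and in how they smooth the unseen n-grams.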

Re: [agi] Pure reason is a disease.

2007-05-23 Thread Lukasz Kaiser
Hi, On 5/23/07, Mark Waser <[EMAIL PROTECTED]> wrote: - Original Message - From: "Jiri Jelinek" <[EMAIL PROTECTED]> > On 5/20/07, Mark Waser <[EMAIL PROTECTED]> wrote: >> - Original Message - >> From: "Jiri Jelinek" <[EMAIL PROTECTED]> >> > On 5/16/07, Mark Waser <[EMAIL PROTECTE

Re: [agi] Pure reason is a disease.

2007-05-23 Thread Richard Loosemore
Mark Waser wrote: AGIs (at least those that could run on current computers) cannot really get excited about anything. It's like when you represent the pain intensity with a number. No matter how high the number goes, it doesn't really hurt. Real feelings - that's the key difference between us and

Re: [agi] NARS: definition of intelligence

2007-05-23 Thread Pei Wang
On 5/22/07, Derek Zahn <[EMAIL PROTECTED]> wrote: Pei, As part of my ongoing AGI education, I am beginning to study NARS in some detail. Thanks for the interest. I'll do my best to help, though since I'm on vacation in China, I may not be able to process my emails as usual. As has been dis

Re: [agi] Parsing theories

2007-05-23 Thread Eric Baum
This is based purely on reading the wikipedia entry on Operator grammar, which I find very interesting. I'm hoping someone out there knows enough about this to answer some questions :^) Wikipedia says that various quantities are "learnable" because they can in principle be determined by data. Wh
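One hedged reading of "learnable in principle": Harris-style selection (how strongly an operator prefers a given argument) can be estimated from operator-argument co-occurrence counts, for instance via pointwise mutual information. This is a sketch of that idea, not a claim about how operator grammar defines the quantity; the counts are invented:

```python
import math

# Invented (operator, argument) co-occurrence counts, e.g. as might
# be gathered from a dependency-parsed corpus.
counts = {
    ("eat", "food"): 50, ("eat", "idea"): 1,
    ("have", "food"): 30, ("have", "idea"): 40,
}

total = sum(counts.values())
op_totals, arg_totals = {}, {}
for (op, arg), c in counts.items():
    op_totals[op] = op_totals.get(op, 0) + c
    arg_totals[arg] = arg_totals.get(arg, 0) + c

def pmi(op, arg):
    """Pointwise mutual information: log p(op,arg) / (p(op) * p(arg))."""
    p_joint = counts.get((op, arg), 0) / total
    p_op = op_totals[op] / total
    p_arg = arg_totals[arg] / total
    return math.log(p_joint / (p_op * p_arg))

# "eat" selects "food" far more strongly than "idea":
assert pmi("eat", "food") > pmi("eat", "idea")
```

Whether estimates like this converge on the quantities Harris had in mind is exactly the open question the message is asking about.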