RE: [agi] AGI and Deity

2007-12-20 Thread Ed Porter
Stan, Your web page's major argument against strong AGI seems to be the following: Limits to Intelligence ... Formal Case ...because intelligence is the process of making choices, and choices are a function of models. Models will not be perfect. Both man and

Re: [agi] AGI and Deity

2007-12-20 Thread Stan Nilsen
Ed, I agree that machines will be faster and may have something equivalent to the trillions of synapses in the human brain. It isn't the modeling device that limits the level of intelligence, but rather what can be effectively modeled. Effectively meaning what can be used in a real time

Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Matt Mahoney
--- Stan Nilsen [EMAIL PROTECTED] wrote: Ed, I agree that machines will be faster and may have something equivalent to the trillions of synapses in the human brain. It isn't the modeling device that limits the level of intelligence, but rather what can be effectively modeled.

RE: [agi] AGI and Deity

2007-12-20 Thread Ed Porter
Stan, You wrote, "It isn't the modeling device that limits the level of intelligence, but rather what can be effectively modeled. Effectively meaning what can be used in a real time judgment system." The type of AGIs I have been talking about will be able to use their much more complete and

Re: Re : [agi] List of Java AI tools libraries

2007-12-20 Thread Charles D Hixson
Bruno Frandemiche wrote: Psyclone AIOS™ (http://www.cmlabs.com/psyclone/) is a powerful platform for building complex automation and autonomous systems. I couldn't seem to find what license that was released under. (The library was LGPL, which is very nice.) But without knowing the license,

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Stan Nilsen
Matt, Thanks for the links sent earlier. I especially like the paper by Legg and Hutter regarding measurement of machine intelligence. The other paper I find difficult; probably it's deeper than I am. Two comments: 1) The response "Intelligence has nothing to do with
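For readers unfamiliar with the Legg–Hutter paper mentioned above: their "universal intelligence" of an agent is, roughly, a complexity-weighted sum of the agent's expected reward across all computable environments. A sketch of the definition (from memory; see the paper for the precise formalism):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here $\pi$ is the agent, $E$ is the set of computable reward-bounded environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the agent's expected cumulative reward in $\mu$. Simpler environments thus dominate the score.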

Re: [agi] AGI and Deity

2007-12-20 Thread j.k.
On 12/20/2007 09:18 AM, Stan Nilsen wrote: I agree that machines will be faster and may have something equivalent to the trillions of synapses in the human brain. It isn't the modeling device that limits the level of intelligence, but rather what can be effectively modeled. Effectively

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Matt Mahoney
--- Stan Nilsen [EMAIL PROTECTED] wrote: Matt, Thanks for the links sent earlier. I especially like the paper by Legg and Hutter regarding measurement of machine intelligence. The other paper I find difficult, probably it's deeper than I am. The AIXI paper is essentially a proof of

[agi] NL interface

2007-12-20 Thread YKY (Yan King Yin)
I'm planning to write an NL interface that uses templates to eliminate parsing and thus achieve 100% accuracy for a restricted subset of English (for example, asking the user to disambiguate parts of speech, syntax, etc.). It seems that such a program doesn't exist yet. It looks like AGI-level NLP
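The template idea described above can be sketched very simply: match the user's utterance against a fixed set of patterns, and fall back to asking the user when nothing (or more than one thing) matches. This is a minimal illustration, not YKY's design; the template names and patterns are hypothetical.

```python
import re

# Hypothetical templates for a restricted English subset.
# Each entry: (template name, compiled pattern with named slots).
TEMPLATES = [
    ("define", re.compile(r"^what is (?:a |an |the )?(?P<term>\w+)\??$", re.I)),
    ("locate", re.compile(r"^where is (?:the )?(?P<place>\w+)\??$", re.I)),
]

def interpret(utterance):
    """Return (template_name, slot_dict) for a uniquely matching template.

    Returns None when the utterance falls outside the restricted subset
    (a real interface would then ask the user to rephrase), and raises
    when more than one template matches (the disambiguation case).
    """
    matches = [(name, m.groupdict())
               for name, pattern in TEMPLATES
               if (m := pattern.match(utterance.strip()))]
    if not matches:
        return None
    if len(matches) > 1:
        # Ambiguity: here the interface would ask the user to choose.
        raise ValueError(f"ambiguous among: {[n for n, _ in matches]}")
    return matches[0]
```

Because every accepted utterance is matched by exactly one hand-written pattern, interpretation within the covered subset is deterministic, which is the sense in which template matching avoids the ambiguity of full parsing.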

Re: [agi] NL interface

2007-12-20 Thread Stephen Reed
Hi YKY, I hope that by this time next year the Texai project will have a robust English parser suitable for your project. I am working in collaboration with the Air Force Research Laboratory's Synthetic Teammate Project

Re: [agi] AGI and Deity

2007-12-20 Thread Stan Nilsen
j.k. I understand that it's all uphill to defy the obvious. For the record, today I do believe that intelligence way beyond human intelligence is not possible. There are elements of your response that trouble me, as in "rock my boat." I appreciate being rocked and will give this more thought.

Re: [agi] NL interface

2007-12-20 Thread YKY (Yan King Yin)
On 12/21/07, Stephen Reed [EMAIL PROTECTED] wrote: Hi YKY, I hope that by this time next year the Texai project will have a robust English parser suitable for your project. I am working in collaboration with the Air Force Research Laboratory's Synthetic Teammate Project

How an AGI would be [WAS Re: [agi] AGI and Deity]

2007-12-20 Thread Richard Loosemore
j.k. wrote: On 12/20/2007 09:18 AM, Stan Nilsen wrote: I agree that machines will be faster and may have something equivalent to the trillions of synapses in the human brain. It isn't the modeling device that limits the level of intelligence, but rather what can be effectively modeled.

Re: Possibility of superhuman intelligence (was Re: [agi] AGI and Deity)

2007-12-20 Thread Richard Loosemore
Matt Mahoney wrote: --- Stan Nilsen [EMAIL PROTECTED] wrote: Matt, Thanks for the links sent earlier. I especially like the paper by Legg and Hutter regarding measurement of machine intelligence. The other paper I find difficult, probably it's deeper than I am. The AIXI paper is

Re: [agi] NL interface

2007-12-20 Thread Stephen Reed
- Original Message From: YKY (Yan King Yin) [EMAIL PROTECTED] To: agi@v2.listbox.com Sent: Thursday, December 20, 2007 9:41:10 PM Subject: Re: [agi] NL interface On 12/21/07, Stephen Reed [EMAIL PROTECTED] wrote: Hi YKY, I hope that by this time next year the Texai project will

Re: [agi] AGI and Deity

2007-12-20 Thread j.k.
Hi Stan, On 12/20/2007 07:44 PM, Stan Nilsen wrote: I understand that it's all uphill to defy the obvious. For the record, today I do believe that intelligence way beyond human intelligence is not possible. I understand that this is your belief. I was trying to challenge you to make a strong

Re: How an AGI would be [WAS Re: [agi] AGI and Deity]

2007-12-20 Thread j.k.
On 12/20/2007 07:56 PM, Richard Loosemore wrote: I think these are some of the most sensible comments I have heard on this list for a while. You are not saying anything revolutionary, but it sure is nice to hear someone holding out for common sense for a change! Basically your point is that