Stan,
Your web page's major argument against strong AGI seems to be the following:
Limits to Intelligence
...
Formal Case
...because intelligence is the process of making choices, and
choices are a function of models. Models will not be perfect. Both man and
Ed,
I agree that machines will be faster and may have something equivalent
to the trillions of synapses in the human brain.
It isn't the modeling device that limits the level of intelligence,
but rather what can be effectively modeled. Effectively meaning what
can be used in a real-time judgment system.
Stan,
You wrote, "It isn't the modeling device that limits the level of
intelligence, but rather what can be effectively modeled. Effectively
meaning what can be used in a real time judgment system."
The type of AGIs I have been talking about will be able to use their much
more complete and
Bruno Frandemiche wrote:
Psyclone AIOS™ (http://www.cmlabs.com/psyclone/) is a powerful platform
for building complex automation and autonomous systems
I couldn't seem to find what license that was released under. (The
library was LGPL, which is very nice.)
But without knowing the license,
Matt,
Thanks for the links sent earlier. I especially like the paper by Legg
and Hutter regarding measurement of machine intelligence. The other
paper I find difficult; probably it's deeper than I am.
I'll comment on two things:
1) The response Intelligence has nothing to do with
--- Stan Nilsen [EMAIL PROTECTED] wrote:
Matt,
Thanks for the links sent earlier.
The AIXI paper is essentially a proof of
I'm planning to write an NL interface that uses templates to eliminate
parsing and thus achieve 100% accuracy for a restricted subset of English
(for example, asking the user to disambiguate parts of speech, syntax, etc.).
It seems that such a program doesn't exist yet.
It looks like AGI-level NLP
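As a minimal sketch of the template idea above: instead of parsing freely, the program matches user input against a fixed set of sentence patterns with named slots, and anything outside the restricted subset simply fails to match (which is where the system would ask the user to rephrase or disambiguate). The templates, intent names, and the `interpret` function here are hypothetical illustrations, not part of any existing system.

```python
import re

# Hypothetical templates for a restricted English subset: each maps a
# fixed sentence pattern with named slots to an intent label. Because
# input must match a template exactly, no general parsing is needed.
TEMPLATES = [
    (re.compile(r"^what is the (?P<attr>\w+) of (?P<obj>\w+)\?$", re.I),
     "query_attribute"),
    (re.compile(r"^(?P<obj>\w+) is a (?P<cls>\w+)\.?$", re.I),
     "assert_class"),
]

def interpret(sentence: str):
    """Return (intent, slots) for the first matching template, or None.

    A None result is the point at which a real system would ask the
    user to rephrase, rather than guess at a parse.
    """
    for pattern, intent in TEMPLATES:
        m = pattern.match(sentence.strip())
        if m:
            return intent, m.groupdict()
    return None

print(interpret("What is the color of grass?"))
# → ('query_attribute', {'attr': 'color', 'obj': 'grass'})
print(interpret("Colorless green ideas sleep furiously"))
# → None (outside the restricted subset; prompt the user instead)
```

The trade-off is exactly the one described in the message: coverage is limited to the templates provided, but every accepted sentence has an unambiguous interpretation.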
Hi YKY,
I hope that by this time next year the Texai project will have a robust English
parser suitable for your project. I am working in collaboration with the Air
Force Research Laboratory's Synthetic Teammate Project
j.k.
I understand that it's all uphill to defy the obvious. For the record,
today I do believe that intelligence way beyond human intelligence is
not possible. There are elements of your response that trouble me, as in,
rock my boat. I appreciate being rocked and will give this more thought.
----- Original Message -----
From: YKY (Yan King Yin) [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, December 20, 2007 9:41:10 PM
Subject: Re: [agi] NL interface
Hi Stan,
On 12/20/2007 07:44 PM, Stan Nilsen wrote:
For the record, today I do believe that intelligence way beyond human
intelligence is not possible.
I understand that this is your belief. I was trying to challenge you to
make a strong
On 12/20/2007 07:56 PM, Richard Loosemore wrote:
I think these are some of the most sensible comments I have heard on
this list for a while. You are not saying anything revolutionary, but
it sure is nice to hear someone holding out for common sense for a change!
Basically your point is that