In classical logic programming, there is the concept of unification,
...
It seems to me that by appropriate use of indexes, it should be
possible to unify against the entire database simultaneously, or
at least to isolate a small fraction of it as potential matches,
so that the individual candidates can then be checked one at a time.
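The idea above can be sketched in Python. This is a minimal illustration of first-argument indexing plus unification, not code from any existing Prolog engine; all names here (`Var`, `unify`, `query`, the `facts` list) are hypothetical:

```python
from collections import defaultdict

class Var:
    """A logic variable; compared by identity."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return f"?{self.name}"

def unify(a, b, subst):
    """Return a substitution extending `subst` that unifies a and b, or None."""
    if subst is None:
        return None
    a = subst.get(a, a) if isinstance(a, Var) else a
    b = subst.get(b, b) if isinstance(b, Var) else b
    if a == b:
        return subst
    if isinstance(a, Var):
        return {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None

facts = [
    ("parent", "tom", "bob"),
    ("parent", "bob", "ann"),
    ("parent", "bob", "liz"),
    ("likes", "ann", "liz"),
]

# Index facts by functor so a query never scans the whole database.
index = defaultdict(list)
for f in facts:
    index[f[0]].append(f)

def query(goal):
    """Yield substitutions for every indexed fact that unifies with `goal`."""
    for fact in index[goal[0]]:
        s = unify(goal, fact, {})
        if s is not None:
            yield s

X = Var("X")
results = list(query(("parent", "bob", X)))  # binds X to "ann", then "liz"
```

Here the functor index means full unification is only attempted against the small candidate set sharing the goal's functor, which is essentially how production Prolog systems avoid scanning the whole clause database.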
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
--- On Thu, 10/30/08, John G. Rose [EMAIL PROTECTED] wrote:
You can't compute the universe within this universe because the
computation would have to include itself.
Exactly. That is why our model of physics must be probabilistic.
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
--- On Thu, 10/30/08, John G. Rose [EMAIL PROTECTED] wrote:
From: Matt Mahoney [mailto:[EMAIL PROTECTED]
Cloud computing is compatible with my proposal for distributed AGI.
It's just not big enough. I would need 10^10 processors, each
--- On Sun, 11/2/08, John G. Rose [EMAIL PROTECTED] wrote:
Still, though, I don't agree with your initial numbers estimate for AGI.
A bit high, perhaps? Your numbers might be trimmed down based on
refined assumptions.
True, we can't explain why the human brain needs 10^15 synapses to
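For a sense of scale, the two figures quoted in the thread combine as follows (the per-node figure is my arithmetic, not something stated in the thread):

```python
# Back-of-envelope combination of the thread's two quoted figures.
processors = 10**10  # Matt's estimated processor count for distributed AGI
synapses = 10**15    # rough synapse count attributed to the human brain
per_node = synapses // processors  # synapses each processor would model
print(per_node)  # -> 100000
```

So each node would be responsible for modeling on the order of 10^5 synapses, which is the kind of assumption John's "refined assumptions" remark is pushing back on.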
Congratulations to two contributors to this list, Cassio Pennachin and Ben
Goertzel, for being quoted in an article on Huffington Post, entitled "Man
Versus Machine," about the role of computers in the recent financial crisis.
The article is at
This article (http://www.sciam.com/article.cfm?id=defining-evil) about a
chatbot programmed to have an 'evil' intentionality, from Scientific
American, may be of some interest to this list. Reading the researcher's
personal and laboratory websites (http://www.rpi.edu/~brings/ ,
On Mon, Nov 3, 2008 at 7:17 AM, Nathan Cook [EMAIL PROTECTED] wrote:
This article (http://www.sciam.com/article.cfm?id=defining-evil) about a
chatbot programmed to have an 'evil' intentionality, from Scientific
American, may be of some interest to this list. Reading the researcher's
personal
http://kryten.mm.rpi.edu/PRES/SYNCHARIBM0807/sb_ka_etal_cogrobustsynchar_082107v1.mov
---
agi
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/
Modify Your Subscription:
I've noticed lately that the paranoid fear of computers becoming
intelligent and taking over the world has almost entirely disappeared
from the common culture.
Is this sarcasm, irony, or are you that unaware of current popular culture
(i.e. Terminator Chronicles on TV, a new Terminator movie in the works, I,
Robot, etc.)?
On Mon, Nov 3, 2008 at 6:56 AM, Mark Waser [EMAIL PROTECTED] wrote:
Is this sarcasm, irony, or are you that unaware of current popular culture
(i.e. Terminator Chronicles on TV, a new Terminator movie in the works, I,
Robot, etc.)?
The quote is from the early '80s... pre-Terminator hysteria.
On Mon, Nov 3, 2008 at 7:50 AM, Bob Mottram [EMAIL PROTECTED] wrote:
http://kryten.mm.rpi.edu/PRES/SYNCHARIBM0807/sb_ka_etal_cogrobustsynchar_082107v1.mov
Is it just me, or is that .mov broken?
The slides don't update, the audio is clipping, etc.
Interesting that they're using Piaget tasks in
Hi,
I know Selmer Bringsjord (the leader of this project) and his work fairly
well.
He's an interesting guy, and I'm afraid of misrepresenting his views in
a brief summary. But I'll try.
First, an interesting point is that Selmer does not believe strong AI is
possible on traditional digital computers.
Cassio has an MBA as well as being an AI guy ... and yah, we've done a lot
of computational finance together
Of course, the reporter left out the more interesting things I said to him
in our discussion ... and, the same is probably the case for most of the
other interviews he did. It would be
On Mon, Nov 3, 2008 at 1:22 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
So, yes, his stuff is not ELIZA-like, it's based on a fairly sophisticated
crisp-logic-theorem-prover back end, and a well-thought-out cognitive
architecture.
From what I saw in the presentation, it looks like this is an
In terms of MMOs, I suppose you could think of Selmer's approach as allowing
scripting in a highly customized variant of Prolog ... which might not be
a bad
thing, but is different from creating learning systems...
-- Ben G
On Sun, Nov 2, 2008 at 10:51 PM, Trent Waddington
[EMAIL PROTECTED]
Ben,
On 11/2/08, Ben Goertzel [EMAIL PROTECTED] wrote:
First, an interesting point is that Selmer does not believe strong AI is
possible on traditional digital computers. Possibly related to this is that
he is a serious Christian theological thinker.
Taking off my AGI hat and putting on my Simulated Christian hat for a moment...
On Mon, Nov 3, 2008 at 4:50 PM, Steve Richfield
[EMAIL PROTECTED] wrote:
Taking off my AGI hat and putting on my Simulated Christian hat for a
moment...
Must you?
Trent