RE: [agi] Unification by index?

2008-11-02 Thread Benjamin Johnston
In classical logic programming, there is the concept of unification, ... It seems to me that by appropriate use of indexes, it should be possible to unify against the entire database simultaneously, or at least to isolate a small fraction of it as potential matches so that the individual
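A minimal sketch of the indexing idea, not code from the thread: keep a map from (functor, first argument) to facts, so a query whose first argument is bound is unified only against that small bucket rather than the whole database. The names below (Var, unify, IndexedDB, the parent facts) are illustrative assumptions, not anything proposed in the post.

from collections import defaultdict

class Var:
    """A logic variable; distinct instances are distinct variables."""
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return "?" + self.name

def walk(term, subst):
    # Follow variable bindings until a non-variable or an unbound variable.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def unify(a, b, subst):
    # Return an extended substitution making a and b equal, or None on failure.
    a, b = walk(a, subst), walk(b, subst)
    if isinstance(a, Var):
        return subst if a is b else {**subst, a: b}
    if isinstance(b, Var):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return subst if a == b else None

class IndexedDB:
    # Facts are tuples like ('parent', 'tom', 'bob'), indexed on (functor, first argument).
    def __init__(self):
        self.facts = []
        self.index = defaultdict(list)

    def assert_fact(self, fact):
        self.facts.append(fact)
        self.index[(fact[0], fact[1])].append(fact)

    def candidates(self, goal):
        first = goal[1]
        if isinstance(first, Var):
            return self.facts                  # unbound first argument: no pruning possible
        return self.index[(goal[0], first)]    # small bucket instead of the whole database

    def query(self, goal):
        for fact in self.candidates(goal):
            s = unify(goal, fact, {})
            if s is not None:
                yield s

db = IndexedDB()
db.assert_fact(('parent', 'tom', 'bob'))
db.assert_fact(('parent', 'tom', 'liz'))
db.assert_fact(('parent', 'bob', 'ann'))

X = Var('X')
print(list(db.query(('parent', 'tom', X))))    # [{?X: 'bob'}, {?X: 'liz'}]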

RE: [agi] the universe is computable [Was: Occam's Razor and its abuse]

2008-11-02 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Thu, 10/30/08, John G. Rose [EMAIL PROTECTED] wrote: You can't compute the universe within this universe because the computation would have to include itself. Exactly. That is why our model of physics must be probabilistic

RE: [agi] Cloud Intelligence

2008-11-02 Thread John G. Rose
From: Matt Mahoney [mailto:[EMAIL PROTECTED] --- On Thu, 10/30/08, John G. Rose [EMAIL PROTECTED] wrote: From: Matt Mahoney [mailto:[EMAIL PROTECTED] Cloud computing is compatible with my proposal for distributed AGI. It's just not big enough. I would need 10^10 processors, each

RE: [agi] Cloud Intelligence

2008-11-02 Thread Matt Mahoney
--- On Sun, 11/2/08, John G. Rose [EMAIL PROTECTED] wrote: Still, though, I don't agree with your initial numbers estimate for AGI. A bit high, perhaps? Your numbers might be trimmed down based on refined assumptions. True, we can't explain why the human brain needs 10^15 synapses to
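One possible back-of-envelope reading of how a 10^10-processor figure could relate to 10^15 synapses, assuming, purely as an illustration and not a number given in the thread, roughly 10^5 synapses modeled per processor:

# Hypothetical back-of-envelope only; the synapses-per-node figure is an
# assumed value, not one anyone stated in the thread.
synapses_total    = 1e15   # rough human-brain synapse count mentioned above
synapses_per_node = 1e5    # assumed share of the model handled by one processor
nodes_needed = synapses_total / synapses_per_node
print(f"{nodes_needed:.0e} processors")   # 1e+10, i.e. the 10^10 figure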

[agi] Ben and Cassio quoted in Huffington Post Article

2008-11-02 Thread Ed Porter
Congratulations to two contributors to this list, Cassio Pennachin and Ben Goertzel, for being quoted in an article on Huffington Post, entitled "Man Versus Machine," about the role of computers in the recent financial crisis. The article is at

[agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Nathan Cook
This article (http://www.sciam.com/article.cfm?id=defining-evil) about a chatbot programmed to have an 'evil' intentionality, from Scientific American, may be of some interest to this list. Reading the researcher's personal and laboratory websites (http://www.rpi.edu/~brings/ ,

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Trent Waddington
On Mon, Nov 3, 2008 at 7:17 AM, Nathan Cook [EMAIL PROTECTED] wrote: This article (http://www.sciam.com/article.cfm?id=defining-evil) about a chatbot programmed to have an 'evil' intentionality, from Scientific American, may be of some interest to this list. Reading the researcher's personal

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Bob Mottram
http://kryten.mm.rpi.edu/PRES/SYNCHARIBM0807/sb_ka_etal_cogrobustsynchar_082107v1.mov

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Mark Waser
I've noticed lately that the paranoid fear of computers becoming intelligent and taking over the world has almost entirely disappeared from the common culture. Is this sarcasm, irony, or are you that unaware of current popular culture (i.e. Terminator Chronicles on TV, a new Terminator movie

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Trent Waddington
On Mon, Nov 3, 2008 at 6:56 AM, Mark Waser [EMAIL PROTECTED] wrote: Is this sarcasm, irony, or are you that unaware of current popular culture (i.e. Terminator Chronicles on TV, a new Terminator movie in the works, I, Robot, etc.)? The quote is from the early '80s, pre-Terminator hysteria.

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Trent Waddington
On Mon, Nov 3, 2008 at 7:50 AM, Bob Mottram [EMAIL PROTECTED] wrote: http://kryten.mm.rpi.edu/PRES/SYNCHARIBM0807/sb_ka_etal_cogrobustsynchar_082107v1.mov Is it just me or is that mov broken? The slides don't update, the audio is clipping, etc. Interesting that they're using Piaget tasks in

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Ben Goertzel
Hi, I know Selmer Bringsjord (the leader of this project) and his work fairly well. He's an interesting guy, and I'm wary of misrepresenting his views in a brief summary. But I'll try. First, an interesting point is that Selmer does not believe strong AI is possible on traditional

Re: [agi] Ben and Cassio quoted in Huffington Post Article

2008-11-02 Thread Ben Goertzel
Cassio has an MBA as well as being an AI guy ... and yah, we've done a lot of computational finance together. Of course, the reporter left out the more interesting things I said to him in our discussion ... and the same is probably the case for most of the other interviews he did. It would be

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Trent Waddington
On Mon, Nov 3, 2008 at 1:22 PM, Ben Goertzel [EMAIL PROTECTED] wrote: So, yes, his stuff is not ELIZA-like, it's based on a fairly sophisticated crisp-logic-theorem-prover back end, and a well-thought-out cognitive architecture. From what I saw in the presentation, it looks like this is an

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Ben Goertzel
In terms of MMOs, I suppose you could think of Selmer's approach as allowing scripting in a highly customized variant of Prolog ... which might not be a bad thing, but is different from creating learning systems... -- Ben G On Sun, Nov 2, 2008 at 10:51 PM, Trent Waddington [EMAIL PROTECTED]
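A toy illustration of the scripting-versus-learning distinction, not Bringsjord's actual system; the rule and fact names below are made up:

# Illustrative contrast only: behaviour scripted as ordered declarative rules
# (Prolog-style), rather than learned from data.
RULES = [
    # (required facts, action) -- every behaviour is authored by hand up front
    ({"player_hostile", "npc_armed"}, "attack"),
    ({"player_hostile"},              "flee"),
    (set(),                           "idle"),
]

def decide(facts):
    # First rule whose conditions all hold wins, like ordered Prolog clauses.
    for conditions, action in RULES:
        if conditions <= facts:
            return action

print(decide({"player_hostile", "npc_armed"}))   # attack
print(decide({"player_hostile"}))                # flee
print(decide(set()))                             # idle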

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Steve Richfield
Ben, On 11/2/08, Ben Goertzel [EMAIL PROTECTED] wrote: First, an interesting point is that Selmer does not believe strong AI is possible on traditional digital computers. Possibly related to this is that he is a serious Christian theological thinker. Taking off my AGI hat and putting on my

Re: [agi] An AI with, quote, 'evil' cognition.

2008-11-02 Thread Trent Waddington
On Mon, Nov 3, 2008 at 4:50 PM, Steve Richfield [EMAIL PROTECTED] wrote: Taking off my AGI hat and putting on my Simulated Christian hat for a moment... Must you? Trent