Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Richard Loosemore
Bryan Bishop wrote: On Wednesday 14 November 2007 11:55, Richard Loosemore wrote: I was really thinking of the data collection problem: we cannot take one brain and get full information about all those things, down to a sufficient level of detail. I do not see such a technology even over the h

[agi] Polyworld: Using Evolution to Design Artificial Intelligence

2007-11-14 Thread Jef Allbright
This may be of interest to the group. This presentation is about a potential shortcut to artificial intelligence by trading mind-design for world-design using artificial evolution. Evolutionary algorithms are a pump for turning CPU cy
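The evolutionary-algorithm "pump" described above can be sketched in miniature: a population of candidate genomes is scored, the fittest are kept, and mutated copies fill the next generation. This is a minimal illustrative sketch, not Polyworld's actual code; the bit-string genome, stand-in target behaviour, population size, and mutation rate are all assumptions made for demonstration.

```python
import random

# Minimal sketch of artificial evolution: evolve bit-string "genomes"
# toward a stand-in target behaviour. All parameters below are
# illustrative assumptions, not details from the Polyworld presentation.

TARGET = [1] * 20          # stand-in for "desired behaviour"
POP_SIZE = 30
MUTATION_RATE = 0.05       # per-bit flip probability

def fitness(genome):
    """Score a genome by how many genes match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome):
    """Return a copy with each bit flipped with small probability."""
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def evolve(generations=200, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: POP_SIZE // 2]  # truncation selection (elitist)
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # fitness climbs toward the maximum of 20
```

Each generation spends CPU cycles on fitness evaluation; selection plus mutation converts those cycles into incremental design improvement, which is the trade the presentation describes.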

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore
Mike Tintner wrote: RL:In order to completely ground the system, you need to let the system build its own symbols V. much agree with your whole argument. But - & I may well have missed some vital posts - I have yet to get the slightest inkling of how you yourself propose to do this. Well, fo

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Mike Tintner
Sounds a little confusing. Sounds like you plan to evolve a system through testing "thousands of candidate mechanisms." So one way or another you too are taking a view - even if it's an evolutionary, "I'm not taking a view" view - on, and making a lot of assumptions about, how systems evol

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Russell Wallace
On Nov 14, 2007 11:58 PM, Bryan Bishop <[EMAIL PROTECTED]> wrote: > Are we sure? How much of the real world are we able to get into our AGI > models anyway? Bandwidth is limited, much more limited than in humans > and other animals. In fact, it might be the equivalent to worm tech. > > To do the ca

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore
Bryan Bishop wrote: On Wednesday 14 November 2007 11:28, Richard Loosemore wrote: The complaint is not "your symbols are not connected to experience". Everyone and their mother has an AI system that could be connected to real world input. The simple act of connecting to the real world is NOT th

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:28, Richard Loosemore wrote: > The complaint is not "your symbols are not connected to experience". > Everyone and their mother has an AI system that could be connected to > real world input.  The simple act of connecting to the real world is > NOT the core problem.

Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Bryan Bishop
On Wednesday 14 November 2007 11:55, Richard Loosemore wrote: > I was really thinking of the data collection problem:  we cannot take > one brain and get full information about all those things, down to a > sufficient level of detail.  I do not see such a technology even over > the horizon (short o

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
On Nov 14, 2007 1:36 PM, Mike Tintner <[EMAIL PROTECTED]> wrote: > RL:In order to completely ground the system, you need to let the system > build its own symbols Correct. Novamente is designed to be able to build its own symbols. What is built in are mechanisms for building symbols, and for

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Mike Tintner
RL:In order to completely ground the system, you need to let the system build its own symbols V. much agree with your whole argument. But - & I may well have missed some vital posts - I have yet to get the slightest inkling of how you yourself propose to do this. - This list is sponsore

Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Richard Loosemore
Bryan Bishop wrote: On Tuesday 13 November 2007 09:11, Richard Loosemore wrote: This is the whole brain emulation approach, I guess (my previous comments were about evolution of brains rather than neural level duplication). Ah, you are right. But this too is an interesting topic. I think that

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore
Benjamin Goertzel wrote: Hi, No: the real concept of "lack of grounding" is nothing so simple as the way you are using the word "grounding". Lack of grounding makes an AGI fall flat on its face and not work. I can't summarize the grounding literature in one post. (Though

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Richard, > > So here I am, looking at this situation, and I see: > > AGI system intepretation (implicit in system use of it) > Human programmer intepretation > > and I ask myself which one of these is the real interpretation? > > It matters, because they do not necessarily match up.

Re: Introducing Autobliss 1.0 (was RE: [agi] Nirvana? Manyana? Never!)

2007-11-14 Thread Richard Loosemore
Matt Mahoney wrote: --- Richard Loosemore <[EMAIL PROTECTED]> wrote: Matt Mahoney wrote: --- Jiri Jelinek <[EMAIL PROTECTED]> wrote: On Nov 11, 2007 5:39 PM, Matt Mahoney <[EMAIL PROTECTED]> wrote: We just need to control AGIs goal system. You can only control the goal system of the first

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore
Benjamin Goertzel wrote: On Nov 13, 2007 2:37 PM, Richard Loosemore <[EMAIL PROTECTED] > wrote: Ben, Unfortunately what you say below is tangential to my point, which is what happens when you reach the stage where you cannot allow any more vaguenes

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Benjamin Goertzel
Hi, > > > No: the real concept of "lack of grounding" is nothing so simple as the > way you are using the word "grounding". > > Lack of grounding makes an AGI fall flat on its face and not work. > > I can't summarize the grounding literature in one post. (Though, heck, > I have actually tried t

Re: Essay - example of how the CSP bites [WAS Re: [agi] What best evidence for fast AI?]

2007-11-14 Thread Richard Loosemore
Linas Vepstas wrote: On Tue, Nov 13, 2007 at 12:34:51PM -0500, Richard Loosemore wrote: Suppose that in some significant part of Novamente there is a representation system that uses "probability" or "likelihood" numbers to encode the strength of facts, as in [I like cats](p=0.75). The (p=0.75)
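The `[I like cats](p=0.75)` notation above describes facts tagged with a strength number. A minimal sketch of such a representation is below; the class, field names, and weighted-average revision rule are hypothetical illustrations, not Novamente's actual internals.

```python
from dataclasses import dataclass

# Hypothetical sketch of a fact annotated with a probability-like
# strength, as in "[I like cats](p=0.75)". The revision rule is a
# simple weighted average chosen for illustration only.

@dataclass
class Fact:
    statement: str
    strength: float  # in [0.0, 1.0]

    def revise(self, evidence_strength, weight=0.5):
        """Blend the current strength with new evidence.
        Real systems use more principled revision rules."""
        self.strength = (1 - weight) * self.strength + weight * evidence_strength
        return self.strength

fact = Fact("I like cats", 0.75)
fact.revise(1.0, weight=0.2)    # a supporting observation nudges strength up
print(round(fact.strength, 2))  # 0.8
```

The debate in the thread is precisely about what such a number means: whether the (p=0.75) carries the system's own interpretation or only the programmer's.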

Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Lukasz Stafiniak
On Nov 14, 2007 3:48 PM, Edward W. Porter <[EMAIL PROTECTED]> wrote: > Lukasz, > > Which of the multiple issues that Mark listed is one of the two basic > directions you were referring to. > > Ed Porter > (First of all, I'm sorry for attaching my general remark as a reply: I was writing from a cell

RE: [agi] What best evidence for fast AI?

2007-11-14 Thread Edward W. Porter
Lukasz, Which of the multiple issues that Mark listed is one of the two basic directions you were referring to? Ed Porter -Original Message- From: Lukasz Stafiniak [mailto:[EMAIL PROTECTED] Sent: Wednesday, November 14, 2007 9:15 AM To: agi@v2.listbox.com Subject: Re: [agi] What best evi

Re: [agi] Relativistic irrationalism

2007-11-14 Thread Stefan Pernar
Pei, many thanks for your comments. Good input on rationality and AIXI. Kind regards, Stefan On Nov 14, 2007 10:13 PM, Pei Wang <[EMAIL PROTECTED]> wrote: > Stefan, > > Though I agree with most of your analysis on inter-agent relationship, > I don't share your conception of rationality. > > To

Re: [agi] What best evidence for fast AI?

2007-11-14 Thread Lukasz Stafiniak
I think that there are two basic directions to improve the Novamente architecture: the one Mark talks about, and more integration of MOSES with PLN and RL theory. On 11/13/07, Edward W. Porter <[EMAIL PROTECTED]> wrote: > Response to Mark Waser Mon 11/12/2007 2:42 PM post. > > > > >MARK Remember th

Re: [agi] Relativistic irrationalism

2007-11-14 Thread Pei Wang
Stefan, Though I agree with most of your analysis on inter-agent relationship, I don't share your conception of rationality. To me, "rationality" itself is relativistic, that is, what behavior/action is rational is always judged according to the assumptions and postulations on a system's goal, kn

Re: [agi] advice-level dev collaboration

2007-11-14 Thread Jiri Jelinek
Thanks for the responses. Sorry, I picked just a couple of folks. Dealing with the wide audience of the whole AGI list would IMO make things more difficult for me. I may share selected stuff later. Regards, Jiri Jelinek