[agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ben Goertzel
Hi all, I intend to submit the following paper to JAGI shortly, but I figured I'd run it past you folks on this list first, and incorporate any useful feedback into the draft I submit. This is an attempt to articulate a virtual world infrastructure that will be adequate for the development of human-level AGI: http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Eric Burton
Goertzel, this is an interesting line of investigation. What about in-world sound perception?

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ben Goertzel
It's actually mentioned there, though not emphasized... there's a section on senses... ben g

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-09 Thread Ronald C. Blue
Not really related to your topic, but it sort of is. Many years ago Disney made a movie about an alien cat that was telepathic and came to Earth in a flying saucer. A stupid movie, because cats cannot develop the technology to do this. Recently I realized that while cats cannot do this a

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Nathan Cook
What about vibration? We have specialized mechanoreceptors to detect vibration (actually vibration and pressure - presumably there's processing to separate the two). It's vibration that lets us feel fine texture, via the stick-slip friction between fingertip and object. On a related note, even a very fine powder of very low friction feels different to water - how can you capture the sensation of water using beads and blocks of a reasonably large size?
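
The stick-slip mechanism described above can be illustrated with the textbook single-degree-of-freedom friction model (a contact patch dragged by a spring over a surface with static and kinetic friction). The sketch below is only that generic model with arbitrary parameter values, not anything taken from the BlocksNBeadsWorld paper:

import math

def stick_slip_trace(mu_s=0.6, mu_k=0.4, m=0.01, k=50.0, v_drive=0.02,
                     normal=1.0, dt=1e-4, steps=20000):
    """Friction force felt at a 'fingertip' dragged across a surface."""
    x, v = 0.0, 0.0          # position and velocity of the contact patch
    trace = []
    for i in range(steps):
        spring = k * (v_drive * i * dt - x)      # pull from the moving finger
        if v == 0.0 and abs(spring) <= mu_s * normal:
            friction = -spring                   # stuck: static friction holds
        else:
            direction = v if v != 0.0 else spring
            friction = -math.copysign(mu_k * normal, direction)
            v_new = v + (spring + friction) / m * dt
            v = 0.0 if v * v_new < 0.0 else v_new   # re-stick on zero crossing
            x += v * dt
        trace.append(friction)
    return trace

# The force trace is a sawtooth: slow build-up while stuck, sudden drop on
# each slip. That oscillation is the 'vibration' a fingertip would sense.
trace = stick_slip_trace()
print(max(trace) - min(trace))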

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ben Goertzel
On Sat, Jan 10, 2009 at 4:27 PM, Nathan Cook wrote: > What about vibration? We have specialized mechanoreceptors to detect > vibration (actually vibration and pressure - presumably there's processing > to separate the two). It's vibration that lets us feel fine texture, via the > stick-slip fricti

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Lukasz Stafiniak
On Sat, Jan 10, 2009 at 11:02 PM, Ben Goertzel wrote: >> On a related note, even a very fine powder of very low friction feels >> different to water - how can you capture the sensation of water using beads >> and blocks of a reasonably large size? > > The objective of a CogDevWorld such as BlocksN

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Nathan Cook
2009/1/10 Lukasz Stafiniak : > On Sat, Jan 10, 2009 at 11:02 PM, Ben Goertzel wrote: >>> On a related note, even a very fine powder of very low friction feels >>> different to water - how can you capture the sensation of water using beads >>> and blocks of a reasonably large size? >> >> The object

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ben Goertzel
> The model feels underspecified to me, but I'm OK with that; the ideas are > conveyed. It doesn't feel fair to insist there's no fluid dynamics > modeled, though ;-) Yes, the next step would be to write out detailed equations for the model. I didn't do that in the paper because I figured that would b

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-10 Thread Ronald C. Blue
molecule. ----- Original Message ----- From: Nathan Cook What about vibration? We have specialized mechanoreceptors to detect

RE: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-11 Thread Benjamin Johnston
-Ben -----Original Message----- From: Ben Goertzel Hi all, I intend to submit the following paper to JAGI shortly, but

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-11 Thread Ben Goertzel
> > I outlined the basic principle in this paper: > http://www.comirit.com/papers/commonsense07.pdf > Since then, I've changed some of the details a bit (some were described in > my AGI-08 paper), added convex hulls and experimented with more laws of > physics; but the bas

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-11 Thread Linas Vepstas
2009/1/10 Nathan Cook : > What about vibration? We have specialized mechanoreceptors to detect > vibration (actually vibration and pressure - presumably there's processing > to separate the two). It's vibration that lets us feel fine texture, via the > stick-slip friction between fingertip and obje

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-11 Thread Ben Goertzel
Linas, I wrote a paper once speculating about quantum minds -- minds with sensors directly at the quantum level ... I am sure they would develop radically different cognitive structures than ours, perhaps including reasoning with quantum logic and quantum probability theory ... which would

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Russell Wallace
I think this sort of virtual world is an excellent idea. I agree with Benjamin Johnston's idea of a unified object model where everything consists of beads. I notice you mentioned distributing the computation. This would certainly be valuable in the long run, but for the first version I would sug

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Ben Goertzel
The problem with simulations that run slower than real time is that they aren't much good for running AIs interactively with humans... and for AGI we want the combination of social and physical interaction. However, I agree that for an initial prototype implementation of bead physics that would be

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Benjamin Johnston
I think this sort of virtual world is an excellent idea. I agree with Benjamin Johnston's idea of a unified object model where everything consists of beads. I notice you mentioned distributing the computation. This would certainly be valuable in the long run, but for the first version I would

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-12 Thread Russell Wallace
On Tue, Jan 13, 2009 at 1:22 AM, Benjamin Johnston wrote: > Actually, I think it would be easier, more useful and more portable to > distribute the computation rather than trying to make it run on a GPU. If it would be easier, fair enough; I've never programmed a GPU, so I don't really know how d

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread William Pearson
2009/1/9 Ben Goertzel : > This is an attempt to articulate a virtual world infrastructure that > will be adequate for the development of human-level AGI > > http://www.goertzel.org/papers/BlocksNBeadsWorld.pdf goertzel.org seems to be down, so I can't refresh my memory of the paper. > Most of the

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Yes, I'm expecting the AI to make tools from blocks and beads. No, I'm not attempting to make a detailed simulation of the human brain/body, just trying to use vaguely humanlike embodiment and high-level mind-architecture together with computer science algorithms, to achieve AGI.

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread William Pearson
2009/1/13 Ben Goertzel : > Yes, I'm expecting the AI to make tools from blocks and beads > > No, i'm not attempting to make a detailed simulation of the human > brain/body, just trying to use vaguely humanlike embodiment and > high-level mind-architecture together with computer science > algorithms

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Hi, > Since I can now get to the paper, some further thoughts. Concepts that > would seem hard to form in your world are organic growth and phase > changes of materials. Also, naive chemistry would seem to be somewhat > important (cooking, dissolving materials, burning: these are things > that a pre-

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
Melting and boiling at least should be doable: assign every bead a temperature, and let solid interbead bonds turn liquid above a certain temperature and disappear completely above some higher temperature.
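
A minimal Python sketch of that suggestion; the thresholds, class names, and three-state bond are illustrative assumptions rather than anything specified in the paper:

from dataclasses import dataclass

MELT_T = 273.0   # above this, a solid bond becomes liquid (beads slide freely)
BOIL_T = 373.0   # above this, the bond disappears and the beads drift apart

@dataclass
class Bead:
    temperature: float

@dataclass
class Bond:
    a: Bead
    b: Bead
    state: str = "solid"   # "solid" | "liquid" | "broken"

def update_bond(bond: Bond) -> None:
    """Set the bond state from the mean temperature of its two beads."""
    t = 0.5 * (bond.a.temperature + bond.b.temperature)
    if t > BOIL_T:
        bond.state = "broken"
    elif t > MELT_T:
        bond.state = "liquid"
    else:
        bond.state = "solid"

ice = Bond(Bead(260.0), Bead(260.0))
update_bond(ice)
print(ice.state)   # 'solid'; raise both temperatures past 373 and it breaks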

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
And it occurs to me you could even have fire. Let fire be an element whose beads have negative gravitational mass. Beads of fuel elements like wood have a threshold temperature above which they will turn into fire beads, with release of additional heat.
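
Continuing the same toy sketch, fire could be one more bead element; the ignition threshold, heat value, and field names below are again made-up illustrations of the idea, not details from the paper:

from dataclasses import dataclass

IGNITION_T = 500.0    # fuel beads convert to fire above this temperature
HEAT_RELEASE = 150.0  # heat added to each neighbouring bead on ignition

@dataclass
class Bead:
    element: str        # e.g. "wood", "fire", "water" (richer than the Bead above)
    temperature: float
    grav_mass: float    # negative for fire beads, so they rise

def step_combustion(bead: Bead, neighbours: list) -> None:
    """Turn hot fuel into fire and spread the released heat to adjacent beads."""
    if bead.element == "wood" and bead.temperature > IGNITION_T:
        bead.element = "fire"
        bead.grav_mass = -0.1              # negative mass: flames float upward
        for n in neighbours:
            n.temperature += HEAT_RELEASE  # released heat can ignite neighbours

log = Bead("wood", 520.0, 1.0)
kindling = [Bead("wood", 400.0, 1.0)]
step_combustion(log, kindling)
print(log.element, kindling[0].temperature)   # fire 550.0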

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Indeed... but cake-baking just won't have the same nuances ;-)

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Russell Wallace
Yeah :-) though boiling an egg by putting it in a pot of boiling water, that much I think should be doable.

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
2009/1/12 Ben Goertzel : > The problem with simulations that run slower than real time is that > they aren't much good for running AIs interactively with humans... and > for AGI we want the combination of social and physical interaction. There's plenty you can do with real-time interaction. OTOH,

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Philip Hunt
2009/1/9 Ben Goertzel : > Hi all, > > I intend to submit the following paper to JAGI shortly, but I figured > I'd run it past you folks on this list first, and incorporate any > useful feedback into the draft I submit. Perhaps the paper could go into more detail about what sensory input the AGI wou

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
Actually, I view that as a matter for the AGI system, not the world. Different AGI systems hooked up to the same world may choose to receive different inputs from it. Binocular vision, for instance, is not necessary in a virtual world, and some AGIs might want to use it whereas others don't...
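
One hypothetical way to picture that split is for the world to expose raw state while each AGI assembles only the channels it wants. Every name below, including the stub world-state methods, is invented for illustration and does not come from the paper:

class AgentSensorConfig:
    """Per-agent choice of sensory channels; the world itself stays neutral."""
    def __init__(self, eyes=1, sound=True, touch=True):
        self.eyes = eyes        # 0 = blind, 1 = monocular, 2 = binocular
        self.sound = sound
        self.touch = touch

def build_percepts(world_state, cfg):
    """Assemble only the channels this particular agent asked for."""
    percepts = {}
    for i in range(cfg.eyes):
        percepts[f"eye_{i}"] = world_state.render_view(eye_index=i)
    if cfg.sound:
        percepts["audio"] = world_state.sample_audio()
    if cfg.touch:
        percepts["touch"] = world_state.contact_forces()
    return percepts

class WorldStateStub:
    """Stand-in world state so the sketch runs on its own."""
    def render_view(self, eye_index): return f"image-from-eye-{eye_index}"
    def sample_audio(self): return [0.0] * 16
    def contact_forces(self): return {}

print(build_percepts(WorldStateStub(), AgentSensorConfig(eyes=2)))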

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Matt Mahoney
> Hi all, > > I intend to submit the following paper to JAGI shortly, but > I figured > I'd run it past you folk

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Ben Goertzel
; that. Simple problems have simple solutions, but that's not AGI. > -- Matt Mahoney, matmaho...@yahoo.com

Re: [agi] What Must a World Be That a Humanlike Intelligence May Develop In It?

2009-01-13 Thread Matt Mahoney
--- On Tue, 1/13/09, Ben Goertzel wrote: > The complexity of a simulated environment is tricky to estimate if > the environment contains complex self-organizing dynamics, random > number generation, and complex human interactions ... In fact, it's not computable. But if you write 10^6 bits of code
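
The truncated point appears to be heading toward the standard algorithmic-complexity bound, stated here only to make the arithmetic explicit (this is a textbook inequality, not a quotation from Matt's message): a generated world is never more complex than the program that generates it plus the inputs fed to it,

    K(E) \le |p| + |s| + O(\log |p|)

where E is the simulated environment, p the simulator (on the order of 10^6 bits of code), s the random seeds and recorded human interactions driving it, and the logarithmic term only covers delimiting the two parts.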