Re: [agi] general weak ai

2007-03-11 Thread J. Storrs Hall, PhD.
On Saturday 10 March 2007 14:36, Andrew Babian wrote: > I can't speak for Minsky, but I would wonder what advantage would there be > for having only one agent? An arbitrator. You have only one body, and it would be counterproductive for it to try to do different things at the same time. (It's c
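
[Hall's point (many agents, one body, so one arbiter) can be made concrete. Below is a minimal sketch, not from the thread, of winner-take-all arbitration; agent_t, activation, and arbitrate are hypothetical names.]

#include <stddef.h>

typedef struct {
    const char *name;     /* e.g. a "build-a-deck" agent */
    double activation;    /* how strongly this agent bids for the body */
} agent_t;

/* Winner-take-all arbitration: return the index of the most activated
 * agent; only that agent gets to drive the single body this cycle. */
static size_t arbitrate(const agent_t *agents, size_t n) {
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (agents[i].activation > agents[best].activation)
            best = i;
    return best;
}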

Re: [agi] general weak ai

2007-03-10 Thread Andrew Babian
I can't speak for Minsky, but I would wonder what advantage would there be for having only one agent? I think he talks about the disadvantages. How is it going to deal with naturally different sorts of management problems and information? It seems like it's just a better approach to have a system

Re: [agi] general weak ai

2007-03-10 Thread J. Storrs Hall, PhD.
This is one of those points on which SoM sayeth not. In my architecture, yes, there's one (or many) for each task, and they're mostly generated. What the system has ahead of time is learning biases for kinds of tasks that make it easier to learn various things. The agents that are built in are

Re: [agi] general weak ai

2007-03-09 Thread Pei Wang
On 3/9/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote: Thanks for the insight. I guess when people put their papers online they don't often bother with adding the date in and that's why I see so many without them. If they do that, the on-line version will be slightly different from the publish

Re: [agi] general weak ai

2007-03-09 Thread Chuck Esterbrook
On 3/9/07, Pei Wang <[EMAIL PROTECTED]> wrote: On 3/9/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote: > On 3/6/07, Pei Wang <[EMAIL PROTECTED]> wrote: > > A more detailed discussion is in > > http://nars.wang.googlepages.com/wang.WhatAIShouldBe.pdf > > One can usually infer the approximate date o

Re: [agi] general weak ai

2007-03-09 Thread Pei Wang
On 3/9/07, Chuck Esterbrook <[EMAIL PROTECTED]> wrote: On 3/6/07, Pei Wang <[EMAIL PROTECTED]> wrote: > A more detailed discussion is in > http://nars.wang.googlepages.com/wang.WhatAIShouldBe.pdf One can usually infer the approximate date of such a paper from the references, but not having a dat

Re: [agi] general weak ai

2007-03-09 Thread Chuck Esterbrook
On 3/6/07, Pei Wang <[EMAIL PROTECTED]> wrote: A more detailed discussion is in http://nars.wang.googlepages.com/wang.WhatAIShouldBe.pdf One can usually infer the approximate date of such a paper from the references, but not having a date still seems odd especially considering that these papers

Re: [agi] general weak ai

2007-03-09 Thread Pei Wang
Do you mean that the system has an agent for every task it is going to take? Are the agents generated by the system whenever needed, or coded by the designer in advance? Pei On 3/9/07, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: Not at all. The agent that does the pointing is just a "build

Re: [agi] general weak ai

2007-03-09 Thread Russell Wallace
On 3/9/07, Charles D Hixson <[EMAIL PROTECTED]> wrote: You aren't requesting it of the person, you're requesting it of the AI. In other words, you are insisting that the AI demonstrate more capabilities (in a restricted domain, admittedly) than an average person before you will admit that it is

Re: [agi] general weak ai

2007-03-09 Thread J. Storrs Hall, PhD.
Not at all. The agent that does the pointing is just a "build a deck" agent (or, more likely, a "society of deck") that gets activated when deck-building is the thing to do. I don't know Minsky's ultimate take on the subject, but I don't see any problem with putting one agent in charge of the w

Re: [agi] general weak ai

2007-03-09 Thread Charles D Hixson
Russell Wallace wrote: On 3/9/07, *Charles D Hixson* <[EMAIL PROTECTED]> wrote: Russell Wallace wrote: > To test whether a program understands a story, start by having it > generate an animated movie of the story. >

Re: [agi] general weak ai

2007-03-09 Thread Russell Wallace
On 3/9/07, Charles D Hixson <[EMAIL PROTECTED]> wrote: Russell Wallace wrote: > To test whether a program understands a story, start by having it > generate an animated movie of the story. > Nearly every person I know would

Re: [agi] general weak ai

2007-03-09 Thread Jef Allbright
On 3/9/07, Pei Wang <[EMAIL PROTECTED]> wrote: On 3/9/07, Jef Allbright <[EMAIL PROTECTED]> wrote: > > We seem to have skipped over my point about intelligence being about > the encoding of regularities of effective interaction of an agent with > its environment, but perhaps that is now moot. No

Re: [agi] general weak ai

2007-03-09 Thread Pei Wang
On 3/9/07, Jef Allbright <[EMAIL PROTECTED]> wrote: We seem to have skipped over my point about intelligence being about the encoding of regularities of effective interaction of an agent with its environment, but perhaps that is now moot. Now I see you use "information" to mean "regularities o

Re: [agi] general weak ai

2007-03-09 Thread Jef Allbright
On 3/9/07, Pei Wang <[EMAIL PROTECTED]> wrote: On 3/9/07, Jef Allbright <[EMAIL PROTECTED]> wrote: Thanks for the clarification. You can surely call it "high-level functional description", but what I mean is that it is not an ordinary high-level functional description, but a concrete expectati

Re: [agi] general weak ai

2007-03-09 Thread Pei Wang
On 3/9/07, Jef Allbright <[EMAIL PROTECTED]> wrote: > *. I'm not promoting the "toolbox" point of view (nor Society of Mind, > on this issue), but refuting it --- sorry if I didn't make it clear. I clearly understood that, but you only raised the level of the problem, rather than resolving it.

Re: [agi] general weak ai

2007-03-09 Thread Charles D Hixson
Russell Wallace wrote: On 3/7/07, *Ben Goertzel* <[EMAIL PROTECTED]> wrote: A more interesting question to think about, rather than how to represent a story in a formal language, is: How would you convince yourself that your AGI actually understood a s

Re: [agi] general weak ai

2007-03-09 Thread Jef Allbright
On 3/9/07, Jef Allbright <[EMAIL PROTECTED]> wrote: Thanks Pei. Please consider this more a seed of thought than an argument since I recognize I lack the personal resources to argue it to completion. I modestly and humbly offer it as a more consistent way of looking at intentional systems in ge

Re: [agi] general weak ai

2007-03-09 Thread Jef Allbright
On 3/9/07, Pei Wang <[EMAIL PROTECTED]> wrote: Just to clarify a few points: Thanks Pei. Clarification (in the sense of encompassing knowledge, rather than refuting it) is always a step in the right direction. *. I see the intended humor in the parallel sentences, but fail to recognize the s

Re: [agi] general weak ai

2007-03-09 Thread Pei Wang
Just to clarify a few points: *. I see the intended humor in the parallel sentences, but fail to recognize the similarity between these two topics. *. I'm not promoting the "toolbox" point of view (nor Society of Mind, on this issue), but refuting it --- sorry if I didn't make it clear. *. I co

Re: [agi] general weak ai

2007-03-09 Thread Bo Morgan
Right, you would need the Saw to say "hey I can cut that table-leg or that ladder or I could cut your hand". And then you would need the hammer to say similar things that it could do. Then you would need another agent resource to say okay, it doesn't make any sense to cut that table right now

Re: [agi] general weak ai

2007-03-09 Thread Jef Allbright
On 3/9/07, Pei Wang <[EMAIL PROTECTED]> wrote: On 3/9/07, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: > > If I understand Minsky's Society of Mind, the basic idea is to have the tools > be such that you can build your deck by first pointing at the saw and saying > "you do your thing" and the

Re: [agi] general weak ai

2007-03-09 Thread Mike Dougherty
On 3/9/07, Pei Wang <[EMAIL PROTECTED]> wrote: This understanding assumes a "you" who does the "pointing", which is a central controller not assumed in the Society of Mind. To see intelligence as a toolbox, we would have to assume that somehow the saw, hammer, etc. can figure out what they should

Re: [agi] general weak ai

2007-03-09 Thread Pei Wang
On 3/9/07, J. Storrs Hall, PhD. <[EMAIL PROTECTED]> wrote: If I understand Minsky's Society of Mind, the basic idea is to have the tools be such that you can build your deck by first pointing at the saw and saying "you do your thing" and then pointing at the hammer, etc. The tools are then in tu

Re: [agi] general weak ai

2007-03-09 Thread J. Storrs Hall, PhD.
On Thursday 08 March 2007 17:42, Mike Dougherty wrote: > Yeah, if I leave a workbench worth of carpentry tools on a pile of > lumber, I don't expect to have an emergent deck arise... If I understand Minsky's Society of Mind, the basic idea is to have the tools be such that you can build your dec

Re: [agi] general weak ai

2007-03-08 Thread Mike Dougherty
On 3/6/07, Ben Goertzel <[EMAIL PROTECTED]> wrote: > Well what is intelligence if not a collection of tools? One of the hardest Thinking of a mind as a toolkit is misleading. A mind must contain a collection of tools that synergize together so as to give rise to the appropriate high-level eme

Re: [agi] general weak ai

2007-03-07 Thread Russell Wallace
On 3/7/07, Ben Goertzel <[EMAIL PROTECTED]> wrote: A more interesting question to think about, rather than how to represent a story in a formal language, is: How would you convince yourself that your AGI actually understood a story? What kind of question-answers or behaviors would convince you

Re: [agi] general weak ai

2007-03-07 Thread Ben Goertzel
YKY (Yan King Yin) wrote: I agree with Ben and Pei etc. on this issue. Narrow AI is VERY different from general AI. It is not at all easy to integrate several narrow AI applications into a single, functioning system. I have never heard of something like this being done, even for two computer

Re: [agi] general weak ai

2007-03-07 Thread YKY (Yan King Yin)
I agree with Ben and Pei etc. on this issue. Narrow AI is VERY different from general AI. It is not at all easy to integrate several narrow AI applications into a single, functioning system. I have never heard of something like this being done, even for two computer vision programs. IMO what we n

Re: [agi] general weak ai

2007-03-07 Thread Eugen Leitl
On Wed, Mar 07, 2007 at 09:49:47AM +, Russell Wallace wrote: >I'm going to predict that once you start trying to do serious >simulation, 8-bit integers will turn out to be entirely inadequate and "Serious" depends utterly on context. If you want to do a superrealtime reality simulator f

Re: [agi] general weak ai

2007-03-07 Thread Bob Mottram
I can confirm from practical experimentation that 8-bit integers are too coarse to be able to model the probability density of a three-dimensional space using the classic occupancy grid mapping method, but that you can just about get away with using 16 bits for some applications. Personally I'm usi
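
[A minimal sketch, not Mottram's actual code, of the classic log-odds occupancy grid update he refers to, using signed 16-bit fixed-point cells. The increments and clamp bounds are hypothetical; with 8-bit cells the same increments would saturate after only a few observations, which is the coarseness problem described.]

#include <stdint.h>

#define GRID_DIM 128

/* One signed 16-bit log-odds value per cell; zero means "unknown". */
static int16_t grid[GRID_DIM][GRID_DIM][GRID_DIM];

/* usage: update_cell(&grid[x][y][z], sensor_says_occupied); */
static void update_cell(int16_t *cell, int observed_occupied) {
    const int16_t delta = observed_occupied ? 40 : -10; /* hypothetical sensor model */
    int32_t v = (int32_t)*cell + delta;
    if (v >  32000) v =  32000;   /* clamp to avoid 16-bit wraparound */
    if (v < -32000) v = -32000;
    *cell = (int16_t)v;
}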

Re: [agi] general weak ai

2007-03-07 Thread Russell Wallace
On 3/7/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: Anything vaguely physical, and doing long-range interactions by iteration of overlapping local neighbourhoods. It's not much of a constraint. Of course, you have to add more data to the volume element, depending on what you want to do. I'm goi

Re: [agi] general weak ai

2007-03-07 Thread Eugen Leitl
On Tue, Mar 06, 2007 at 08:33:13PM +, Russell Wallace wrote: >What simulation algorithms did you have in mind with that data Anything vaguely physical, and doing long-range interactions by iteration of overlapping local neighbourhoods. It's not much of a constraint. Of course, you have to

Re: [agi] general weak ai

2007-03-06 Thread Russell Wallace
On 3/6/07, Eugen Leitl <[EMAIL PROTECTED]> wrote: Consider voxels. Most agents don't have to deal with anything remote at a high-precision level. A nice structure to use with object positions is to use short-integer voxel-relative coordinates. Something like typedef struct voxel_struct {
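
[The preview truncates Leitl's struct declaration. A minimal sketch of what such a voxel record might look like, assuming the short-integer voxel-relative coordinates he describes; the field names and the fixed-size object table are assumptions.]

#include <stdint.h>

#define MAX_OBJECTS_PER_VOXEL 16

typedef struct object_struct {
    int16_t x, y, z;   /* short-integer position relative to this voxel */
    uint16_t type;     /* object class identifier */
} object_t;

typedef struct voxel_struct {
    int32_t vx, vy, vz;                       /* voxel coordinates in the grid */
    uint16_t n_objects;                       /* objects currently in this voxel */
    object_t objects[MAX_OBJECTS_PER_VOXEL];  /* local object table */
} voxel_t;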

Re: [agi] general weak ai

2007-03-06 Thread Eugen Leitl
On Tue, Mar 06, 2007 at 02:12:10PM -0500, Ben Goertzel wrote: > For a somewhat recent discussion of issues regarding storing and > querying spatiotemporal objects, see: > > http://citeseer.ist.psu.edu/hadjieleftheriou02efficient.html > > They describe various tree data-structures that are partic

Re: [agi] general weak ai

2007-03-06 Thread J. Storrs Hall, PhD.
On Tuesday 06 March 2007 13:34, Mark Waser wrote: > > Another, simpler example is indexing items via time and space: you need > > to be able to submit a spatial and/or temporal region as a query and find > > items relevant to that region of spacetime. > > A near query where you pin down one entity

Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel
Mark Waser wrote: Just polynomially expensive, I believe Depends upon whether you're fully connected or not but yeah, yeah . . . . Another, simpler example is indexing items via time and space: you need to be able to submit a spatial and/or temporal region as a query and find items relevant
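
[As a concrete reading of the region query Goertzel describes: a minimal brute-force sketch in C, an axis-aligned spatial box plus a time interval, scanned linearly. The tree structures in the cited Hadjieleftheriou et al. paper (R-tree variants) answer the same query without the linear scan; all names below are hypothetical.]

#include <stddef.h>

typedef struct {
    double x, y, z;   /* item position */
    double t;         /* item timestamp */
    int id;
} item_t;

typedef struct {
    double x0, x1, y0, y1, z0, z1;  /* spatial box */
    double t0, t1;                  /* time interval */
} region_t;

/* Return the ids of items falling inside the spacetime region. */
static size_t query_region(const item_t *items, size_t n,
                           const region_t *r, int *out, size_t max_out) {
    size_t found = 0;
    for (size_t i = 0; i < n && found < max_out; i++) {
        const item_t *it = &items[i];
        if (it->x >= r->x0 && it->x <= r->x1 &&
            it->y >= r->y0 && it->y <= r->y1 &&
            it->z >= r->z0 && it->z <= r->z1 &&
            it->t >= r->t0 && it->t <= r->t1)
            out[found++] = it->id;
    }
    return found;
}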

Re: [agi] general weak ai

2007-03-06 Thread Mark Waser
thing). - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Tuesday, March 06, 2007 12:44 PM Subject: Re: [agi] general weak ai Mark Waser wrote: I like the idea of exploiting the biased statistics of actual changes to the grid in real situations,

Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel
- Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Tuesday, March 06, 2007 11:42 AM Subject: Re: [agi] general weak ai Bob Mottram wrote: What attracted me about the DP method was that it's less ad-hoc than landmark-based systems, but the most attractiv

Re: [agi] general weak ai

2007-03-06 Thread Mark Waser
/updating methods in Novamente, as they are critical to managing large amounts of data in real-time. May I, again, request some details? Thanks! - Original Message - From: "Ben Goertzel" <[EMAIL PROTECTED]> To: Sent: Tuesday, March 06, 2007 11:42 AM Subject: Re: [agi]

Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel
Bob Mottram wrote: What attracted me about the DP method was that it's less ad-hoc than landmark-based systems, but the most attractive feature is of course the linear scaling, which is really essential when dealing with large amounts of data. Yeah... In other contexts, we have paid a lot of

Re: [agi] general weak ai

2007-03-06 Thread Bob Mottram
What attracted me about the DP method was that it's less ad-hoc than landmark-based systems, but the most attractive feature is of course the linear scaling, which is really essential when dealing with large amounts of data. On 06/03/07, Ben Goertzel <[EMAIL PROTECTED]> wrote: Thanks, this stu

Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel
Bob Mottram wrote: I don't have an overview document as such, but I'm adding stuff into the wiki as needed. Actually there is very little which is unique about my approach. Almost all of the ideas which I'm using originated elsewhere, and many of them have been around for 20 years or so. A

Re: [agi] general weak ai

2007-03-06 Thread Bob Mottram
I don't have an overview document as such, but I'm adding stuff into the wiki as needed. Actually there is very little which is unique about my approach. Almost all of the ideas which I'm using originated elsewhere, and many of them have been around for 20 years or so. All I'm really doing is b

Re: [agi] general weak ai

2007-03-06 Thread Pei Wang
On 3/6/07, Andrew Babian <[EMAIL PROTECTED]> wrote: Well what is intelligence if not a collection of tools? To me, this widely accepted attitude towards AI is a major reason for the lack of progress in AGI in the past decades. A metaphor I have been using is: while computer science and the so

Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel
Well what is intelligence if not a collection of tools? One of the hardest problems is coming up with such tools that are generalizable across domains, but can't that just be a question of finding more tools that work well in a computer environment, instead of just finding the "ultimate princip

Re: [agi] general weak ai

2007-03-06 Thread Andrew Babian
On Tue, 6 Mar 2007 09:49:47 +, Bob Mottram wrote > Some of the 3D reconstruction stuff being done now is quite impressive (I'm thinking of things like photosynth, monoSLAM and Moravec's stereo vision) and this kind of capability to take raw sensor data and turn it into useful 3D models which m

Re: [agi] general weak ai

2007-03-06 Thread Ben Goertzel
Hi Bob, Is there a document somewhere describing what is unique about your approach? Novamente doesn't involve real robotics right now but the design does involve occupancy grids and "probabilistic simulated robotics", so your ideas are of some practical interest to me... Ben Bob Mottram w

Re: [agi] general weak ai

2007-03-06 Thread Bob Mottram
Some of the 3D reconstruction stuff being done now is quite impressive (I'm thinking of things like photosynth, monoSLAM and Moravec's stereo vision) and this kind of capability to take raw sensor data and turn it into useful 3D models which may then be cogitated upon would be a basic prerequisite

Re: [agi] general weak ai

2007-03-05 Thread Ben Goertzel
Sure, there is a question of how to generally handle knowledge problems, but it may just be that the best way to handle AI is just to individually find the best ways for computers to solve the different problems posed to intelligences. That's actually one of the ideas that I seem to get from

[agi] general weak ai

2007-03-05 Thread Andrew Babian
Listening to a computer vision lecture, I'm impressed at how much is being done now with very domain-specific techniques. They can take general pictures from different viewpoints, and recreate a 3-d representation of the world. This is similar to the sort of stereo reconstruction that people do.