Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-12 Thread Eric Burton
I think it's normal for tempers to flare during a depression. This
kind of technology really pays for itself. The only thing that matters
is the code.

Eric B

On 10/12/08, Ben Goertzel [EMAIL PROTECTED] wrote:
 No idea, Mentifex ... I haven't filtered out any of your messages (or
 anyone's) ... but sometimes messages get held up at listbox.com by their
 automated spam filters (or for other random reasons) and I take too long to
 log in there and approve them...

 ben


 

 Well, how come my posts aren't getting through? (Going out
 to the list) What do you call that?

 ATM/Mentifex
 --
 http://code.google.com/p/mindforth/




 --
 Ben Goertzel, PhD
 CEO, Novamente LLC and Biomind LLC
 Director of Research, SIAI
 [EMAIL PROTECTED]

 Nothing will ever be attempted if all possible objections must be first
 overcome   - Dr Samuel Johnson








Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Ben Goertzel
Brad,



 But, human intelligence is not the only general intelligence we can imagine
 or create.  IMHO, we can get to human-beneficial, non-human-like (but,
 still, human-inspired) general intelligence much quicker if, at least for
 AGI 1.0, we avoid the twin productivity sinks of NLU and embodiment.

 In the end, of course, both of us really have only our opinions.  You can't
 prove the OCP approach, novel though it may be, will, finally, crack the
 elusive NLU problem.  I can't prove it won't.  I agree, therefore, that we
 should agree to disagree and let history sort things out.



However, the OCP approach has been described publicly in detail, so those
details can be discussed if you care to take the time and effort to
understand them.  It is not realistic to expect Dave or me or others to
repeat the details of the OCP technical documentation in emails on this
list.

OTOH, your proposed alternative has not been articulated anywhere that we
could read, has it?

Your counterargument is not about the particulars of the OCP approach but
just about the general idea of using language or embodiment so as to create
a system that can be taught.

If you want to keep the discussion on this high level of generality, then
why don't you answer this question: **What is your alternative to
teaching??**

I.e., to me, the main point of having intelligence coevolve with linguistic
and body-control/perception capability in an AGI is to enable humans to
teach that AGI effectively.

If you make an AGI without any ability for linguistic, gestural, etc.
interaction with humans -- how will you teach it???

Or is your idea that it will be **so** smart from the get-go, on its own,
that you won't need to teach it?  That it will gain all the knowledge it
needs by pure unsupervised pattern-recognition from the environment based on
seeking to achieve its goals and learning from experience?

In my view, that is possible but much much much harder.  It's hard enough to
learn to cope with the world when you have a teacher, parent, etc.   This
seems to be a basic point of logic or information theory rather than
anything particular about OCP.

thx
Ben G





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Brad Paulsen

Dave,

Sorry to reply so tardily.  I had to devote some time to other, pressing 
matters.


First, a general comment.  Some posters on this list have, in recent days, 
taken a very interesting approach to arguing their case.  I believe this 
approach was evinced most recently, and most baldly, by list newbie Colin 
Hales.  Apparently, Mr. Hales doesn't believe he is responsible for making 
his arguments clear or backing up his assertions of fact.  Rather, it is we 
who must educate ourselves about the background for his particular arguments 
and assertions so we will be able to prove those arguments and assertions to 
ourselves for him.  Now, THAT's an ego!  This approach, of course, raises the 
question, "Why should I care what Mr. Hales argues or asserts if he isn't 
going to take the time and effort needed to convince me he's not just some 
poseur?"


Does anyone else find this a tad insulting?  It's tantamount to saying, 
"Well, I've made my argument in English.  It may not be perfectly clear to 
you because, to really understand it, you need to be fluent in Esperanto. 
If you're not, that's your problem.  Go learn Esperanto so you can 
understand my fabulous reasoning and be convinced of my argument's 
veracity."  Get real.


I can (and will) ignore Mr. Hales.  But, then, you used this same approach 
in your last post to me when you wrote, "The OCP approach/strategy, both 
in crucial specifics of its parts and particularly in its total synthesis, 
*IS* novel; I recommend a closer re-examination!"


I think not. If you really care whether I think OCP's approach is novel, 
you have to convince me, not give me homework.  I'm not arguing OCP's 
position.  You are.  If you think I don't understand OCP well enough, and 
if you think that is important to get me to take your argument seriously, 
then it's up to you to do the heavy lifting.


In this case, though, I'll let you off the hook by gladly conceding the 
point.  I will accept as true the proposition that OCP's approach to NLU is 
completely novel.  Of course, I do this gladly because it makes not a bit 
of difference.


In the first place, I didn't argue that the OCP approach was not novel in 
either its design or implementation.  In fact, I'm sure it is.  I argued 
that trying to solve the artificial intelligence problem by first 
solving the NLU problem is not a novel strategy.  We have Mr. Turing to 
thank for it.  It has been tried before.  It has, to date, always failed.


But, as I said, this makes no difference simply because the fact that the 
OCP strategy is novel doesn't prove it will work.  Indeed, it's not even 
good evidence.  Prior approaches that failed were also once novel.


If the problem of NLU is AI-complete (and this is widely believed to be the 
case), it will not fall to a finite algorithm with space/time complexity 
small enough to make it viable in a real-time AGI.  If NLU turns out not to 
be AI-complete, then we still have fifty years of past failed effort by 
many intelligent, sincere and dedicated people to support the argument that 
it is at least a very difficult problem.


My point has been, and still is, that NLU becomes a necessary condition of 
AGI IFF we define AGI as AGHI.  Many people simply can't conceive of a 
general intelligence that isn't human-like.  This is understandable since 
the only general intelligence we (think we) know something about is human 
intelligence.  In that context, cracking the NLU problem can (though it 
still needn't necessarily) be viewed as a prerequisite to cracking the AGI 
problem.


But, human intelligence is not the only general intelligence we can imagine 
or create.  IMHO, we can get to human-beneficial, non-human-like (but, 
still, human-inspired) general intelligence much quicker if, at least for 
AGI 1.0, we avoid the twin productivity sinks of NLU and embodiment.


In the end, of course, both of us really have only our opinions.  You can't 
prove the OCP approach, novel though it may be, will, finally, crack the 
elusive NLU problem.  I can't prove it won't.  I agree, therefore, that we 
should agree to disagree and let history sort things out.


Cheers,
Brad

David Hart wrote:
On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:


So, it has, in fact, been tried before.  It has, in fact, always
failed. Your comments about the quality of Ben's approach are noted.
 Maybe you're right.  But, it's not germane to my argument which is
that those parts of Ben G.'s approach that call for human-level NLU,
and that propose embodiment (or virtual embodiment) as a way to
achieve human-level NLU, have been tried before, many times, and
have always failed.  If Ben G. knows something he's not telling us
then, when he does, I'll consider modifying my views.  But,
remember, my comments were never directed at the OpenCog project or
Ben G. personally.  They were directed at an AGI 

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Brad Paulsen

Ben,

Well, I guess you told me!  I'll just be taking my loosely-coupled 
...bunch of clever narrow-AI widgets... right on out of here.  No need to 
worry about me venturing an opinion here ever again.  I have neither the 
energy nor, apparently, the intellectual ability to respond to a broadside 
like that from the top dog.


It's too bad.  I was just starting to feel at home here.  Sigh.

Cheers (and goodbye),
Brad

Ben Goertzel wrote:


A few points...

1) 
Closely associating embodiment with GOFAI is just flat-out historically 
wrong.  GOFAI refers to a specific class of approaches to AI that were 
pursued a few decades ago, which were not centered on embodiment as a 
key concept or aspect. 


2)
Embodiment based approaches to AGI certainly have not been extensively 
tried and failed in any serious way, simply because of the primitive 
nature of real and virtual robotic technology.  Even right now, the real 
and virtual robotics tech are not *quite* there to enable us to pursue 
embodiment-based AGI in a really tractable way.  For instance, humanoid 
robots like the Nao cost $20K and have all sorts of serious actuator 
problems ... and virtual world tech is not built to allow fine-grained 
AI control of agent skeletons ... etc.   It would be more accurate to 
say that we're 5-15 years away from a condition where embodiment-based 
AGI can be tried-out without immense time-wastage on making 
not-quite-ready supporting technologies work


3)
I do not think that humanlike NL understanding nor humanlike embodiment 
are in any way necessary for AGI.   I just think that they seem to 
represent the shortest path to getting there, because they represent a 
path that **we understand reasonably well** ... and because AGIs 
following this path will be able to **learn from us** reasonably easily, 
as opposed to AGIs built on fundamentally nonhuman principles


To put it simply, once an AGI can understand human language we can teach 
it stuff.  This will be very helpful to it.  We have a lot of experience 
in teaching agents with humanlike bodies, communicating using human 
language.  Then it can teach us stuff too.   And human language is just 
riddled through and through with metaphors to embodiment, suggesting 
that solving the disambiguation problems in linguistics will be much 
easier for a system with vaguely humanlike embodied experience.


4)
I have articulated a detailed proposal for how to make an AGI using the 
OCP design together with linguistic communication and virtual 
embodiment.  Rather than just a promising-looking assemblage of 
in-development technologies, the proposal is grounded in a coherent 
holistic theory of how minds work.


What I don't see in your counterproposal is any kind of grounding of 
your ideas in a theory of mind.  That is: why should I believe that 
loosely coupling a bunch of clever narrow-AI widgets, as you suggest, is 
going to lead to an AGI capable of adapting to fundamentally new 
situations not envisioned by any of its programmers?   I'm not 
completely ruling out the possibility that this kind of strategy could 
work, but where's the beef?  I'm not asking for a proof, I'm asking for 
a coherent, detailed argument as to why this kind of approach could lead 
to a generally-intelligent mind.


5)
It sometimes feels to me like the reason so little progress is made 
toward AGI is that the 2000 people on the planet who are passionate 
about it, are moving in 4000 different directions ;-) ...


OpenCog is an attempt to get a substantial number of AGI enthusiasts all 
moving in the same direction, without claiming this is the **only** 
possible workable direction. 

Eventually, supporting technologies will advance enough that some smart 
guy can build an AGI on his own in a year of hacking.  I don't think 
we're at that stage yet -- but I think we're at the stage where a team 
of a couple dozen could do it in 5-10 years.  However, if that level of 
effort can't be systematically summoned (thru gov't grants, industry 
funding, open-source volunteerism or wherever) then maybe AGI won't come 
about till the supporting technologies develop further.  My hope is that 
we can overcome the existing collective-psychology and 
practical-economic obstacles that hold us back from creating AGI 
together, and build a beneficial AGI ASAP ...


-- Ben G








On Mon, Oct 6, 2008 at 2:34 AM, David Hart [EMAIL PROTECTED] wrote:


On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

So, it has, in fact, been tried before.  It has, in fact, always
failed. Your comments about the quality of Ben's approach are
noted.  Maybe you're right.  But, it's not germane to my
argument which is that those parts of Ben G.'s approach that
call for human-level NLU, and that propose embodiment (or
virtual embodiment) as a way to achieve human-level NLU, have
been tried before, 

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Ben Goertzel
Brad,

Sorry if my response was somehow harsh or inappropriate, it really wasn't
intended as such.  Your contributions to the list are valued.  These last
few weeks have been rather tough for me in my entrepreneurial role (it's not
the best time to be operating a small business, which is what Novamente LLC
is) so I may be in a crankier mood than usual for that reason.

I've been considering taking a break from this email list myself for a few
weeks or months, not because I don't enjoy the discussions, but because
they're taking so much of my time lately!

I guess the essence of my response to you was

***
What I don't see in your counterproposal is any kind of grounding of your
ideas in a theory of mind.  That is: why should I believe that loosely
coupling a bunch of clever narrow-AI widgets, as you suggest, is going to
lead to an AGI capable of adapting to fundamentally new situations not
envisioned by any of its programmers?   I'm not completely ruling out the
possiblity that this kind of strategy could work, but where's the beef?  I'm
not asking for a proof, I'm asking for a coherent, detailed argument as to
why this kind of approach could lead to a generally-intelligent mind.
***

and I don't really see what is offensive about that, but maybe my judgment
is off this week...


-- Ben G


On Sat, Oct 11, 2008 at 11:32 AM, Brad Paulsen [EMAIL PROTECTED] wrote:

 Ben,

 Well, I guess you told me!  I'll just be taking my loosely-coupled
 ...bunch of clever narrow-AI widgets... right on out of here.  No need to
 worry about me venturing an opinion here ever again.  I have neither the
 energy nor, apparently, the intellectual ability to respond to a broadside
 like that from the top dog.

 It's too bad.  I was just starting to feel at home here.  Sigh.

 Cheers (and goodbye),
 Brad

 Ben Goertzel wrote:


 A few points...

 1) Closely associating embodiment with GOFAI is just flat-out historically
 wrong.  GOFAI refers to a specific class of approaches to AI that were
 pursued a few decades ago, which were not centered on embodiment as a key
 concept or aspect.
 2)
 Embodiment based approaches to AGI certainly have not been extensively
 tried and failed in any serious way, simply because of the primitive nature
 of real and virtual robotic technology.  Even right now, the real and
 virtual robotics tech are not *quite* there to enable us to pursue
 embodiment-based AGI in a really tractable way.  For instance, humanoid
 robots like the Nao cost $20K and have all sorts of serious actuator
 problems ... and virtual world tech is not built to allow fine-grained AI
 control of agent skeletons ... etc.   It would be more accurate to say that
 we're 5-15 years away from a condition where embodiment-based AGI can be
 tried-out without immense time-wastage on making not-quite-ready supporting
 technologies work

 3)
 I do not think that humanlike NL understanding nor humanlike embodiment
 are in any way necessary for AGI.   I just think that they seem to represent
 the shortest path to getting there, because they represent a path that **we
 understand reasonably well** ... and because AGIs following this path will
 be able to **learn from us** reasonably easily, as opposed to AGIs built on
 fundamentally nonhuman principles

 To put it simply, once an AGI can understand human language we can teach
 it stuff.  This will be very helpful to it.  We have a lot of experience in
 teaching agents with humanlike bodies, communicating using human language.
  Then it can teach us stuff too.   And human language is just riddled
 through and through with metaphors to embodiment, suggesting that solving
 the disambiguation problems in linguistics will be much easier for a system
 with vaguely humanlike embodied experience.

 4)
 I have articulated a detailed proposal for how to make an AGI using the
 OCP design together with linguistic communication and virtual embodiment.
  Rather than just a promising-looking assemblage of in-development
 technologies, the proposal is grounded in a coherent holistic theory of how
 minds work.

 What I don't see in your counterproposal is any kind of grounding of your
 ideas in a theory of mind.  That is: why should I believe that loosely
 coupling a bunch of clever narrow-AI widgets, as you suggest, is going to
 lead to an AGI capable of adapting to fundamentally new situations not
 envisioned by any of its programmers?   I'm not completely ruling out the
 possibility that this kind of strategy could work, but where's the beef?  I'm
 not asking for a proof, I'm asking for a coherent, detailed argument as to
 why this kind of approach could lead to a generally-intelligent mind.

 5)
 It sometimes feels to me like the reason so little progress is made toward
 AGI is that the 2000 people on the planet who are passionate about it, are
 moving in 4000 different directions ;-) ...

 OpenCog is an attempt to get a substantial number of AGI enthusiasts all
 moving in the same direction, without 

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Ben Goertzel
And, just to clarify: the fact that I set up this list and pay $12/month for
its hosting, and deal with the  occasional list-moderation issues that
arise, is not supposed to give my **AI opinions** primacy over anybody
else's on the list, in discussions   I only intervene as moderator when
discussions go off-topic, not to try to push my perspective on people ...
and on the rare occasions when I am speaking as list owner/moderator rather
than as just another AI guy with his own opinions, I try to be very clear
that that is the role I'm adopting..

ben g

On Sat, Oct 11, 2008 at 11:37 AM, Ben Goertzel [EMAIL PROTECTED] wrote:


 Brad,

 Sorry if my response was somehow harsh or inappropriate, it really wasn't
 intended as such.  Your contributions to the list are valued.  These last
 few weeks have been rather tough for me in my entrepreneurial role (it's not
 the best time to be operating a small business, which is what Novamente LLC
 is) so I may be in a crankier mood than usual for that reason.

 I've been considering taking a break from this email list myself for a few
 weeks or months, not because I don't enjoy the discussions, but because
 they're taking so much of my time lately!

 I guess the essence of my response to you was

 ***
 What I don't see in your counterproposal is any kind of grounding of your
 ideas in a theory of mind.  That is: why should I believe that loosely
 coupling a bunch of clever narrow-AI widgets, as you suggest, is going to
 lead to an AGI capable of adapting to fundamentally new situations not
 envisioned by any of its programmers?   I'm not completely ruling out the
 possibility that this kind of strategy could work, but where's the beef?  I'm
 not asking for a proof, I'm asking for a coherent, detailed argument as to
 why this kind of approach could lead to a generally-intelligent mind.
 ***

 and I don't really see what is offensive about that, but maybe my judgment
 is off this week...


 -- Ben G



 On Sat, Oct 11, 2008 at 11:32 AM, Brad Paulsen [EMAIL PROTECTED] wrote:

 Ben,

 Well, I guess you told me!  I'll just be taking my loosely-coupled
 ...bunch of clever narrow-AI widgets... right on out of here.  No need to
 worry about me venturing an opinion here ever again.  I have neither the
 energy nor, apparently, the intellectual ability to respond to a broadside
 like that from the top dog.

 It's too bad.  I was just starting to feel at home here.  Sigh.

 Cheers (and goodbye),
 Brad

 Ben Goertzel wrote:


 A few points...

 1) Closely associating embodiment with GOFAI is just flat-out
 historically wrong.  GOFAI refers to a specific class of approaches to AI
 that were pursued a few decades ago, which were not centered on embodiment as
 a key concept or aspect.
 2)
 Embodiment based approaches to AGI certainly have not been extensively
 tried and failed in any serious way, simply because of the primitive nature
 of real and virtual robotic technology.  Even right now, the real and
 virtual robotics tech are not *quite* there to enable us to pursue
 embodiment-based AGI in a really tractable way.  For instance, humanoid
 robots like the Nao cost $20K and have all sorts of serious actuator
 problems ... and virtual world tech is not built to allow fine-grained AI
 control of agent skeletons ... etc.   It would be more accurate to say that
 we're 5-15 years away from a condition where embodiment-based AGI can be
 tried-out without immense time-wastage on making not-quite-ready supporting
 technologies work

 3)
 I do not think that humanlike NL understanding nor humanlike embodiment
 are in any way necessary for AGI.   I just think that they seem to represent
 the shortest path to getting there, because they represent a path that **we
 understand reasonably well** ... and because AGIs following this path will
 be able to **learn from us** reasonably easily, as opposed to AGIs built on
 fundamentally nonhuman principles

 To put it simply, once an AGI can understand human language we can teach
 it stuff.  This will be very helpful to it.  We have a lot of experience in
 teaching agents with humanlike bodies, communicating using human language.
  Then it can teach us stuff too.   And human language is just riddled
 through and through with metaphors to embodiment, suggesting that solving
 the disambiguation problems in linguistics will be much easier for a system
 with vaguely humanlike embodied experience.

 4)
 I have articulated a detailed proposal for how to make an AGI using the
 OCP design together with linguistic communication and virtual embodiment.
  Rather than just a promising-looking assemblage of in-development
 technologies, the proposal is grounded in a coherent holistic theory of how
 minds work.

 What I don't see in your counterproposal is any kind of grounding of your
 ideas in a theory of mind.  That is: why should I believe that loosely
 coupling a bunch of clever narrow-AI widgets, as you suggest, is going to
 lead to an AGI capable of 

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Russell Wallace
On Sat, Oct 11, 2008 at 4:37 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Brad,

 Sorry if my response was somehow harsh or inappropriate, it really wasn't
 intended as such.  Your contributions to the list are valued.  These last
 few weeks have been rather tough for me in my entrepreneurial role (it's not
 the best time to be operating a small business, which is what Novamente LLC
 is) so I may be in a crankier mood than usual for that reason.

I don't think your response was in any way harsh or inappropriate;
honestly, you in a cranky mood are still generally more polite and
tolerant than most of us in a good mood!

Hoping Novamente LLC pulls through the current recession okay.




Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Jim Bromer
On Sat, Oct 11, 2008 at 11:37 AM, Ben Goertzel [EMAIL PROTECTED] wrote:
 These last
 few weeks have been rather tough for me in my entrepreneurial role (it's not
 the best time to be operating a small business, which is what Novamente LLC
 is) so I may be in a crankier mood than usual for that reason.

The economy is fundamentally ground.




Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Mike Tintner
Ben,

I think that's all been extremely clear - and I think you've been very good in 
all your different roles :).  Your efforts have produced a v. good group - and a 
great many thanks for them.
  And, just to clarify: the fact that I set up this list and pay $12/month for 
its hosting, and deal with the  occasional list-moderation issues that arise, 
is not supposed to give my **AI opinions** primacy over anybody else's on the 
list, in discussions   I only intervene as moderator when discussions go 
off-topic, not to try to push my perspective on people ... and on the rare 
occasions when I am speaking as list owner/moderator rather than as just 
another AI guy with his own opinions, I try to be very clear that that is the 
role I'm adopting..

  ben g
   





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Ben Goertzel
If nothing else, for the price of a movie ticket per month, it provides me
many more hours of monthly entertainment ... with a much more interesting
cast of characters than any Hollywood flick ... though the plot development
gets confusing at times ;-)

And who knows, some of these discussions might even lead to something of
value eventually ;-O

ben g

On Sat, Oct 11, 2008 at 2:27 PM, Mike Tintner [EMAIL PROTECTED] wrote:

  Ben,

 I think that's all been extremely clear -and I think you've been very good
 in all your different roles :).  Your efforts have produced a v. good group
 -and a great many thanks for them.

 And, just to clarify: the fact that I set up this list and pay $12/month
 for its hosting, and deal with the  occasional list-moderation issues that
 arise, is not supposed to give my **AI opinions** primacy over anybody
 else's on the list, in discussions   I only intervene as moderator when
 discussions go off-topic, not to try to push my perspective on people ...
 and on the rare occasions when I am speaking as list owner/moderator rather
 than as just another AI guy with his own opinions, I try to be very clear
 that that is the role I'm adopting..

 ben g






-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread Ben Goertzel
No idea, Mentifex ... I haven't filtered out any of your messages (or
anyone's) ... but sometimes messages get held up at listbox.com by their
automated spam filters (or for other random reasons) and I take too long to
log in there and approve them...

ben


 

 Well, how come my posts aren't getting through? (Going out
 to the list) What do you call that?

 ATM/Mentifex
 --
 http://code.google.com/p/mindforth/




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

Nothing will ever be attempted if all possible objections must be first
overcome   - Dr Samuel Johnson





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-11 Thread A. T. Murray
Ben Goertzel wrote:

And, just to clarify: the fact that I set up this list and pay $12/month for
its hosting, and deal with the  occasional list-moderation issues that
arise, is not supposed to give my **AI opinions** primacy over anybody
else's on the list, in discussions   I only intervene as moderator when
discussions go off-topic, not to try to push my perspective on people ...
and on the rare occasions when I am speaking as list owner/moderator rather
than as just another AI guy with his own opinions, I try to be very clear
that that is the role I'm adopting..

ben g


Well, how come my posts aren't getting through? (Going out
to the list) What do you call that?

ATM/Mentifex
-- 
http://code.google.com/p/mindforth/ 




AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-07 Thread Dr. Matthias Heger
The quantum-level biases would be more general and more correct, as is the
case with quantum physics versus classical physics.

The reasons why humans do not have modern-physics biases for space and time:
such biases offer no relevant survival advantage, and the resource costs of
obtaining any advantage are probably far too high for a biological system.

But with future AGI (not the first generation), these objections won't hold.
We don't need AGI to help us with middle-level physics. We will need AGI
to make progress in worlds where our innate intuitions do not hold, namely
nanotechnology and intracellular biology.
So there would be an advantage to quantum biases, and because of this
advantage the quantum biases would probably be used more often than
non-quantum biases.

And what about the costs of resources? We could imagine an AGI brain the
size of a continent.
Of course not for the first-generation AGI. But I am sure that future AGIs
will have quantum biases.

But as Ben said: first we should build AGI with the biases we have and
understand.

And the three main problems of AGI should be solved first: how to obtain
knowledge, how to represent knowledge, and how to use knowledge to solve
different problems in different domains.
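
To make those three facets concrete, here is a minimal sketch (purely
illustrative; the ToyKB class and its method names are invented, not any
proposed design):

# Minimal illustrative sketch of the three problems named above; all names
# (ToyKB, obtain, use) are invented for illustration, not a proposed design.

class ToyKB:
    def __init__(self):
        # Representation: knowledge held as (subject, relation, object) triples.
        self.facts = set()

    def obtain(self, subject, relation, obj):
        """Acquire knowledge, e.g. from text, sensors, or a teacher."""
        self.facts.add((subject, relation, obj))

    def use(self, subject, relation):
        """Apply stored knowledge to answer a question, whatever the domain."""
        return [o for (s, r, o) in self.facts if s == subject and r == relation]

kb = ToyKB()
kb.obtain("water", "boils_at", "100 C")       # physics/chemistry domain
kb.obtain("Paris", "capital_of", "France")    # geography domain
print(kb.use("water", "boils_at"))            # ['100 C']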





Charles Hixson wrote:

I feel that an AI with quantum level biases would be less general. It 
would be drastically handicapped when dealing with the middle level, 
which is where most of living is centered. Certainly an AGI should have 
modules which can more or less directly handle quantum events, but I 
would predict that those would not be as heavily used as the ones that 
deal with the mid level. We (usually) use temperature rather than 
molecule speeds for very good reasons.






Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread David Hart
On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

 So, it has, in fact, been tried before.  It has, in fact, always failed.
 Your comments about the quality of Ben's approach are noted.  Maybe you're
 right.  But, it's not germane to my argument which is that those parts of
 Ben G.'s approach that call for human-level NLU, and that propose embodiment
 (or virtual embodiment) as a way to achieve human-level NLU, have been tried
 before, many times, and have always failed.  If Ben G. knows something he's
 not telling us then, when he does, I'll consider modifying my views.  But,
 remember, my comments were never directed at the OpenCog project or Ben G.
 personally.  They were directed at an AGI *strategy* not invented by Ben G.
 or OpenCog.


The OCP approach/strategy, both in crucial specifics of its parts and
particularly in its total synthesis, *IS* novel; I recommend a closer
re-examination!

The mere resemblance of some of its parts to past [failed] AI undertakings
is not enough reason to dismiss those parts, IMHO, notwithstanding dislike of
embodiment or NLU or any other aspect that has a GOFAI past lurking in the
wings.

OTOH, I will happily agree to disagree on these points to save the AGI list
from going down in flames! ;-)

-dave





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Ben Goertzel
A few points...

1)
Closely associating embodiment with GOFAI is just flat-out historically
wrong.  GOFAI refers to a specific class of approaches to AI that were
pursued a few decades ago, which were not centered on embodiment as a key
concept or aspect.

2)
Embodiment based approaches to AGI certainly have not been extensively tried
and failed in any serious way, simply because of the primitive nature of
real and virtual robotic technology.  Even right now, the real and virtual
robotics tech are not *quite* there to enable us to pursue embodiment-based
AGI in a really tractable way.  For instance, humanoid robots like the Nao
cost $20K and have all sorts of serious actuator problems ... and virtual
world tech is not built to allow fine-grained AI control of agent skeletons
... etc.   It would be more accurate to say that we're 5-15 years away from
a condition where embodiment-based AGI can be tried-out without immense
time-wastage on making not-quite-ready supporting technologies work

3)
I do not think that humanlike NL understanding nor humanlike embodiment are
in any way necessary for AGI.   I just think that they seem to represent the
shortest path to getting there, because they represent a path that **we
understand reasonably well** ... and because AGIs following this path will
be able to **learn from us** reasonably easily, as opposed to AGIs built on
fundamentally nonhuman principles

To put it simply, once an AGI can understand human language we can teach it
stuff.  This will be very helpful to it.  We have a lot of experience in
teaching agents with humanlike bodies, communicating using human language.
Then it can teach us stuff too.   And human language is just riddled through
and through with metaphors to embodiment, suggesting that solving the
disambiguation problems in linguistics will be much easier for a system with
vaguely humanlike embodied experience.

4)
I have articulated a detailed proposal for how to make an AGI using the OCP
design together with linguistic communication and virtual embodiment.
Rather than just a promising-looking assemblage of in-development
technologies, the proposal is grounded in a coherent holistic theory of how
minds work.

What I don't see in your counterproposal is any kind of grounding of your
ideas in a theory of mind.  That is: why should I believe that loosely
coupling a bunch of clever narrow-AI widgets, as you suggest, is going to
lead to an AGI capable of adapting to fundamentally new situations not
envisioned by any of its programmers?   I'm not completely ruling out the
possibility that this kind of strategy could work, but where's the beef?  I'm
not asking for a proof, I'm asking for a coherent, detailed argument as to
why this kind of approach could lead to a generally-intelligent mind.

5)
It sometimes feels to me like the reason so little progress is made toward
AGI is that the 2000 people on the planet who are passionate about it, are
moving in 4000 different directions ;-) ...

OpenCog is an attempt to get a substantial number of AGI enthusiasts all
moving in the same direction, without claiming this is the **only** possible
workable direction.

Eventually, supporting technologies will advance enough that some smart guy
can build an AGI on his own in a year of hacking.  I don't think we're at
that stage yet -- but I think we're at the stage where a team of a couple
dozen could do it in 5-10 years.  However, if that level of effort can't be
systematically summoned (thru gov't grants, industry funding, open-source
volunteerism or wherever) then maybe AGI won't come about till the
supporting technologies develop further.  My hope is that we can overcome
the existing collective-psychology and practical-economic obstacles that
hold us back from creating AGI together, and build a beneficial AGI ASAP ...

-- Ben G








On Mon, Oct 6, 2008 at 2:34 AM, David Hart [EMAIL PROTECTED] wrote:

 On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

 So, it has, in fact, been tried before.  It has, in fact, always failed.
 Your comments about the quality of Ben's approach are noted.  Maybe you're
 right.  But, it's not germane to my argument which is that those parts of
 Ben G.'s approach that call for human-level NLU, and that propose embodiment
 (or virtual embodiment) as a way to achieve human-level NLU, have been tried
 before, many times, and have always failed.  If Ben G. knows something he's
 not telling us then, when he does, I'll consider modifying my views.  But,
 remember, my comments were never directed at the OpenCog project or Ben G.
 personally.  They were directed at an AGI *strategy* not invented by Ben G.
 or OpenCog.


 The OCP approach/strategy, both in crucial specifics of its parts and
 particularly in its total synthesis, *IS* novel; I recommend a closer
 re-examination!

 The mere resemblance of some of its parts to past [failed] AI undertakings
 is not enough reason to dismiss those parts, IMHO, dislike of 

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Mike Tintner
Ben,

V. interesting and helpful to get this pretty clearly stated general position.

However:

"To put it simply, once an AGI can understand human language we can teach it 
stuff."

you don't give any prognostic view about the acquisition of language. Mine is - 
in your dreams. Arguably, most AGI-ers still see handling language as a 
largely logical exercise of translating between symbols in dictionaries and 
texts, with perhaps a little grounding. I see language as an extremely 
sophisticated worldpicture, and system for handling that picture, which is 
actually, even if not immediately obvious,  a multimedia exercise that is both 
continuously embodied in our system and embedded in the real world. Not just a 
mode of, but almost the whole of the brain in action, interacting with the 
whole of the world. No AGI system will be literate for an awfully long time. 
Your view?

And:

"I think we're at the stage where a team of a couple dozen could do it in 5-10 
years"

I repeat - this is outrageous. You don't have the slightest evidence of 
progress - you [the collective you] haven't solved a single problem of general 
intelligence - a single mode of generalising - so you don't have the slightest 
basis for making predictions of progress other than wish-fulfilment, do you? 

  Ben:A few points...

  1)  
  Closely associating embodiment with GOFAI is just flat-out historically 
wrong.  GOFAI refers to a specific class of approaches to AI that were pursued a 
few decades ago, which were not centered on embodiment as a key concept or 
aspect.  

  2)
  Embodiment based approaches to AGI certainly have not been extensively tried 
and failed in any serious way, simply because of the primitive nature of real 
and virtual robotic technology.  Even right now, the real and virtual robotics 
tech are not *quite* there to enable us to pursue embodiment-based AGI in a 
really tractable way.  For instance, humanoid robots like the Nao cost $20K and 
have all sorts of serious actuator problems ... and virtual world tech is not 
built to allow fine-grained AI control of agent skeletons ... etc.   It would 
be more accurate to say that we're 5-15 years away from a condition where 
embodiment-based AGI can be tried-out without immense time-wastage on making 
not-quite-ready supporting technologies work

  3)
  I do not think that humanlike NL understanding nor humanlike embodiment are 
in any way necessary for AGI.   I just think that they seem to represent the 
shortest path to getting there, because they represent a path that **we 
understand reasonably well** ... and because AGIs following this path will be 
able to **learn from us** reasonably easily, as opposed to AGIs built on 
fundamentally nonhuman principles

  To put it simply, once an AGI can understand human language we can teach it 
stuff.  This will be very helpful to it.  We have a lot of experience in 
teaching agents with humanlike bodies, communicating using human language.  
Then it can teach us stuff too.   And human language is just riddled through 
and through with metaphors to embodiment, suggesting that solving the 
disambiguation problems in linguistics will be much easier for a system with 
vaguely humanlike embodied experience.

  4)
  I have articulated a detailed proposal for how to make an AGI using the OCP 
design together with linguistic communication and virtual embodiment.  Rather 
than just a promising-looking assemblage of in-development technologies, the 
proposal is grounded in a coherent holistic theory of how minds work.

  What I don't see in your counterproposal is any kind of grounding of your 
ideas in a theory of mind.  That is: why should I believe that loosely coupling 
a bunch of clever narrow-AI widgets, as you suggest, is going to lead to an AGI 
capable of adapting to fundamentally new situations not envisioned by any of 
its programmers?   I'm not completely ruling out the possibility that this kind 
of strategy could work, but where's the beef?  I'm not asking for a proof, I'm 
asking for a coherent, detailed argument as to why this kind of approach could 
lead to a generally-intelligent mind.

  5)
  It sometimes feels to me like the reason so little progress is made toward 
AGI is that the 2000 people on the planet who are passionate about it, are 
moving in 4000 different directions ;-) ... 

  OpenCog is an attempt to get a substantial number of AGI enthusiasts all 
moving in the same direction, without claiming this is the **only** possible 
workable direction.  

  Eventually, supporting technologies will advance enough that some smart guy 
can build an AGI on his own in a year of hacking.  I don't think we're at that 
stage yet -- but I think we're at the stage where a team of a couple dozen 
could do it in 5-10 years.  However, if that level of effort can't be 
systematically summoned (thru gov't grants, industry funding, open-source 
volunteerism or wherever) then maybe AGI won't come about till the supporting 

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Ben Goertzel

 I think we're at the stage where a team of a couple dozen could do it in
 5-10 years

 I repeat - this is outrageous. You don't have the slightest evidence of
 progress - you [the collective you] haven't solved a single problem of
 general intelligence - a single mode of generalising - so you don't have the
 slightest basis for making predictions of progress other than
 wish-fulfilment, do you?



The argument is complex and technical and doesn't lend itself to convincing
summarization in a few glib, nontechnical paragraphs, sorry...

Read and understand

-- OpenCogPrime wikibook
-- PLN book
-- The Hidden Pattern

and we could have an interesting argument about this ... Otherwise, it's
mostly gonna be just surface-level yakking...

ben g





AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Dr. Matthias Heger
Good points. I would like to add a further point:

Human language is a sequence of words which is used to transfer patterns
from one brain into another brain.

When we have an AGI which understands and speaks language, then for the
first time there will be an exchange of patterns between an artificial
brain and a human brain.

So human language is not only useful for teaching the AGI some stuff. We
will also have easy access to the top-level patterns of the AGI when it
speaks to us. Human language will be useful for understanding what is going
on in the AGI. This makes testing easier.
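
To illustrate that last point concretely (a minimal sketch, purely
hypothetical; nothing here is taken from OCP or any real system), an agent
that can verbalize its most active internal patterns lets a test inspect its
state directly:

# Minimal illustrative sketch: if an AGI can report its top-level patterns in
# natural language, a test can inspect them directly. All names are hypothetical.

class ToyAgent:
    def __init__(self):
        # internal patterns with activation strengths (stand-in for real state)
        self.patterns = {"cup is on table": 0.92, "table is wooden": 0.40,
                         "robot holds cup": 0.85}

    def top_patterns(self, k=2):
        """Return the k most strongly activated internal patterns."""
        return sorted(self.patterns, key=self.patterns.get, reverse=True)[:k]

    def verbalize(self, k=2):
        """Render top-level internal state as an English report."""
        return "I believe that " + " and that ".join(self.top_patterns(k)) + "."

agent = ToyAgent()
report = agent.verbalize()
print(report)  # "I believe that cup is on table and that robot holds cup."
assert "cup is on table" in report  # a test checks internal state via language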

-Matthias

 

Ben G wrote 

 

A few points...

1)  
Closely associating embodiment with GOFAI is just flat-out historically
wrong.  GOFAI refers to a specific class of approaches to AI that were
pursued a few decades ago, which were not centered on embodiment as a key
concept or aspect.  

2)
Embodiment based approaches to AGI certainly have not been extensively tried
and failed in any serious way, simply because of the primitive nature of
real and virtual robotic technology.  Even right now, the real and virtual
robotics tech are not *quite* there to enable us to pursue embodiment-based
AGI in a really tractable way.  For instance, humanoid robots like the Nao
cost $20K and have all sorts of serious actuator problems ... and virtual
world tech is not built to allow fine-grained AI control of agent skeletons
... etc.   It would be more accurate to say that we're 5-15 years away from
a condition where embodiment-based AGI can be tried-out without immense
time-wastage on making not-quite-ready supporting technologies work

3)
I do not think that humanlike NL understanding nor humanlike embodiment are
in any way necessary for AGI.   I just think that they seem to represent the
shortest path to getting there, because they represent a path that **we
understand reasonably well** ... and because AGIs following this path will
be able to **learn from us** reasonably easily, as opposed to AGIs built on
fundamentally nonhuman principles

To put it simply, once an AGI can understand human language we can teach it
stuff.  This will be very helpful to it.  We have a lot of experience in
teaching agents with humanlike bodies, communicating using human language.
Then it can teach us stuff too.   And human language is just riddled through
and through with metaphors to embodiment, suggesting that solving the
disambiguation problems in linguistics will be much easier for a system with
vaguely humanlike embodied experience.

4)
I have articulated a detailed proposal for how to make an AGI using the OCP
design together with linguistic communication and virtual embodiment.
Rather than just a promising-looking assemblage of in-development
technologies, the proposal is grounded in a coherent holistic theory of how
minds work.

What I don't see in your counterproposal is any kind of grounding of your
ideas in a theory of mind.  That is: why should I believe that loosely
coupling a bunch of clever narrow-AI widgets, as you suggest, is going to
lead to an AGI capable of adapting to fundamentally new situations not
envisioned by any of its programmers?   I'm not completely ruling out the
possibility that this kind of strategy could work, but where's the beef?  I'm
not asking for a proof, I'm asking for a coherent, detailed argument as to
why this kind of approach could lead to a generally-intelligent mind.

5)
It sometimes feels to me like the reason so little progress is made toward
AGI is that the 2000 people on the planet who are passionate about it, are
moving in 4000 different directions ;-) ... 

OpenCog is an attempt to get a substantial number of AGI enthusiasts all
moving in the same direction, without claiming this is the **only** possible
workable direction.  

Eventually, supporting technologies will advance enough that some smart guy
can build an AGI on his own in a year of hacking.  I don't think we're at
that stage yet -- but I think we're at the stage where a team of a couple
dozen could do it in 5-10 years.  However, if that level of effort can't be
systematically summoned (thru gov't grants, industry funding, open-source
volunteerism or wherever) then maybe AGI won't come about till the
supporting technologies develop further.  My hope is that we can overcome
the existing collective-psychology and practical-economic obstacles that
hold us back from creating AGI together, and build a beneficial AGI ASAP ...

-- Ben G









On Mon, Oct 6, 2008 at 2:34 AM, David Hart [EMAIL PROTECTED] wrote:

On Mon, Oct 6, 2008 at 4:39 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

So, it has, in fact, been tried before.  It has, in fact, always failed.
Your comments about the quality of Ben's approach are noted.  Maybe you're
right.  But, it's not germane to my argument which is that those parts of
Ben G.'s approach that call for human-level NLU, and that propose embodiment
(or virtual embodiment) as a way to achieve human-level NLU, have been tried

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread Charles Hixson

Dr. Matthias Heger wrote:


*Ben G wrote*

**
Well, for the purpose of creating the first human-level AGI, it seems 
important **to** wire in humanlike bias about space and time ... this 
will greatly ease the task of teaching the system to use our language 
and communicate with us effectively...


But I agree that not **all** AGIs should have this inbuilt biasing ... 
for instance an AGI hooked directly to quantum microworld sensors 
could become a kind of quantum mind with a totally different 
intuition for the physical world than we have...



Ok. But then I have again a different understanding of the G in AGI. 
The “quantum mind” should be more general than the human level AGI.


But since the human level AGI is difficult enough, we should build it 
first.


After that, for AGI 2.0, I propose the goal to build a quantum mind. ;-)


I feel that an AI with quantum level biases would be less general. It 
would be drastically handicapped when dealing with the middle level, 
which is where most of living is centered. Certainly an AGI should have 
modules which can more or less directly handle quantum events, but I 
would predict that those would not be as heavily used as the ones that 
deal with the mid level. We (usually) use temperature rather than 
molecule speeds for very good reasons.
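
For instance (a purely illustrative sketch, assuming an ideal monatomic gas;
the numbers are invented), the mid-level quantity compresses millions of
micro-level facts into a single number we can actually reason with:

# Illustrative sketch of the "temperature vs. molecule speeds" point: the
# mid-level summary (temperature) compresses enormous micro-level detail.
# Assumes an ideal monatomic gas; the numbers are made up for illustration.
import random

K_B = 1.380649e-23      # Boltzmann constant, J/K
M_HE = 6.6464731e-27    # mass of a helium atom, kg

def temperature_from_speeds(speeds, mass=M_HE):
    """Kinetic theory: (3/2) k_B T = (1/2) m <v^2>  =>  T = m <v^2> / (3 k_B)."""
    mean_sq_speed = sum(v * v for v in speeds) / len(speeds)
    return mass * mean_sq_speed / (3 * K_B)

# A million micro-level facts (individual speeds, m/s)...
speeds = [random.gauss(1350, 500) for _ in range(1_000_000)]
# ...collapse into one mid-level quantity that is what we actually reason with.
print(f"T ~ {temperature_from_speeds(speeds):.0f} K")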






Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-06 Thread David Hart
On Tue, Oct 7, 2008 at 10:43 AM, Charles Hixson [EMAIL PROTECTED] wrote:

 I feel that an AI with quantum level biases would be less general. It would
 be drastically handicapped when dealing with the middle level, which is
 where most of living is centered. Certainly an AGI should have modules which
 can more or less directly handle quantum events, but I would predict that
 those would not be as heavily used as the ones that deal with the mid level.
 We (usually) use temperature rather than molecule speeds for very good
 reasons.


A single AGI should be able to use different sets of biases and heuristics
in different contexts, and do so simultaneously (i.e. multiple concurrent
areas of hyper-focus, each with its own context, assuming the AGI is running
on powerful enough hardware). This ability clearly points in the direction
of *greater* generality. The PLN book hints that this scenario is foreseen
and planned for in the design of PLN; future revisions may well mention a
similar example specifically, in line with Ben's related comments on this
topic.
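
For what it's worth, the structural idea can be pictured with a deliberately
trivial sketch (hypothetical throughout, not PLN or OpenCog code): one agent
holding several context-keyed bias sets and selecting among them per context:

# Hypothetical sketch only (not PLN/OpenCog): one agent keeping several
# context-keyed bias/heuristic sets and applying whichever fits the context.
from typing import Callable, Dict

# Each "heuristic set" is just a function from an observation to an estimate.
heuristics: Dict[str, Callable[[float], float]] = {
    # mid-level, human-style bias: treat measurements as continuous and local
    "everyday": lambda x: round(x, 1),
    # quantum-flavoured bias (toy rule): only discrete levels are meaningful
    "quantum": lambda x: float(round(x)),    # snap to the nearest "level"
}

def estimate(observation: float, context: str) -> float:
    """Pick the bias set appropriate to the current context."""
    return heuristics[context](observation)

# The same agent handles both contexts without giving up either bias set.
print(estimate(2.437, "everyday"))  # 2.4
print(estimate(2.437, "quantum"))   # 2.0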

-dave





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread David Hart
On Sun, Oct 5, 2008 at 3:55 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

 More generally, as long as AGI designers and developers insist on
 simulating human intelligence, they will have to deal with the AI-complete
 problem of natural language understanding.  Looking for new approaches to
 this problem, many researchers (including prominent members of this list)
 have turned to embodiment (or virtual embodiment) for help.  IMHO, this
 is not a sound tactic because human-like embodiment is, itself, probably an
 AI-complete problem.


Incrementally tackling the AI-complete nature of the natural language
problem is one of the primary reasons for going down the virtual embodiment
path in the first place, to ground the concepts that an AI learns in
non-verbal ways which are similar to (but certainly not identical to) the
ways in which humans and other animals learn (see Piaget, et al). Whether or
not human-like embodiment is an AI-complete problem (we're betting it's not)
is much less clear than whether or not natural language comprehension is an
AI-complete problem (research to date indicates that it is).
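
To give a flavour of what grounding a concept in non-verbal ways can mean (a
toy illustration only, not OpenCog's actual mechanism; the data and names are
invented), words can be tied to perceptual features by co-occurrence across
situations rather than to other words:

# Toy sketch of non-verbal grounding: words become associated with perceptual
# features seen across situations, instead of being defined by other words.
from collections import Counter, defaultdict

# Each situation pairs an utterance with the percepts present in the scene.
situations = [
    ("red ball",  {"round", "red", "small"}),
    ("red cup",   {"red", "concave", "small"}),
    ("red box",   {"red", "cubic", "large"}),
    ("blue ball", {"round", "blue", "small"}),
    ("big ball",  {"round", "large"}),
]

assoc = defaultdict(Counter)
for utterance, percepts in situations:
    for word in utterance.split():
        assoc[word].update(percepts)   # count word/percept co-occurrences

def grounding(word, k=1):
    """The most strongly associated percepts = a crude non-verbal 'meaning'."""
    return [p for p, _ in assoc[word].most_common(k)]

print(grounding("red"))   # ['red']   -- tied to the colour seen in every red scene
print(grounding("ball"))  # ['round'] -- tied to shape, not to other words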

Insofar as achieving human-like embodiment and human natural language
 understanding is possible, it is also a very dangerous strategy.  The
 process of understanding human natural language through human-like
 embodiment will, of necessity, lead to the AGHI developing a sense of self.
  After all, that's how we humans got ours (except, of course, the concept
 preceded the language for it).  And look how we turned out.


The development of 'self' in an AI does NOT imply the development of the
same type of ultra-narcissistic self that developed evolutionarily in
humans. The development of something resembling a 'self' in an AI should be
pursued only with careful monitoring, guidance and tuning to prevent the
development of a runaway ultra-narcissistic self.

I realize that an AGHI will not turn on us simply because it understands
 that we're not (like) it (i.e., just because it acquired a sense of self).
  But, it could.  Do we really want to take that chance?  Especially when
 it's not necessary for human-beneficial AGI (AGI without the silent H)?


Embodiment is indeed likely not necessary to reach human-beneficial AGI, but
there's a good line of reasoning that indicates it might be the shortest
path there, managed risks and all. There are also significant risks
(bio/nano/info) in delaying human-beneficial AGI (e.g., from being overly
cautious about getting there via human-like AGI).

-dave





AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Dr. Matthias Heger
Brad Paulson wrote

More generally, as long as AGI designers and developers insist on
simulating human intelligence, they will have to deal with the AI-complete
problem of natural language understanding.  Looking for new approaches to
this problem, many researchers (including prominent members of this list)
have turned to embodiment (or virtual embodiment) for help.  


We only know of one human-level intelligence which works, and it works with 
embodiment. So, for this reason, embodiment seems to be a useful approach.

But, of course, if we always use humans as a guide to developing AGI, then we 
will probably end up with limitations similar to those we observe in humans.

I think an AGI that is to be useful for us must be a very good scientist, 
physicist and mathematician. Are the human kind of learning by experience and 
the human kind of intelligence good for this job? I don't think so. 

Most people on this planet are very poor in these disciplines, and I don't think 
that this is only a question of education. There seems to be a very subtle 
fine-tuning of genes necessary to change the level of intelligence from that of 
a monkey to that of the average human. And there is an even more subtle 
fine-tuning necessary to obtain a good mathematician.

This is discouraging for the development of AGI because it shows that 
human-level intelligence is not only a question of the right architecture but 
seems to be more a question of the right fine-tuning of some parameters. Even 
if we knew that we had the right software architecture, the really hard 
problems would still remain.

We know that humans can swim. But who would create a swimming machine by 
following the example of the human anatomy?

Similarly, we know that some humans can be scientists. But is following the 
example of humans really the best way to create an artificial scientist? 
Probably not.
If your goal is to create an artificial scientist in nanotechnology, is it a 
good strategy to let this artificial agent walk through an artificial garden 
with trees and clouds and so on? Is this the best way to make progress in 
nanotechnology, economics and so on? Probably not.

But if we have no idea how to do better, we have no choice other than to 
follow the example of human intelligence.




Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Mike Tintner

Brad: Unfortunately,
as long as the mainstream AGI community continue to hang on to what 
should, by now, be a thoroughly-discredited strategy, we will never (or 
too late) achieve human-beneficial AGI.


Brad,

Perhaps you could give a single example of what you mean by non-human 
intelligence. What sort of faculties, for instance? Or problem-solving? How 
will these be fundamentally different?


Maybe you didn't follow my discussion with Ben about this - it turned out 
that Novamente is entirely humanoid. IOW, when AGI-ers talk of producing a 
non-human intelligence, what they actually mean in practice is 
cherry-picking those human faculties they like (and think they can mimic) and 
ignoring those they don't (or find too difficult). There is no real, 
thought-through conception of a non-human entity at all. Have you thought one 
through? 






Re: AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Brad Paulsen



Dr. Matthias Heger wrote:

Brad Paulsen wrote: More generally, as long as AGI designers and
developers insist on simulating human intelligence, they will have to
deal with the AI-complete problem of natural language understanding.
Looking for new approaches to this problem, many researchers (including
prominent members of this list) have turned to embodiment (or virtual
embodiment) for help. 

We only know of one human-level intelligence which works, and it works
with embodiment. So, for this reason, embodiment seems to be a useful approach.


Dr. Heger,

First, I don't subscribe to the belief that AGI 1.0 need be human-level. 
In fact, my belief is just the opposite: I don't think it should be 
human-level.  And, with all due respect sir, while we may know that 
human-level intelligence works, we have no idea (or very little idea) *how* 
it works.  That, to me, seems to be the more important issue.


If we did have a better idea of how human-level intelligence worked, we'd 
probably have built a human-like AGI by now.  Instead, for all we know, 
human intelligence (and not just the absence or presence or degree thereof 
in any individual human) may be at the bottom end of the scale in the 
universe of all possible intelligences.


You are also, again with all due respect, incorrect in saying that we have 
no other intelligence with which to work.  We have the digital computer. 
It can beat expert humans at the game of chess.  It can beat any human at 
arithmetic -- both in speed and accuracy.  Unlike humans, it remembers 
anything ever stored in its memory and can recall anything in its memory 
with 100% accuracy.  It never shows up to work tired or hung over.  It 
never calls in sick.  On the other hand, what a digital computer doesn't do 
well at present, things like understanding human natural language and being 
creative (in a non-random way), humans do very well.


So, why are we so hell-bent on building an AGI in our own image?  It just 
doesn't make sense when it is manifestly clear that we know how to do 
better.  Why aren't we designing and developing an AGI that leverages the 
strengths, rather than attempts to overcome the weaknesses, of both forms 
of intelligence?


For many tasks that would be deemed intelligent if Turing's imitation game 
had not required natural HUMAN language understanding (or the equivalent 
mimicking thereof), we have already created a non-human intelligence 
superior to human-level intelligence.  It thinks nothing like we do 
(base-2 vs. base-10) yet, for many feats of intelligence only humans used 
to be able to perform, it is a far superior intelligence.  And, please 
note, not only is human-like embodiment *not* required by this 
intelligence, it would be (as it is to the human chess player) a HINDRANCE.



But, of course, if we always use humans as a guide to developing AGI,
then we will probably end up with limitations similar to those we observe in humans.

I actually don't have a problem with using human-level intelligence as an 
*inspiration* for AGI 1.0.  Digital computers were certainly inspired by 
human-level intelligence.  I do, however, have a problem with using 
human-level intelligence as a *destination* for AGI 1.0.



I think an AGI that is to be useful for us must be a very good
scientist, physicist and mathematician. Are the human kind of learning by
experience and the human kind of intelligence good for this job? I don't
think so.

Most people on this planet are very poor in these disciplines, and I
don't think that this is only a question of education. There seems to be
a very subtle fine-tuning of genes necessary to change the level of
intelligence from that of a monkey to that of the average human. And there
is an even more subtle fine-tuning necessary to obtain a good mathematician.



One must be careful with arguments from genetics.  The average chimp will 
beat any human for lunch in a short-term memory contest.  I don't care how 
good the human contestant is at mathematics.  Since judgments about 
intelligence are always relative to the environment in which it is evinced, 
in an environment where those with good short-term memory skills thrive and 
those without barely survive, chimps sure look like the higher intelligence.



This is discouraging for the development of AGI because it shows that
human-level intelligence is not only a question of the right
architecture but seems to be more a question of the right fine-tuning
of some parameters. Even if we knew that we had the right software
architecture, the really hard problems would still remain.



Perhaps.  But your first sentence should have read, "This is discouraging 
for the development of HUMAN-LEVEL AGI because..."  It doesn't really 
matter to a non-human AGI.



We know that humans can swim. But who would create a swimming machine by
following the example of the human anatomy?



Yes.  Just as we didn't design airplanes to fly bird-like, even though 
the bird was our best source of inspiration for developing 

Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread David Hart
On Sun, Oct 5, 2008 at 7:29 PM, Brad Paulsen [EMAIL PROTECTED] wrote:

 [snip]  Unfortunately, as long as the mainstream AGI community continue to
 hang on to what should, by now, be a thoroughly-discredited strategy, we
 will never (or too late) achieve human-beneficial AGI.


What a strange rant! How can something that's never before been attempted be
considered a thoroughly-discredited strategy? I.e., creating an AI system
designed for *general learning and reasoning* (one with AGI goals
clearly thought through to a greater degree than anyone has attempted
previously: http://opencog.org/wiki/OpenCogPrime:Roadmap ) and then
carefully and deliberately progressing that AI through Piagetian-inspired
stages of learning and development, all the while continuing to
methodically improve the AI with ever more sophisticated software
development, cognitive algorithm advances (e.g. planned improvements to PLN
and MOSES/Reduct), reality modeling and testing iterations, homeostatic
system tuning, intelligence testing and metrics, etc.

One might well have said in early 1903 that the concept of powered flight
was a thoroughly-discredited strategy. It's just as silly to say that now
[about Goertzel's approach to AGI] as it would have been to say it then
[about the Wright brothers' approach to flight].

-dave




Re: AW: AW: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Brad Paulsen



Dr. Matthias Heger wrote:


Brad Paulsen wrote: Fortunately, as I argued above, we do have other
choices.  We don't have to settle for human-like. 

I do not see so far other choices. Chess is AI but not AGI.


Yes, I agree but IFF by AGI you mean human-level AGI.  As you point out
below, a lot has to do with how we define AGI.


Your idea of an incremental roadmap to human-level AGI is interesting,
but I think everyone who tries to build a human-level AGI already makes
incremental experiments and first steps with non-human-level AGI in
order to make a proof of concept. I think Ben Goertzel has done some
experiments with artificial dogs and other non-human agents.

So it is only a matter of definition what we mean by AGI 1.0. I think we
now already have AGI 0.0.x, and the goal is AGI 1.0, which can do the
same as a human.

Why this goal? An AGI which functionally resembles a human (not
necessarily in algorithmic details) has the great advantage that everyone
can communicate with this agent.

Yes, but everyone can communicate with baby AGI right now using a 
highly-restricted subset of human natural language.  The system I'm working 
on now uses the simple, declarative sentence, the propositional (if/then) 
rule statement, and simple query as its NL interface.  The declarations of 
fact and propositional rules are upgraded, internally, to FOL+.  AI-agent 
to AI-agent communication is done entirely in FOL+.  I had considered using 
Prolog for the human interface but the non-success of Prolog in a community 
(computer programmers) already expert at communicating with computers using 
formal languages caused me to drop back to the more difficult, but not 
impossible, semi-formal NL approach.
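
(The general shape of such an interface is easy to sketch. The toy Python below 
is NOT the system described above - the regexes, names and coverage are invented 
for this e-mail - but it shows the general idea of restricted declaratives and 
if/then rules being compiled to simple logic and forward-chained.)

  import re

  facts, rules = set(), []   # facts: (subject, predicate) pairs; rules: (if-pred, then-pred)

  def tell(sentence):
      s = sentence.strip().rstrip(".").lower()
      m = re.match(r"if (\w+) is an? (\w+) then \1 is an? (\w+)", s)
      if m:
          rules.append((m.group(2), m.group(3)))        # e.g. man -> mortal
          return
      m = re.match(r"(\w+) is an? (\w+)", s)
      if m:
          facts.add((m.group(1), m.group(2)))           # e.g. (socrates, man)

  def ask(question):
      m = re.match(r"is (\w+) an? (\w+)", question.strip().rstrip("?").lower())
      subject, predicate = m.group(1), m.group(2)
      changed = True                                    # naive forward chaining to a fixpoint
      while changed:
          changed = False
          for if_pred, then_pred in rules:
              for subj, pred in list(facts):
                  if pred == if_pred and (subj, then_pred) not in facts:
                      facts.add((subj, then_pred))
                      changed = True
      return (subject, predicate) in facts

  tell("Socrates is a man.")
  tell("If X is a man then X is a mortal.")
  print(ask("Is Socrates a mortal?"))                   # -> True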


We don't need to crack the entire NLU problem to be able to communicate 
with AGIs in a semi-formalized version of natural human language.  Sure, 
it can get tedious, just as talking to a two-year-old human child can get 
tedious (unless it's your kid, of course: then it's fascinating!).  Does 
it impress people at demos?  The average person?  Yep, it pretty much 
does, even though it's far from finished at this time.  Skeptical AGHI 
designers and developers?  Not so much.  But I'm working on that!


The question I'm raising in this thread is more one of priorities and 
allocation of scarce resources.  Engineers and scientists comprise only 
about 1% of the world's population.  Is human-level NLU worth the resources 
it has consumed, and will continue to consume, in the pre-AGI-1.0 stage? 
Even if we eventually succeed, would it be worth the enormous cost? 
Wouldn't it be wiser to go with the strengths of both humans and 
computers during this (or any other) stage of AGI development?


Getting digital computers to understand natural human language at 
human-level has proven itself to be an AI-complete problem.  Do we need 
another fifty years of failure to achieve NLU using computers to finally 
accept this?  Developing NLU for AGI 1.0 is not playing to the strengths of 
the digital computer or of humans (who only take about three years to gain 
a basic grasp of language and continue to improve that grasp as they age 
into adulthood).


Computers calculate better than do humans.  Humans are natural language 
experts.  IMHO, saying that the first version of AGI should include 
enabling computers to understand human language like humans is just about 
as silly as saying the first version of AGI should include enabling humans 
to be able to calculate like computers.


IMHO, embodiment is another losing proposition where AGI 1.0 is concerned. 
 For all we know, embodiment won't work until we can produce an artificial 
bowel movement.  It's the "to think like Einstein, you have to stink like 
Einstein" theory.  Well, I don't want AGI 1.0 to think like Einstein.  I 
want it to think BETTER than Einstein (and without the odoriferous 
side-effect, thank you very much).



It would be interesting for me to know which set of abilities you want to
have in AGI 1.0.

Well, we (humanity) need, first, to decide *why* we want to create another 
form of intelligence.  And the answer has to be something other than 
"because we can."  What benefits do we propose should accrue to humanity 
from such an expensive pursuit?  In other words, what does 
"human-beneficial AGI" really mean?


Only once we have ironed out our differences in that regard (or, at least, 
have produced a compromise on a list of core abilities), should we start 
thinking about an implementation.  In general, though, when it comes to 
implementation, we need to start small and play to our strengths.


For example, people who want to build AGHI tend to look down their noses at 
classic, narrow-AI successes such as expert (production) systems (Ben G. is 
NOT in this group, BTW).  This has prevented these folks from even 
considering using this technology to achieve AGI 1.0.  I *am* (proudly and 
loudly) using this technology to build bootstrapping intelligent agents 
for AGI.
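
(For anyone who hasn't looked at one lately, the core of a production system is 
tiny. The hypothetical Python below is a toy illustration, not the bootstrapping 
agents mentioned above: a working memory of facts plus condition/action rules 
fired in a match-select-act loop until nothing more fires.)

  # A minimal production-system loop: match -> select -> fire, until quiescence.
  working_memory = {"goal": "classify", "legs": 4, "sound": "bark"}

  rules = [
      # (name, condition over working memory, facts the action adds)
      ("dog",  lambda wm: wm.get("legs") == 4 and wm.get("sound") == "bark",
               {"species": "dog"}),
      ("done", lambda wm: "species" in wm,
               {"goal": "report"}),
  ]

  fired = set()
  while True:
      # Conflict resolution: take the first not-yet-fired rule whose condition holds.
      matches = [(name, action) for name, cond, action in rules
                 if name not in fired and cond(working_memory)]
      if not matches:
          break
      name, action = matches[0]
      working_memory.update(action)
      fired.add(name)

  print(working_memory)
  # -> {'goal': 'report', 'legs': 4, 'sound': 'bark', 'species': 'dog'}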


Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-05 Thread Brad Paulsen



David Hart wrote:
On Sun, Oct 5, 2008 at 7:29 PM, Brad Paulsen [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


[snip]  Unfortunately, as long as the mainstream AGI community
continue to hang on to what should, by now, be a
thoroughly-discredited strategy, we will never (or too late) achieve
human-beneficial AGI.


What a strange rant! How can something that's never before been 
attempted be considered a thoroughly-discredited strategy? I.e., 
creating an AI system designed for *general learning and 
reasoning* (one with AGI goals clearly thought through to a greater 
degree than anyone has attempted previously: 
http://opencog.org/wiki/OpenCogPrime:Roadmap ) and then carefully and 
deliberately progressing that AI through Piagetian-inspired 
stages of learning and development, all the while continuing to 
methodically improve the AI with ever more sophisticated software 
development, cognitive algorithm advances (e.g. planned improvements to 
PLN and MOSES/Reduct), reality modeling and testing iterations, 
homeostatic system tuning, intelligence testing and metrics, etc.


Please: "strange rant"?  I've been known to employ inflammatory rhetoric in 
the past when my blood was boiling, and I have always been sorry I did it. 
I have an opinion.  You don't think it agrees with your opinion.  That's 
called a disagreement amongst peers.  Not a "strange rant."


First, you have taken my statement out of context.  I was NOT referring to 
Ben G.'s overall approach to AGI.  His *concept* of AGI, if you will.  I 
was referring to the (not his) strategy of making human-level NLU a 
prerequisite for AGI (this is not a strategy pioneered by Ben G.).


Human-level NLU is an AI-complete problem.  So, this strategy makes the 
goal of getting to AGI 1.0 dependent on solving an AI-complete problem. 
The strategy of using embodiment to help crack the NLU problem (also not 
pioneered by Ben G.) may very well be another AI-complete problem (indeed, 
it may contain a whole collection of AI-complete problems).  I don't think 
that's a very good plan.  You, apparently, do.  I can point to past 
failures, you can only point to future possibilities.  Still, neither of us 
is going to convince the other we are right.  End of story.  Time will tell 
(and this e-mail list is conveniently archived for later reference).


Second, ...never before been attempted...?  Simply not true.  I was in 
high school when this stuff was first attempted.  I personally remember 
reading about it.  I haven't succumbed to Alzheimer's yet.  By the time I 
got to college, most of the early predictions had already been shown to 
have been way too optimistic.  But, since eyewitness testimony is not 
usually good enough, I give you this quote from the Wikipedia article on 
Strong AI (which is what searching Wikipedia for AGI will get you):


The first generation of AI researchers were convinced that [AGI] was 
possible and that it would exist in just a few decades.  As AI pioneer 
Herbert Simon wrote in 1965: "machines will be capable, within twenty 
years, of doing any work a man can do."[10]  Their predictions were the 
inspiration for Stanley Kubrick and Arthur C. Clarke's character HAL 9000, 
who accurately embodied what AI researchers believed they could create by 
the year 2001.  Of note is the fact that AI pioneer Marvin Minsky was a 
consultant[11] on the project of making HAL 9000 as realistic as possible 
according to the consensus predictions of the time, having himself said on 
the subject in 1967, "Within a generation...the problem of creating 
'artificial intelligence' will substantially be solved."[12]

(http://en.wikipedia.org/wiki/Artificial_general_intelligence)

So, it has, in fact, been tried before.  It has, in fact, always failed. 
Your comments about the quality of Ben's approach are noted.  Maybe you're 
right.  But, it's not germane to my argument which is that those parts of 
Ben G.'s approach that call for human-level NLU, and that propose 
embodiment (or virtual embodiment) as a way to achieve human-level NLU, 
have been tried before, many times, and have always failed.  If Ben G. 
knows something he's not telling us then, when he does, I'll consider 
modifying my views.  But, remember, my comments were never directed at the 
OpenCog project or Ben G. personally.  They were directed at an AGI 
*strategy* not invented by Ben G. or OpenCog.




One might well have said in early 1903 that the concept of powered 
flight was a thoroughly-discredited strategy. It's just as silly to 
say that now [about Goertzel's approach to AGI] as it would have been to 
say it then [about the Wright brothers' approach to flight].




What?  No, it's not just as silly.

Let me see if I have this straight.  You would have me believe that, because 
"one might as well have said in early 1903 the concept of powered flight 
was a 'thoroughly-discredited' strategy," my objection to a 2008 AGI 
strategy is just as silly.  Nice try, 

AW: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Dr. Matthias Heger
From my points 1 and 2 it should be clear that I was not talking about a
distributed AGI which is in NO place. The AGI you mean consists of several
parts which are in different places. But this is already the case with the
human body. The only difference is that the parts of the distributed AGI
can be placed several kilometers from each other. But this is only a
quantitative and not a qualitative point.

Now to my statement about a useful representation of space and time for AGI.
We know that our intuitive understanding of space and time works very well
in our lives. But the ultimate goal of AGI is that it can solve problems
which are very difficult for us. If we give an AGI the bias of a model of
space and time which is not state of the art with respect to the knowledge
we have from physics, then we give the AGI a limitation which we ourselves
suffer from and which is not necessary for an AGI.
This point has nothing to do with the question of whether the AGI is
distributed or not.
I mentioned this point because your question relates to the more
fundamental question of whether, and which, bias we should give an AGI for
the representation of space and time.


-----Original Message-----
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 4 October 2008 14:13
To: agi@v2.listbox.com
Subject: Re: [agi] I Can't Be In Two Places At Once.

Matthias: I think it is extremely important that we give an AGI no bias about
space and time as we seem to have.

Well, I (and possibly Ben) have been talking about an entity that is in many 
places at once - not in NO place. I have no idea how you would swing that - 
other than what we already have - machines that are information-processors 
with no sense of identity at all. Do you? 








AW: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Dr. Matthias Heger
Stan wrote:

Seems hard to imagine information processing without identity. 
Intelligence is about invoking methods.  Methods are created because 
they are expected to create a result.  The result is the value - the 
value that allows them to be selected from many possible choices.


Identity can be distributed in space. My conscious model of myself is not
located at a single point in space. I identify myself with my body. I do not
even have to know that I have a brain. But my body is distributed in space.
It is not a point. This is also the case with my conscious model of myself
(= model of my body).

Furthermore, if you think more from a computer scientist's point of view:
even your brain is distributed in space and is not at a single place. Your
brain consists of a huge number of processors, where each processor is at a
different place. So I see no new problem with distributed AGI at all.

Stan wrote

Is it the time and space bias that is the issue?  If so, what is the 
bias that humans have which machines shouldn't?


I don't know whether it comes from a bias in our space and time
representation or from a bias within our learning algorithms. But all humans
create a model of their environment with the law that a physical object has a
certain position at a certain time. Also, we think intuitively that the
distance to a point does not depend on our velocity towards that point.
These are two examples which are completely wrong, as we know from modern
physics. Why is it so important for an AGI to know this?
Because AGI should help us with progress in technology, and the most
promising open fields in technology are in the nanoworld and the macrocosm.
It would be useful if an AGI had an intuitive understanding of the laws in
these worlds.
We should avoid rebuilding our own weaknesses within AGI.
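
(A concrete example of the second intuition being wrong, as a small hypothetical
Python calculation and nothing more: the distance an observer measures to a
destination shrinks with the observer's speed toward it, by the Lorentz factor.)

  import math

  C = 299_792_458.0                      # speed of light, m/s

  def measured_length(rest_length_m, v_mps):
      """Length of a distance as measured by an observer moving toward it at v."""
      return rest_length_m * math.sqrt(1.0 - (v_mps / C) ** 2)

  one_light_year = 9.4607e15             # metres
  print(measured_length(one_light_year, 0.0))        # ~9.46e15 m at rest
  print(measured_length(one_light_year, 0.8 * C))    # ~5.68e15 m at 0.8c (40% shorter)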





Re: AW: [agi] I Can't Be In Two Places At Once.

2008-10-04 Thread Brad Paulsen
Dr. Heger,

Point #3 is brilliantly stated.  I couldn't have expressed it better.  And
I know this because I've been trying to do so, in slightly broader terms,
for months on this list.  Insofar as providing an AGI with a human-biased
sense of space and time is required to create a human-like AGI (what I
prefer to call AG*H*I), I agree it is a mistake.

More generally, as long as AGI designers and developers insist on
simulating human intelligence, they will have to deal with the AI-complete
problem of natural language understanding.  Looking for new approaches to
this problem, many researchers (including prominent members of this list)
have turned to embodiment (or virtual embodiment) for help.  IMHO, this
is not a sound tactic because human-like embodiment is, itself, probably an
AI-complete problem.

Insofar as achieving human-like embodiment and human natural language
understanding is possible, it is also a very dangerous strategy.  The
process of understanding human natural language through human-like
embodiment will, of necessity, lead to the AGHI developing a sense of self.
 After all, that's how we humans got ours (except, of course, the concept
preceded the language for it).  And look how we turned out.

I realize that an AGHI will not turn on us simply because it understands
that we're not (like) it (i.e., just because it acquired a sense of self).
  But, it could.  Do we really want to take that chance?  Especially when
it's not necessary for human-beneficial AGI (AGI without the silent H)?

Cheers,
Brad


Dr. Matthias Heger wrote:
 1. We feel ourselves not exactly at a single point in space. Instead, we
 identify ourselves with our body, which consists of several parts that
 are already at different points in space. Your eye is not at the same place
 as your hand.
 I think this is a proof that a distributed AGI will not need to have a
 completely different conscious state for a model of its position in space
 than we already have.
 
 2. But to a certain degree you are of course right that we have a map of our
 environment and we know our position (which is not a point, because of 1) in
 this map. In the brain of a rat there are neurons which each represent a
 position in the environment. Researchers could predict the position of the
 rat only by looking into the rat's brain.
 
 3. I think it is extremely important that we give an AGI no bias about
 space and time as we seem to have. Our intuitive understanding of space and
 time is useful for our life on earth, but it is completely wrong, as we know
 from the theory of relativity and quantum physics. 
 
 -Matthias Heger
 
 
 
 -----Original Message-----
 From: Mike Tintner [mailto:[EMAIL PROTECTED] 
 Sent: Saturday, 4 October 2008 02:44
 To: agi@v2.listbox.com
 Subject: [agi] I Can't Be In Two Places At Once.
 
 The foundation of the human mind and system is that we can only be in one 
 place at once, and can only be directly, fully conscious of that place. Our 
 world picture,  which we and, I think, AI/AGI tend to take for granted, is 
 an extraordinary triumph over that limitation - our ability to conceive of
 the earth and universe around us, and of societies around us, projecting 
 ourselves outward in space, and forward and backward in time. All animals 
 are similarly based in the here and now.
 
 But, if only in principle, networked computers [or robots] offer the 
 possibility for a conscious entity to be distributed and in several places 
 at once, seeing and interacting with the world simultaneously from many 
 POV's.
 
 Has anyone thought about how this would change the nature of identity and 
 intelligence? 
 
 
 
 
 
 
 
 



AW: [agi] I Can't Be In Two Places At Once.

2008-10-03 Thread Dr. Matthias Heger
1. We feel ourselves not exactly at a single point in space. Instead, we
identify ourselves with our body, which consists of several parts that
are already at different points in space. Your eye is not at the same place
as your hand.
I think this is a proof that a distributed AGI will not need to have a
completely different conscious state for a model of its position in space
than we already have.

2. But to a certain degree you are of course right that we have a map of our
environment and we know our position (which is not a point, because of 1) in
this map. In the brain of a rat there are neurons which each represent a
position in the environment. Researchers could predict the position of the
rat only by looking into the rat's brain.
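
(As a purely illustrative aside, a toy "population vector" decode of such
place-cell data might look like the hypothetical Python below; the cells,
positions and firing rates are invented.)

  # Each "place cell" has a preferred position; estimate the rat's position
  # as the firing-rate-weighted average of those preferred positions.
  cells = [          # (preferred (x, y) position in the arena, observed firing rate)
      ((0.0, 0.0), 1.0),
      ((1.0, 0.0), 4.0),
      ((1.0, 1.0), 3.0),
  ]
  total_rate = sum(rate for _, rate in cells)
  x = sum(px * rate for (px, _), rate in cells) / total_rate
  y = sum(py * rate for (_, py), rate in cells) / total_rate
  print((round(x, 2), round(y, 2)))      # decoded position ~ (0.88, 0.38)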

3. I think it is extremely important that we give an AGI no bias about
space and time as we seem to have. Our intuitive understanding of space and
time is useful for our life on earth, but it is completely wrong, as we know
from the theory of relativity and quantum physics. 

-Matthias Heger



-----Original Message-----
From: Mike Tintner [mailto:[EMAIL PROTECTED] 
Sent: Saturday, 4 October 2008 02:44
To: agi@v2.listbox.com
Subject: [agi] I Can't Be In Two Places At Once.

The foundation of the human mind and system is that we can only be in one 
place at once, and can only be directly, fully conscious of that place. Our 
world picture,  which we and, I think, AI/AGI tend to take for granted, is 
an extraordinary triumph over that limitation - our ability to conceive of
the earth and universe around us, and of societies around us, projecting 
ourselves outward in space, and forward and backward in time. All animals 
are similarly based in the here and now.

But, if only in principle, networked computers [or robots] offer the 
possibility for a conscious entity to be distributed and in several places 
at once, seeing and interacting with the world simultaneously from many 
POV's.

Has anyone thought about how this would change the nature of identity and 
intelligence? 





