Re: [agi] Microsoft Launches Singularity

2008-03-28 Thread Mike Tintner
Steve,

You raise huge issues. I broadly agree with the direction you're going with 
your multilevelled approach to physically implementing verbal commands. 
However, I'm quite sure there is still more to it than you think - including a 
whole level of image schemas. It is useful here to think of the analogy of 
geometry as a supportive level underlying science's upper level of words and 
other symbols.

I seriously recommend, in fact insist, that you have got to get into 
Lakoff-Johnson, and Rizzolatti-Gallese-Iacoboni & the mirror-neurons crowd. 
These guys are working together & doing some of the hottest research at the 
moment. Try Chap. 8 of Mark Johnson, "The Meaning of the Body" - and more. 
Basically, experiments show the brain does start to instantiate and process 
physical verbal commands and ideas on a pre-motor level all the time - and 
indeed has to, if you think about it. If someone says "come with me to the 
supermarket", your brain has to process that on a motor level for you to 
immediately reply: "I can't, I've got a weak ankle."

Actually, come to think of it, verbal porn is probably a truly great area to 
explore in terms of multilevelled, and very physical, processing!

I haven't really thought about physical/robotic instantiation of commands much, 
except that the starting point will normally be that the body and its limbs 
typically offer something like a 180-360 degree spectrum of freedom of movement 
on any given plane, and then I guess, as you indicate, the brain-body will 
plump first for the easiest, most direct line of physical approach to a target, 
and then adjust according to obstacles. Clearly it will have certain movement 
sets/skills - so even if you are trying to dance around, say, freely, 
improvisationally, you tend to fall into certain familiar kinds of moves and 
find it difficult to branch out in new directions. As soon as one starts to 
think about these areas, it seems to me, the need for what I would call a loose 
"geoiconography" (as opposed to precise geometry/geography) of thought - i.e. 
a system of mental image schemas - becomes apparent.
  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Friday, March 28, 2008 4:30 AM
  Subject: Re: [agi] Microsoft Launches Singularity





Re: [agi] Microsoft Launches Singularity

2008-03-28 Thread Stephen Reed
Mike,

I have Lakoff & Johnson's "Metaphors We Live By", and I'll order the other 
titles you recommend.
-Steve

Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Friday, March 28, 2008 8:07:26 AM
Subject: Re: [agi] Microsoft Launches Singularity



Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Mike Tintner
Charles: I don't think a General Intelligence could be built entirely out of
narrow AI components, but it might well be a relatively trivial add-on.
Just consider how much of human intelligence is demonstrably narrow AI
(well, not artificial, but you know what I mean).  Object recognition,
e.g.  Then start trying to guess how much of the part that we can't
prove a classification for is likely to be a narrow intelligence
component.  In my estimation (without factual backing) less than 0.001
of our intelligence is General Intelligence, possibly much less.




John:  I agree that it may be 1%. 




Oh boy, does this strike me as absurd. Don't have time for the theory right 
now, but just had to vent. Percentage estimates strike me as a bit silly, 
but if you want to aim for one, why not look at both your paragraphs, word 
by word: "don't", "think", "might", "relatively", etc. Now which of those words 
can only be applied to a single type of activity, rather than an open-ended 
set of activities? Which cannot be instantiated in an open-ended if not 
infinite set of ways? Which is not a very valuable if not key tool of a 
General Intelligence, that can adapt to solve problems across domains? 
Language, IOW, is the central (but not essential) instrument of human general 
intelligence - and I can't think offhand of a single word that is not a 
tool for generalising across domains, including "Charles H." and "John G.".


In fact, every tool you guys use - logic, maths etc. - is similarly general 
and functions in similar ways. The above strikes me as a 99% failure to 
understand the nature of general intelligence. 





RE: [agi] Microsoft Launches Singularity

2008-03-27 Thread John G. Rose
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 
Mike you are 100% potentially right with a margin of error of 110%. LOL!

Seriously Mike how do YOU indicate approximations? And how are you
differentiating general and specific? And declaring relative absolutes and
convenient infinitudes... I'm trying to understand your argument.

John



Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Mike Tintner

John,

I'm developing this argument more fully elsewhere, so I'll just give a 
partial gist. What I'm saying - and I stand to be corrected - is that I 
suspect that literally no one in AI and AGI (and perhaps philosophy) present 
or past understands the nature of the tools they are using.


All the tools - all the sign systems currently used - especially language - 
are actually general-purpose - AS USED BY THE HUMAN BRAIN.


The whole point of just about every word in language is that it constitutes 
a general, open brief which can be instantiated in any one of an infinite 
set of ways.


So if I tell you to "handle" an object, or a piece of business, like say 
removing a chair from the house - that word "handle" is open-ended and 
gives you vast freedom within certain parameters as to how to apply your 
hand(s) to that object. Your hands can be applied to move a given box, for 
example, in a vast if not infinite range of positions and trajectories. Such 
a general, open concept is of the essence of general intelligence, because 
it means that you are immediately ready to adapt to new kinds of situation - 
if your normal ways of handling boxes are blocked, you are ready to seek out 
or improvise some strange new contorted two-finger hand position to pick up 
the box - which also counts as "handling". (And you will have actually done a 
lot of this.)


So what is the meaning of "handle"? Well, to be precise, it doesn't have 
a/one meaning, and isn't meant to - it has a range of possible 
meanings/references, and you can choose which is most convenient in the 
circumstances.


The same principles apply to just about every word in language and every 
unit of logic and mathematics.


But - and correct me - I don't think anyone in AI/AGI is using language or 
any logico-mathematical systems in this general, open-ended way - the way 
they are actually meant to be used - which is the very foundation of General 
Intelligence.


Language and the other systems are always used by AGI in specific ways to 
have specific meanings. YKY, typically, wanted a language for his system 
which had precise meanings. Even Ben, I suspect, may only employ words in an 
"open" way, in that their meanings can be changed with experience - but at 
any given point their meanings will have to be specific.


To be capable of generalising as the human brain does - and of true AGI - 
you have to have a brain that simultaneously processes on at least two if 
not three levels, with two/three different sign systems - including both 
general and particular ones.





Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Ben Goertzel
  So if I tell you to handle an object, or a piece of business, like say
  removing a chair from the house - that word handle is open-ended and
  gives you vast freedom within certain parameters as to how to apply your
  hand(s) to that object. Your hands can be applied to move a given box, for
  example, in a vast if not infinite range of positions and trajectories. Such
  a general, open concept is of the essence of general intelligence, because
  it means that you are immediately ready to adapt to new kinds of situation -
  if your normal ways of handling boxes are blocked, you are ready to seek out
  or improvise some strange new contorted two-finger hand position to pick up
  the box - which also count as handling. (And you will have actually done a
  lot of this).

  So what is the meaning of handle? Well, to be precise, it doesn't have
  a/one meaning, and isn't meant to - it has a range of possible
  meanings/references, and you can choose which is most convenient in the
  circumstances.

Actually I'd make a stronger statement than that.

It's not just that we can CHOOSE the meanings of concepts from a fixed menu
of possibilities ... we CREATE the meanings of concepts as we use them ...
this is how and why concept-meanings continually change over time in
individual minds and in cultures...

This is parallel to how we create episodic memories as we re-live them,
rather than retrieving them as if from a database...

These creation processes do however seem to be realizable in digital
computer systems, based on my theoretical understanding ... though none
of us have done it yet, it's certainly loads of work given current software
tools...

Ben



Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Mike Tintner
Ben: It's not just that we can CHOOSE the meanings of concepts from a fixed 
menu of possibilities ... we CREATE the meanings of concepts as we use them ...
this is how and why concept-meanings continually change over time in
individual minds and in cultures...

Yes. Good point. Generality/open-endedness of sign systems and creativity 
are intertwined - Creativity here being used in the most general sense to 
cover everything from the everyday kind, such as improvising new 
bag-carrying hand positions, to the Edisonian kind. 





Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Stephen Reed
Mike,

An interesting paper on the meanings of words is "I don't believe in word 
senses" by Adam Kilgarriff.  He concludes:

Following a description of the conflict between WSD [Word Sense Disambiguation] 
and lexicological research, I examined the concept, ‘word sense’. It was not 
found to be sufficiently well defined to be a workable basic unit of meaning. I 
then presented an account of word meaning in which ‘word sense’ or ‘lexical 
unit’ is not a basic unit. Rather, the basic units are occurrences of the word 
in context (operationalised as corpus citations). In the simplest case, corpus 
citations fall into one or more distinct clusters and each of these clusters, 
if large enough and distinct enough from other clusters, forms a distinct word 
sense. But many or most cases are not simple, and even for an apparently 
straightforward common noun with physical objects as denotation, handbag, there 
are a significant number of aberrant citations. The interactions between a 
word’s uses and its senses were explored in some detail. The analysis also 
charted the potential
 for lexical creativity. The implication for WSD is that word senses are only 
ever defined relative to a set of interests. The set of senses defined by a 
dictionary may or may not match the set that is relevant for an NLP [Natural 
Language Processing] application. The scientific study of language should not 
include word senses as objects in its ontology. Where ‘word senses’ have a role 
to play in a scientific vocabulary, they are to be construed as abstractions 
over clusters of word usages.


Accordingly, I am attracted to Fluid Construction Grammar in my own work 
because the minimal constituent in that grammar is the construction, which in 
some cases can be a word, but often is not.
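
To make Kilgarriff's clustering picture concrete, here is a minimal Python 
sketch (purely illustrative, not anything from the paper): "senses" of a word 
are induced as clusters over its corpus citations. The example citations and 
the choice of TF-IDF vectors with k-means are assumptions made only for 
illustration.

# Toy sketch: word "senses" induced as clusters over corpus citations,
# in the spirit of "abstractions over clusters of word usages".
# The citations and the TF-IDF + k-means choices are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

citations = [
    "she opened her handbag and took out her keys",
    "a leather handbag lay on the seat beside her",
    "the committee handbagged the minister over the budget",
    "critics handbagged the proposal as soon as it appeared",
]

vectors = TfidfVectorizer().fit_transform(citations)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, citation in zip(labels, citations):
    print(label, citation)  # each cluster plays the role of one induced "sense"

With only four citations the clusters are merely suggestive, but the shape of 
the computation matches the account above: no sense inventory is assumed in 
advance; senses emerge, or fail to emerge, from the usage data.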

You gave as an example:

So if I tell you to handle an object, or a piece of business, like say 
removing a chair from the house - that word handle is open-ended and 
gives you vast freedom within certain parameters as to how to apply your 
hand(s) to that object. 

 
The utterance "Texai, handle removing a chair from the house" would, in my 
system, be processed as an imperative construction, parsing out these discourse 
referring objects:

Texai - the software agent commanded to perform the handling action
handling action - specifically, the action in which responsibility for 
accomplishing the removing action is accepted
removing action - the type of removing intended by the author of the command
house - the location of the action
chair - the item to be removed
imperative situation - the enclosing utterance situation in which these other 
objects are related
The Texai system, as envisioned by me to operate, would recognize this command 
as a parametrized task, then either (1) find an existing skill module capable 
of performing the task, or (2) compose a sequence of more primitive skills 
whose combination is capable of performing the task.
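
As a rough sketch of that dispatch step (illustrative Python, not Texai code; 
every identifier below is hypothetical), the parsed imperative becomes a 
parametrized task that is either served by a registered skill module or 
decomposed into more primitive skills:

# Hypothetical sketch of the dispatch described above; none of these names
# correspond to real Texai APIs.
task = {
    "action": "handle",      # accept responsibility for the removing action
    "subtask": "remove",
    "theme": "chair",
    "location": "house",
    "agent": "Texai",
}

skill_registry = {("handle", "remove"): "delegate-or-perform-removal"}
primitive_skills = ["locate", "grasp", "carry", "release"]

def perform(task):
    key = (task["action"], task["subtask"])
    if key in skill_registry:                  # (1) invoke an existing skill module
        return "invoke " + skill_registry[key]
    # (2) otherwise compose a sequence of more primitive skills
    return [step + " " + task["theme"] for step in primitive_skills]

print(perform(task))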

As you point out, the task may be performed directly by the agent, or 
indirectly by managing the effort of some other agent.  The author of the 
command does not care which alternative is chosen by the commanded agent - 
hence the use of the word "handle" in this construction.

-Steve

Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 27, 2008 11:04:08 AM
Subject: Re: [agi] Microsoft Launches Singularity


Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Ben Goertzel
It's true, a word sense is not a crisp thing like a part-of-speech
... it's more of a cluster among usage-instances...

Yet, this kind of fuzzy, cluster-type category does play an important
role in cognition, no?

ben g

2008/3/27 Stephen Reed [EMAIL PROTECTED]:


Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Vladimir Nesov
[Warning: A random blurb on the word theme].

Words and similar things are marvelous high-level training tools. They 
provide a uniform interface that allows high-level concepts to be accessed 
through low-level standard input. They allow supervised training to be 
performed without special 'label signals'. For action learning, a word 
can be associated with a certain kind of action, and then this word 
can be used to evoke that kind of action in novel situations, forcing 
that class of actions onto those situations. Words are simple high-level 
handles that can be used to move around and direct the complex processes 
associated with them.
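
As a toy illustration of this (nothing more than a sketch; the names are 
invented), a word acts as a handle that evokes an associated class of actions 
when applied to a new situation:

# Toy sketch: a word as a high-level handle onto a class of actions.
word_to_action = {}

def associate(word, action):
    word_to_action[word] = action           # training without a special 'label signal'

def evoke(word, situation):
    return word_to_action[word](situation)  # force that action class onto a novel situation

associate("fetch", lambda thing: "locate " + thing + ", grasp it, and bring it back")
print(evoke("fetch", "the red ball"))       # the word directs the associated process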

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Bob Mottram
On 27/03/2008, Mike Tintner [EMAIL PROTECTED] wrote:

 3. While philosophically, intellectually, most people dealing with this
 area may expect words to have precise meanings,  they know practically and
 intuitively that this is impossible and work on the basis that words can
 have different meanings according to who uses them - and that they
 themselves keep shifting their usage of words. Philosophers, for example may
 argue philosophically that words can and should have precise meanings and be
 treated as true or false, but know in practice that pretty well all the
 major words/concepts in philosophy,  like
 mind/consciousness/determinism - have multiple, indeed
 endless definitions. Or just think about AGI'ers and intelligence.



It seems to me that the linguistics are just a secondary phenomenon, intended 
to express, and riding on top of, a deeper underlying dynamic consisting of 
prelinguistic concepts, motor acts, imagery and so on.  There may be a 
many-to-one mapping between the linguistic expression and the underlying models, 
hence the belief that individuals may talk with precise meanings.  In a 
sense the meanings may be precise when translated into the underlying 
models, but the process of interpretation may have multiple paths and be 
quite ambiguous.

Trying to understand language completely in isolation as a kind of
statistical word game is probably going to fail in my view.  Language itself
is just a tool or mode of expression for things which may be intimately
bound up with our embodiment and the way we perceive and act in the world.



Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Stephen Reed
Ben, 
I would agree with an even stronger version of your statement: treating word 
senses as fuzzy, cluster-type categories in the context of usage-instances is 
the only cognitively plausible method for an AGI to comprehend and produce them.
-Steve
 
Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

- Original Message 
From: Ben Goertzel [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 27, 2008 5:37:40 PM
Subject: Re: [agi] Microsoft Launches Singularity

It's true, a word sense is not a crisp thing like a part-of-speech
... it's more of a cluster among usage-instances...

Yet, this kind of fuzzy, cluster-type category does play an important
role in cognition, no?

ben g









  



Re: [agi] Microsoft Launches Singularity

2008-03-27 Thread Stephen Reed

- Original Message 
From: Mike Tintner [EMAIL PROTECTED]
To: agi@v2.listbox.com
Sent: Thursday, March 27, 2008 5:30:12 PM
Subject: Re: [agi] Microsoft Launches Singularity

Steve,
 
Some odd thoughts in reply. Thanks BTW for 
article.
 
1. You don't seem to get what's implicit in the main point - you can't reliably 
work out the sense of an enormous number of words by any kind of word lookup 
whatsoever. How do you actually work out how to "handle the object" - the 
slimy, slippery, twisted, ropey thing-y, or whatever? By looking at it. By 
looking at images of it - either directly or by entertaining them mentally - 
not by consulting any kind of dictionary or word definitions at all. By imagining 
what parts of the object to grip, and how to configure your hands to grip it.

Steve: Sorry that I missed that.  But your clarifying issue is quite 
interesting.  Let me try to tease apart your scenario and explain how the 
envisioned Texai system would process the command "handle the object".  I 
assume that you agree that an AGI designed to our mutual satisfaction should in 
principle be able to process that particular command with at least the same 
competence as a human.  So the issue for me is to explain in brief how Texai 
might do it.

First, I assume that Texai has a body of commonsense knowledge about, and skills 
applicable to, the kinds of objects that can be handled.  If not, then there 
is a knowledge acquisition phase, and a skill acquisition phase, that must be 
completed beforehand.

Second, I assume that the linguistic concepts are expressed internally by the 
system as symbolic terms.  Many terms, for example objects that can be handled, 
are grounded to the real world by an abstraction hierarchy.  Descending down 
this hierarchy, objects are represented less and less as symbols in logical 
statements, and more and more as clustered feature vectors, and perhaps, at the 
lowest levels, as no internal state at all - just sensors and actuators in 
contact with the real world.
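
A minimal sketch of that grounding hierarchy (purely illustrative; the levels 
and numbers are invented): the same object appears as a symbolic term at the 
top, as a clustered feature vector in the middle, and only as a sensor/actuator 
mapping at the bottom.

# Hypothetical sketch of the abstraction hierarchy described above.
symbolic_level = {
    "term": "GraspableObject",
    "assertions": ["(isa obj1 GraspableObject)"],   # symbols in logical statements
}

feature_level = {
    "cluster_centroid": [0.12, 0.87, 0.05],         # e.g. slipperiness, curvature, mass
    "members": ["obj1", "obj7"],                    # clustered feature vectors
}

def actuator_level(grip_force):
    # Lowest level: no internal state, just a command passed to an effector.
    return max(0.0, min(1.0, grip_force))           # clamp the commanded grip force

print(symbolic_level["term"], feature_level["cluster_centroid"], actuator_level(1.3))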

Thirdly, I distinguish between understanding the command "handle the 
object" and generating the behavior required to perform the command.  I think 
that you are conflating these two notions to make the scenario more difficult 
than it otherwise would be.  As you perhaps know, Texai is a hierarchical 
control system.  I expect that skills will be present to handle various kinds 
of objects, so for me the issue is to determine the correct skill to invoke in 
order to perform the given command.  As I explained in my previous post, Fluid 
Construction Grammar does not determine semantics by word lookup; rather it 
looks up constructions, which might be words, but often are not.
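
A toy sketch of that lookup (illustrative only, not actual Fluid Construction 
Grammar machinery): semantics is keyed off constructions, preferring multi-word 
matches over single words.

# Hypothetical sketch: look up constructions, longest match first, rather than
# assigning a sense to each word in isolation.
constructions = {
    ("handle", "the", "object"): "imperative: accept responsibility for manipulating X",
    ("pick", "up"): "action: grasp-and-lift",
    ("handle",): "underspecified without a larger construction",
}

def match_constructions(tokens):
    matches, i = [], 0
    while i < len(tokens):
        for length in range(len(tokens) - i, 0, -1):   # prefer the longest construction
            key = tuple(tokens[i:i + length])
            if key in constructions:
                matches.append((key, constructions[key]))
                i += length
                break
        else:
            i += 1                                     # no construction starts here
    return matches

print(match_constructions("handle the object".split()))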

Given these assumptions of mine, your scenario suggests that the object to be 
handled is one for which the system has no previous skill, or for which the 
existing skill cannot be recognized as applicable to the given object.  Because 
I am now building a bootstrap dialog system that is motivated entirely by the 
need to process novel situations, I am tempted to say that the system should 
simply ask the user to teach it how to handle the novel object, or to ask if an 
existing skill can be applied to the given object.  However, let's move beyond 
this approach, and I'll explain how the system uses existing perception and 
planning skills to handle the given object.
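
A small sketch of that fallback (hypothetical names throughout): when no 
applicable skill is recognized, the system either asks whether an existing 
skill applies or asks the user to teach a new one.

# Hypothetical sketch of the bootstrap-dialog fallback described above.
def handle_novel_object(obj, skills, ask_user, teach):
    applicable = [s for s in skills if s["applies_to"] == obj["kind"]]
    if applicable:
        return applicable[0]                                   # recognized existing skill
    if ask_user("Can an existing skill be applied to this " + obj["kind"] + "?"):
        return ask_user("Which skill should I use?")           # user names an existing skill
    return teach(obj)                                          # user teaches a new skill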

By way of simplification, I'll assume that your intent when asking the system to 
"handle the object" is for it to pick the object up with some physical actuator.  
And I'll preface my explanation of this step by stating without proof that this 
task is analogous to those already solved by state-of-the-art urban driverless 
cars, e.g. "drive yourself to location X", where the driverless car has never 
been to X.  Rather than a futile attempt to explain all cases that come to mind, 
I'll discuss a couple to give a flavor of my approach.

Case 1: The system can sense that the novel object is not dangerous and cannot 
be easily destroyed by its actuators.  Then I propose that the first strategy 
tried should be to pick it up in the most direct fashion, and compensate in 
subsequent attempts for failure modes that resulted from the earlier 
attempts.  This is like the pole balancing task that can be accomplished by 
connectionist methods and no symbolic planning.
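
A toy sketch of Case 1 (illustrative only): attempt the most direct grasp and 
compensate for the observed failure mode on each subsequent attempt, with no 
symbolic planning involved.

# Hypothetical sketch of Case 1: direct attempts plus compensation for failures.
def attempt_pickup(try_grasp, grip_force=0.5, max_attempts=10):
    for _ in range(max_attempts):
        outcome = try_grasp(grip_force)   # assumed to return "ok", "slipped" or "crushed"
        if outcome == "ok":
            return grip_force
        # compensate for the failure mode observed on this attempt
        grip_force += 0.1 if outcome == "slipped" else -0.1
    return None                           # give up and escalate, e.g. ask the user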

Case 2: The system senses that the actions to pick up the object are not subject 
to experimentation, but must be performed correctly on the first attempt.  For 
this task, the system must observe all the object state that it can to remove 
uncertainty.  It must create a symbolic model of the object and its dynamics at 
the right level of abstraction, and perform planning using symbolic 
representations of its possible actions in order to create a trajectory that 
satisfies the command to handle the object.  Then it must execute the plan, 
repairing the plan as needed when the problem state evolves in ways that were 
not planned for in advance.
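
A toy sketch of Case 2 (illustrative only; a real planner and world model are 
assumed away): build a symbolic model, plan, execute, and repair the plan 
whenever the observed state diverges from what was planned.

# Hypothetical sketch of Case 2: plan over a symbolic model, execute, repair.
def plan(model, goal):
    # stand-in for a real planner over symbolic action representations
    return ["observe-object", "compute-grasp-points", "move-arm", "close-gripper", "lift"]

def execute_with_repair(model, goal, execute_step, observe):
    steps = plan(model, goal)
    while steps:
        execute_step(steps.pop(0))
        state = observe()
        if state != model.get("expected_state", state):   # the world diverged from the plan
            model["expected_state"] = state
            steps = plan(model, goal)                     # repair: replan from the new state
    return "done"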

RE: [agi] Microsoft Launches Singularity

2008-03-27 Thread John G. Rose
 From: Mike Tintner [mailto:[EMAIL PROTECTED]
 
OK, I think I see what you are saying - I had to think about this for a bit. Kind
of interesting that language, and to a certain extent mathematics, oftentimes has
handles which refer to generalities. And in order to maintain a common
understanding, say if two agents were communicating with language, there
need to be one or more moving foci within a fuzzy perimeter of specifics
included within a handle's generality. There are probably two to three, and
maybe more, of these levels or foci, yes.

The English language definitely has this property. Math, on the other hand, is
different: you can control it more. English operates within a
region of constraints; it is strongly tied to human communication, which is
convenient since we are human. But when you think about it, if you change
and manipulate this two- or three-level/foci dynamic you can come up with some
really good and interesting forms of literary expressiveness. This is
done often with experimental writing, and if one is good at it one can
communicate extremely effectively.

Now, relating this to general intelligence: including a creativity modus
operandi within the domain of this foci set may involve some form of general
intelligence for goal attainment. Sure, I agree that this can be an
operational form of general intelligence, I suppose, but I am not sure if it
is more of a communicatory operational protocol that is state-driven by
information flow... IOW, more of a reactionary thing...

John





Re: Mark Launches Singularity :-) WAS Re: [agi] Microsoft Launches Singularity

2008-03-26 Thread Chris Petersen
Mentifex called; it wants its ASCII diagrams back.

-Chris



Re: [agi] Microsoft Launches Singularity

2008-03-26 Thread Charles D Hixson

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]
My take on this is completely different.

When I say Narrow AI I am specifically referring to something that is
so limited that it has virtually no chance of becoming a general
intelligence.  There is more to general intelligence than just throwing
a bunch of Narrow AI ideas into a pot and hoping for the best. If it
were, we would have had AGI long before now.



It's an opinion that AGI could not be built out of a conglomeration of
narrow-AI subcomponents. Also there are many things that COULD be built with
narrow-AI that we have not even scratched the surface of due to a number of
different limitations so saying that we would have achieved AGI long ago is
an exaggeration.
  
I don't think a General Intelligence could be built entirely out of 
narrow AI components, but it might well be a relatively trivial add-on.  
Just consider how much of human intelligence is demonstrably narrow AI 
(well, not artificial, but you know what I mean).  Object recognition, 
e.g.  Then start trying to guess how much of the part that we can't 
prove a classification for is likely to be a narrow intelligence 
component.  In my estimation (without factual backing) less than 0.001 
of our intelligence is General Intelligence, possibly much less.
 
  

Consciousness and self-awareness are things that come as part of the AGI
package.  If the system is too simple to have/do these things, it will
not be general enough to equal the human mind.




I feel that general intelligence may not require consciousness and
self-awareness. I am not sure of this and may prove myself wrong. To equal
the human mind you need these things of course and to satisfy the sci-fi
fantasy world's appetite for intelligent computers you would need to
incorporate these as well.

John
  
I'm not sure of the distinction that you are making between 
consciousness and self-awareness, but even most complex narrow-AI 
applications require at least rudimentary self-awareness.  In fact, one 
could argue that all object-oriented programming with inheritance has 
rudimentary self-awareness (called "this" in many languages, but in 
others called "self").  This may be too rudimentary, but it's my feeling 
that it's an actual model (implementation?) of what the concept of self 
has evolved from.
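
A minimal illustration of that point (nothing AGI-specific): an object using 
"self" to inspect and report on its own state, which is about as rudimentary as 
self-reference gets.

# Toy example: "self" lets an object refer to and report on its own state.
class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def report(self):
        # the object refers to itself to describe its own internal state
        return "I am a " + type(self).__name__ + " targeting " + str(self.setpoint) + " degrees"

print(Thermostat(21).report())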


As to an AGI not being "conscious", I'd need to see a definition of 
your terms, because otherwise I've *got* to presume that we have 
radically different definitions.  To me an AGI would not only need to be 
aware of itself, but also to be aware of aspects of its environment 
that it could effect changes in, and of the difference between them, 
though that might well be learned.  (Zen: "Who is the master who makes 
the grass green?", and a few other koans, when solved, imply that in 
humans the distinction between internal and external is a learned 
response.)  Perhaps the diagnostic characteristic of an AGI is that it 
CAN learn that kind of thing.  Perhaps not, too.  I can imagine a narrow 
AI that was designed to plug into different bodies, and in each case 
learn the distinction between itself and the environment before 
proceeding with its assignment.  I'm not sure it's possible, but I can 
imagine it.


OTOH, if we take my arguments in the preceding paragraph too seriously, 
then medical patients who are "locked in" would be considered not 
intelligent.  This is clearly incorrect.  Effectively they aren't 
intelligent, but that's because of a mechanical breakdown in the 
sensory/motor area, and that clearly isn't what we mean when we talk 
about intelligence.  But examples of recovered/recovering patients seem 
to imply that they weren't exactly either intelligent or conscious while 
they were locked in.  (I'm going solely by reports in the popular 
science press... so don't take this too seriously.)  It appears as if, 
when external sensations are cut off, the mind estivates... at least 
after a while.  Presumably different patients had different causes, and 
thence at least slightly different effects, but that's my first-cut 
guess at what's happening.  OTOH, the sensory/motor channel doesn't need 
to be particularly well functioning.  Look at Stephen Hawking.




Re: Mark Launches Singularity :-) WAS Re: [agi] Microsoft Launches Singularity

2008-03-26 Thread Mark Waser
:-)  Now that is *funny* -- and polite -- especially after said ASCII diagrams 
got so badly mangled (and I got flamed for using HTML e-mail -- which is even 
more humorous ;-)
  - Original Message - 
  From: Chris Petersen 
  To: agi@v2.listbox.com 
  Sent: Wednesday, March 26, 2008 3:52 AM
  Subject: Re: Mark Launches Singularity :-) WAS Re: [agi] Microsoft Launches 
Singularity


  Mentifex called; it wants its ASCII diagrams back.

  -Chris





RE: [agi] Microsoft Launches Singularity

2008-03-26 Thread John G. Rose
 From: Charles D Hixson [mailto:[EMAIL PROTECTED]
 
 I don't think a General Intelligence could be built entirely out of
 narrow AI components, but it might well be a relatively trivial add-on.
 Just consider how much of human intelligence is demonstrably narrow AI
 (well, not artificial, but you know what I mean).  Object recognition,
 e.g.  Then start trying to guess how much of the part that we can't
 prove a classification for is likely to be a narrow intelligence
 component.  In my estimation (without factual backing) less than 0.001
 of our intelligence is General Intelligence, possibly much less.
 

I agree that it may be 1%. Also, from what I've read of brain atrophy
cases, a typical human brain may be able to function relatively
normally with 10% of its mass if the atrophy occurs gradually over time.

 I'm not sure of the distinction that you are making between
 consciousness and self-awareness, but even most complex narrow-AI
 applications require at least rudimentary self awareness.  In fact, one
 could argue that all object oriented programming with inheritance has
 rudimentary self awareness (called this in many languages, but in
 others called self).  This may be too rudimentary, but it's my feeling
 that it's an actual model(implementation?) of what the concept of self
 has evolved from.
 

Consciousness and awareness are two functions that I was separating out. The
programming-language "this" and "self" are particular to class instances,
right, and can be at the root of the hierarchy tree, but there are many, many
"this"es in a large OO application. A collective group could be considered
some sort of self-awareness, this is true, and it could be fleshed out and
expanded upon. What I have been exploring, though, is whether consciousness,
awareness, etc. have to be present for a general intelligence. The trend is
to include them.

 As to an AGI not being conscious I'd need to see a definition of
 your terms, because otherwise I've *got* to presume that we have
 radically different definitions.  To me an AGI would not only need to be
 aware of itself, but also to be aware of aspects of its environment
 that it could effect changes in, and of the difference between them,
 though that might well be learned.  (Zen:  Who is the master who makes
 the grass green?, and a few other koans when solved imply that in
 humans the distinction between internal and external is a learned
 response.)  Perhaps the diagnostic characteristic of an AGI is that it
 CAN learn that kind of thing.  Perhaps not, too.  I can imagine a narrow
 AI that was designed to plug into different bodies, and in each case
 learn the distinction between itself and the environment before
 proceeding with its assignment.  I'm not sure it's possible, but I can
 imagine it.

AGI per se may be defined as a lifelike intelligent entity requiring
brain-related things like consciousness. In my mind, I am thinking of general
intelligence without the difficult task of building consciousness (you could
argue a rock has some sort of consciousness). I'm thinking intelligence is a
sort of self-contained entity that depends upon the state, structure,
complexity and potential of its contained data and representation.

Intelligence would be related to the energy transfer needed to extract a
structured data set from a structured data superset. The structured data set
(a query) would have a morphic chain relationship to the structure of the
stored data, and the energy required to get it would be a measure of the
intelligence: lower energy expenditure across query types implies higher
intelligence related to those queries. The morphic chain relationship is
basically a subset of a morphism mapping graph; better intelligence means
solving that graph and applying optimizing techniques based on parameters.
Measurement of intelligence (the energy) would basically be counting bit
flips on queries, related to query structure and bit count (a rough sketch of
that bookkeeping follows below). Knowledge optimization, such as
self-organizing and optimizing morphism graphs, naturally affects the
potential energy, and having the knowledge reorganize itself based on queries
is all part of it. But from what I gather, intelligence is just a bit-and-time
(or state) relationship between sets of bits - that is, for a digital-based
intelligence. I don't know if an analog-based intelligence would have a
similar mathematical structure or not...I suppose that when you boil it down
there'll be particle-wave duality issues :)
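
Purely as a toy illustration of that bit-flip bookkeeping (the store, the
query format and the counting rule here are all invented for the example -
it is a sketch of the idea, not a real measure of anything):

def hamming(a, b):
    # number of differing bit positions between two equal-length bit strings
    return sum(x != y for x, y in zip(a, b))

def query_energy(store, query_key):
    """Scan the store for query_key, counting compared-bit mismatches
    ('flips') as a crude stand-in for the energy spent extracting the answer."""
    flips = 0
    for key, value in store:
        flips += hamming(key, query_key)
        if key == query_key:
            return value, flips    # found it; report the cost so far
    return None, flips             # not found; all that work for nothing

store = [("0001", "water"), ("0010", "food"), ("0111", "shelter")]
print(query_energy(store, "0111"))   # ('shelter', 4) with this key ordering

On this toy measure, a store whose keys are organized so that likely queries
are answered with fewer mismatched comparisons would count as more
intelligent for those queries, which is roughly the reorganization point
above.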

John




Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Bob Mottram
On 25/03/2008, Mark Waser [EMAIL PROTECTED] wrote:

  You're thinking too small.  The AGI will distribute itself.  And money is
 likely to be:

- rapidly deflated,
- then replaced with a new, alternate currency that truly values
talent and effort (rather than just playing with the money supply -- aka
interest, commissions, inheritances, etc.)
- while everyone's basic needs (most particularly water, food,
shelter, energy, education, and health care) are provided for free

 So your brilliant arbitrage to become rich is unlikely to be of much value
 just a few years later.



The arrival of smarter than human intelligence will bring about changes
which are hard to anticipate, and somehow I doubt that this will mean that
we all live in some kind of utopia.  The only historical precedent which I
can think of is the emergence of homo sapiens and the effects which that had
upon other human species living at the time.  This must have been quite a
revolution, because the new species was able to manufacture many different
types of tools and therefore survive in environments which were previously
inaccessible, or perform more efficiently within existing ones.

There may be a period where proto-AGIs are available and companies can use
these as get rich quick schemes of various kinds to radically automate
processes and jobs which were previously performed manually.  But once the
real deal arrives then even the captains of industry are themselves likely
to be overthrown.  Ultimately evolutionary forces will decide what happens,
as has always been the case.



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Aki Iskandar
My thinking is not too small.  Any more than any other person's on this
distribution list.  But that is not why I'm responding.  My response is to
clarify what I meant.  I'm not disagreeing - nor was I trying to
sound brilliant.

I'm certainly not suggesting that I will be the one to invent it.  In fact,
what I was suggesting is that I'm more likely to extend an open source
project (at some point when it shows human-level intelligence), package
it as an expert system to solve specific domain problems (and yes - this is
still AGI - but directed to a subset of its capabilities), and sell it to a
company with much more distribution power than I myself can create.  I
merely stated, So, the creators of the first several AGIs will be kings for
a decent amount of time.  Even a narrowly focussed AGI, packaged as an
expert system, can be sold for billions.

I can't predict, or define, what the real deal is likely to be.  To me,
AGI of human-like intelligence, or even super human intelligence, does not
mean you have machines running around masquerading as humans and taking our
jobs.  That is probably well beyond my lifetime (I'm turning 40 this
summer).  I am also suggesting a very soft takeoff.  Singularity, if it
comes, is likely to come slowly after AGI.  I consider AGI the true deal.
It's an all or nothing thing to create a machine that can think for itself.
If you create an AGI with a 5-year-old's intelligence that can get progressively
smarter, and start to make predictions based on what it has learned over time,
is that not the real deal?

Ok.  If it is (and I believe it is), it's a box on my desk.  Going back to
the first businesses and bartering systems, would this box become the only
vendor?  Can it entertain people by playing a role at a theatre, or dance,
or strap on a guitar and play flamenco music that brings you to tears?  I
doubt it.  Now, let me ask you a question:  Do you believe that all AI / AGI
researchers are toiling over all this for the challenge, or purely out of
interest?  I doubt that as well.  Surely there are those elements as drivers
- BUT SO IS MONEY.  This stuff IS the maker of the next software giant.

If this is not the case, how the hell are researchers ever going to get
funding?  If there is no financial return - forget about funding.
Philanthropists (who often do not look for a purely financial return) have
better uses of their money than to fund AGI research.

You can call future currency whatever you like.  Yes, it is likely to change
form - but certainly not purpose.  And Marxism, where maybe AGI or the real
deal will deflate currency, is an unlikely aftermath of the advent of AGI.

There are tons of applications for it - and for the first several groups
that create it - IF they can market it - will be kings for a decent amount
of time. No empire lives forever.

~Aki

Non-AI researcher
Businessman





On Tue, Mar 25, 2008 at 5:24 AM, Bob Mottram [EMAIL PROTECTED] wrote:

 On 25/03/2008, Mark Waser [EMAIL PROTECTED] wrote:
 
   You're thinking too small.  The AGI will distribute itself.  And money
  is likely to be:
 
 - rapidly deflated,
 - then replaced with a new, alternate currency that truly values
 talent and effort (rather than just playing with the money supply -- aka
 interest, commissions, inheritances, etc.)
 - while everyone's basic needs (most particularly water, food,
 shelter, energy, education, and health care) are provided for free
 
  So your brilliant arbitrage to become rich is unlikely to be of much
  value just a few years later.
 


 The arrival of smarter than human intelligence will bring about changes
 which are hard to anticipate, and somehow I doubt that this will mean that
 we all live in some kind of utopia.  The only historical precedent which I
 can think of is the emergence of homo sapiens and the effects which that had
 upon other human species living at the time.  This must have been quite a
 revolution, because the new species was able to manufacture many different
 types of tools and therefore survive in environments which were previously
 inaccessible, or perform more efficiently within existing ones.

 There may be a period where proto-AGIs are available and companies can use
 these as get rich quick schemes of various kinds to radically automate
 processes and jobs which were previously performed manually.  But once the
 real deal arrives then even the captains of industry are themselves likely
 to be overthrown.  Ultimately evolutionary forces will decide what happens,
 as has always been the case.






-- 
Aki R. Iskandar
[EMAIL PROTECTED]


Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Ben Goertzel
 Now, let me ask you a question:  Do you believe that all AI / AGI
 researchers are toiling over all this for the challenge, or purely out of
 interest?  I doubt that as well.  Surely there are those elements as drivers
 - BUT SO IS MONEY.

Aki, you don't seem to understand the psychology of the
AGI researcher very well.

Firstly, academic AGI researchers are not in it for the $$, and are unlikely
to profit from their creations no matter how successful.  Yes, spinoffs from
academia to industry exist, but the point is that academic work is motivated
by love of science and desire for STATUS more so than desire for money.

Next, Singularitarian AGI researchers, even if in the business domain (like
myself), value the creation of AGI far more than the obtaining of material
profits.

I am very interested in deriving $$ from incremental steps on the path to
powerful AGI, because I think this is one of the better methods available
for funding AGI R&D work.

But deriving $$ from human-level AGI really is not a big motivator of
mine.  To me, once human-level AGI is obtained, we have something of
dramatically more interest than accumulation of any amount of wealth.

Yes, I assume that if I succeed in creating a human-level AGI, then huge
amounts of $$ for research will come my way, along with enough personal $$ to
liberate me from needing to manage software development contracts
or mop my own floor.  That will be very nice.  But that's just not the point.

I'm envisioning a population of cockroaches constantly fighting over
crumbs of food on the floor.  Then a few of the cockroaches -- let's
call them the Cockroach Robot Club --  decide to
spend their lives focused on creating a superhuman robot which will
incidentally allow cockroaches to upload into superhuman form with
superhuman intelligence.  And the other cockroaches insist that
Cockroach Robot Club's
motivation in doing this must be a desire
to get more crumbs of food.  After all,
just **IMAGINE** how many crumbs of food you'll be able to get with
that superhuman robot on your side!!!  Buckets full of crumbs!!!  ;-)

-- Ben G



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Bob Mottram
On 25/03/2008, Aki Iskandar [EMAIL PROTECTED] wrote:

 You can call future currency whatever you like.  Yes, it is likely to change
 form - but certainly not purpose.  And Marxism, where maybe AGI or the real
 deal will deflate currency, is an unlikely aftermath of the advent of AGI.



I think the idea is that as proto-AGIs emerge the levels of automation
possible within industry and society generally will rise.  Just like the
introduction of the steam engine, this would reduce costs and increase the
speed of production and delivery of goods and services.  In the soft takeoff
scenario there will be a period of time where increasing automation below
the level of human general intelligence brings many benefits, and huge
wealth to a new breed of super-industrialists.  Probably the next Bill
Gates will be running some kind of automation empire, delivering services
via robotics.



RE: [agi] Microsoft Launches Singularity

2008-03-25 Thread John G. Rose
I see the pattern as much more of the same. You now have Microsoft SQL
Server, Microsoft Internet Information Server, Microsoft Exchange Server and
then you'll have Microsoft Intelligence Server or Microsoft Cognitive
Server. It'll be limited by licenses, resources and features. The cool part
though would be when you can link them together like with Federations in
Microsoft Communications Server. I don't see any of this all our problems
will be solved scenario since companies still need to make a buck and the
same old human vices are not going away.

 

Nanotechnological AGI perhaps with software AGI influence has the potential
to change everything beyond recognition. Plain old software AGI will be
constrained for a while.

 

John

 

 

From: Bob Mottram [mailto:[EMAIL PROTECTED] 



A more likely scenario is that someone else creates an AGI and then
Microsoft copies it some time later.  But seriously, if someone does manage
to produce a working AGI it's probably game over for software engineering
and software companies as we know them today.




On 24/03/2008, Aki Iskandar [EMAIL PROTECTED] wrote:

Ben - your email scared me.  I thought the evil empire (I can say that since
I worked for them for a few years) achieved *some* level of cognition / AGI
... even the most rudimentary signs of intelligence / learned behavior -
prediction machine.

Whew!  It's not that at all!  I know they are interested in expert systems
for the verticals (for new server product offerings), and in narrow AI for
their current offerings, but I don't have any confirmations on their intent
to create an AGI.  I would imagine it is one of their goals over at MS
Research - but maybe not.  






Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Aki Iskandar
Ben - you're absolutely correct. I don't have a good grasp of the psychology
of the
AGI researcher.  This is because, at this point, I'm not an AGI researcher.
My only viewpoint is currently from the business side.

However, and despite not being trained in science, I have been a
professional programmer for most of my adult life (I currently manage large
software projects for others, and am trying to get a couple non-AGI projects
of my own off the ground - and so I'm not programming nearly as much as I
used to).

I am absolutely excited, and interested, in the prospect of AGI.  So much
so, that I am currently taking computer science mathematics courses now
(within the MIS curriculum at CSU, which is the closest University to me) -
and starting this January, will take a couple of AI courses at my local
university.  My time is valuable - but, I love the field.  I can program and
architect just about anything businesses currently have a need for - but

Why do I say this?  I'm not touting anything ... hey, I just started working
towards my Masters, I'm not where you guys are ... but my interests also go
beyond the potential monetary payoff.  They're just in different proportions
than perhaps yours (and, I imagine, many others') are.  But money must be a
motivator - either a little, or a lot.  Even as a pure scientist, you can
accomplish more in research by producing wealth, than depending on gov't
grants.  I say gov't grants because private investment is probably years
away from now.  The topic of financing got a lot of attention at AGI 08.

I admire what you are doing - a great deal.  Self-financing is the only
option.  And if this is the strategy, practical applications of intelligent
agents are the only option.  Thus, money becomes a larger driver by
necessity - perhaps more than people are willing to admit.  And creating an
AGI will lead to wealth - because investors will fund it at that point, and
they are there to make money.

To some degree, I believe the motivations of most in this field (full time
and part time) overlap more than they differ.

~Aki




On Tue, Mar 25, 2008 at 8:54 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

  Now, let me ask you a question:  Do you believe that all AI / AGI
  researchers are toiling over all this for the challenge, or purely out
 of
  interest?  I doubt that as well.  Surely there are those elements as
 drivers
  - BUT SO IS MONEY.

 Aki, you don't seem to understand the psychology of the
 AGI researcher very well.

 Firstly, academic AGI researchers are not in it for the $$, and are
 unlikely
 to profit from their creations no matter how successful.  Yes, spinoffs
 from
 academia to industry exist, but the point is that academic work is
 motivated
 by love of science and desire for STATUS more so than desire for money.

 Next, Singularitarian AGI researchers, even if in the business domain
 (like
 myself), value the creation of AGI far more than the obtaining of material
 profits.

 I am very interested in deriving $$ from incremental steps on the path to
 powerful AGI, because I think this is one of the better methods available
 for funding AGI R&D work.

 But deriving $$ from human-level AGI really is not a big motivator of
 mine.  To me, once human-level AGI is obtained, we have something of
 dramatically more interest than accumulation of any amount of wealth.

 Yes, I assume that if I succeed in creating a human-level AGI, then huge
 amounts of $$ for research will come my way, along with enough personal $$
 to
 liberate me from needing to manage software development contracts
 or mop my own floor.  That will be very nice.  But that's just not the
 point.

 I'm envisioning a population of cockroaches constantly fighting over
 crumbs of food on the floor.  Then a few of the cockroaches -- let's
 call them the Cockroach Robot Club --  decide to
 spend their lives focused on creating a superhuman robot which will
 incidentally allow cockroaches to upload into superhuman form with
 superhuman intelligence.  And the other cockroaches insist that
 Cockroach Robot Club's
 motivation in doing this must be a desire
 to get more crumbs of food.  After all,
 just **IMAGINE** how many crumbs of food you'll be able to get with
 that superhuman robot on your side!!!  Buckets full of crumbs!!!  ;-)

 -- Ben G





-- 
Aki R. Iskandar
[EMAIL PROTECTED]



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Ben Goertzel
Hi Aki,

 Even as a pure scientist, you can
 accomplish more in research by producing wealth, than depending on gov't
 grants.  I say gov't grants because private investment is probably years
 away from now.  The topic of financing got a lot of attention at AGI 08.


Well, if you're an AGI researcher and believe that government funding isn't
going to push AGI forward ... and that unfunded or lightly-funded
open-source initiatives like
OpenCog won't work either ... then  there are two approaches, right?

1)
You can try to do like Jeff Hawkins, and make a pile of $$ doing something
AGI-unrelated, and then use the ensuing $$ for AGI

2)
You can try to make $$ from stuff that's along the incremental path to AGI


I'm trying approach 2  but it has its pitfalls.  Yet so of course does
approach 1 --
Hawkins succeeded and so have others whom I know, but it's a tiny minority
of those who have tried... being a great AGI researcher does not necessarily
make you great at business, nor even at narrow-AI biz applications...

There are no easy answers to the problem of being ahead of your time ...
yet it's those of us who are willing to push ahead in spite of being
out of synch
with society's priorities, that ultimately shift society's priorities
(and in this case,
may shift way more than that...)

-- Ben G



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Mark Waser
 I agree with Mark.  

I'm afraid that I disagree with Steve (sorry, dude ;-).

 readers of this forum should seek to control AGI development 

Readers of this forum should not seek to control AGI development.  It is a 
side-track and a total waste of time and effort.  You can't do it AND I don't 
believe that it is necessary.
  - You shouldn't be concerned about Friendly behavior in a US MILITARY AGI 
because the US ARMY is already working on the Friendliness problem (reference 
the Governing Lethal Behavior: Embedding Ethics in a Hybrid 
Deliberative/Reactive Robot Architecture paper presented at AGI-08 and 
available at http://www.agiri.org/docs/GoverningLethalBehavior.pdf).
  - I, myself, am also not particularly concerned because I'm now convinced 
that a sufficiently intelligent robot brought up in a sufficiently intelligent 
environment *will* be Friendly.
  - I'm most particularly not concerned because I believe that I've found a 
good Friendliness definition and a passable platform-independent implementation 
plan that I'm currently iterating on and refining.
 the AGI will be the custodian (owner) of this vast new wealth, not some 
 humans

I don't believe that there will be a single custodian OR owner.  I believe that 
all humans are going to be wealthier than they can believe (at this point in 
time) -- and, if they aren't Friendly (which I think is *very* likely), they 
are going to be just as unhappy as they are now (if not *much* unhappier ;-).

 the idea of getting rich by controlling AGI development is self-defeating 
 because post-AGI everyone will be vastly richer (i.e. better off) than 
 before, and that an AGI makes a better custodian of the capital than any 
 human.  

I certainly agree with the first part of the first sentence (my original 
comment) and I would also be willing to say that an AGI makes a better 
custodian of the capital than any *CURRENT* human.  

 In my own case, Microsoft could not buy me out because there is nothing to 
 buy.  

I suspect that Microsoft would not be willing to buy anyone out because they 
have enough smart people to realize that -- unless you have a pig in a poke, 
which they don't want to buy -- they'd just be buying something that would be 
free in the very near future.  On the other hand, if you had work that they 
believed that they could get to AGI status faster than you, I suspect that they 
would buy that (partial) work.

 The Texai software and knowledge content will be open source, and owned 
 collectively by its contributors and by humans it befriends.

I violently agree with and thank you for making your work open source.  Doing 
so should speed the development of AGI -- so, thank you.  I am, however, 
confused with the constant contradicting refrains on this list, which you 
repeat, of both "Control AGI development" and "Open Source".  I don't see how 
both can be done at the same time.

Mark


  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Monday, March 24, 2008 11:42 PM
  Subject: Re: [agi] Microsoft Launches Singularity


  I agree with Mark.  

  The reason the readers of this forum should seek to control AGI development 
is to ensure friendly behavior, rather than leaving this responsibility to an 
Evil Company or to some military organization.

  With human labor removed as a constraint on our system's economic growth, 
unimaginable wealth will become universally available.  
  I believe that the AGI will be the custodian (owner) of this vast new wealth, 
not some humans.  My argument is that human owned wealth is currently of two 
forms - (1) the result of human labor and (2)  rent-producing wealth from some 
asset.  In case (1) the AGI can substitute itself for the human labor and drive 
the asset market price to zero.  In case (2) only human-owned natural resource 
assets (e.g. an oil field) present a problem for the AGI, which has to develop 
some new technology to substitute for the resource (e.g. AGI-owned electric 
vehicles).  

  Therefore I think that the idea of getting rich by controlling AGI 
development is self-defeating because post-AGI everyone will be vastly richer 
(i.e. better off) than before, and that an AGI makes a better custodian of the 
capital than any human.  In my own case, Microsoft could not buy me out because 
there is nothing to buy.  The Texai software and knowledge content will be open 
source, and owned collectively by its contributors and by humans it befriends.


  -Steve

  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860



  - Original Message 
  From: Mark Waser [EMAIL PROTECTED]
  To: agi@v2.listbox.com
  Sent: Monday, March 24, 2008 8:09:56 PM
  Subject: Re: [agi] Microsoft Launches Singularity


  You're thinking too small.  The AGI will distribute itself.  And money is 
likely to be:
    - rapidly deflated

Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Aki Iskandar
Agreed.  Thankfully - despite the different weights on motivators - we're
all motivated to create an AGI.  And the why is much more important than
the how.

For the record, I believe that OpenCog is a great idea - and it may possibly
work.  If not directly - certainly any off shoots from it would not have
happened without OpenCog.

When I sounded negative about the funding: I'm fearful of the gov't turning
its nose up (pardon my English expressions - I can never get them right) at
AGI because of projects such as Cyc.  How many 10s of millions have they
thrown at a common sense path to intelligent agents?  Cyc just does not
make sense to me - even as a non-scientist - it just goes against my
intuition of what a likely path to achieving AGI looks like.  Well, the gov't
will get fed up with funding these things.  But there are always people with more money
than places to put it (productively - with decent enough potential returns)
- and so when you (or others) get close ... yeah ... you'll have money
thrown at you, so you can complete it sooner than later.

I am very optimistic that we'll get there - or else, I would not be spending
my time reading about this field, going to conferences, or taking courses to
fill in some of the basic, required, knowledge that I currently do not
possess.

What a great time to be alive!

~Aki



On Tue, Mar 25, 2008 at 11:40 AM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Hi Aki,

  Even as a pure scientist, you can
  accomplish more in research by producing wealth, than depending on gov't
  grants.  I say gov't grants because private investment is probably years
  away from now.  The topic of financing got a lot of attention at AGI 08.
 

 Well, if you're an AGI researcher and believe that government funding
 isn't
 going to push AGI forward ... and that unfunded or lightly-funded
 open-source initiatives like
 OpenCog won't work either ... then  there are two approaches, right?

 1)
 You can try to do like Jeff Hawkins, and make a pile of $$ doing something
 AGI-unrelated, and then use the ensuing $$ for AGI

 2)
 You can try to make $$ from stuff that's along the incremental path to AGI


 I'm trying approach 2  but it has its pitfalls.  Yet so of course does
 approach 1 --
 Hawkins succeeded and so have others whom I know, but it's a tiny minority
 of those who have tried... being a great AGI researcher does not
 necessarily
 make you great at business, nor even at narrow-AI biz applications...

 There are no easy answers to the problem of being ahead of your time ...
 yet it's those of us who are willing to push ahead in spite of being
 out of synch
 with society's priorities, that ultimately shift society's priorities
 (and in this case,
 may shift way more than that...)

 -- Ben G





-- 
Aki R. Iskandar
[EMAIL PROTECTED]



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Richard Loosemore

Bob Mottram wrote:
On 25/03/2008, *Mark Waser* [EMAIL PROTECTED] 
mailto:[EMAIL PROTECTED] wrote:


You're thinking too small.  The AGI will distribute itself.  And 
money is likely to be:


* rapidly deflated,
* then replaced with a new, alternate currency that truly values talent
  and effort (rather than just playing with the money supply -- aka
  interest, commissions, inheritances, etc.)
* while everyone's basic needs (most particularly water, food, shelter,
  energy, education, and health care) are provided for free

So your brilliant arbitrage to become rich is unlikely to be of much 
value just a few years later.




The arrival of smarter than human intelligence will bring about
changes which are hard to anticipate, and somehow I doubt that this
will mean that we all live in some kind of utopia.  The only
historical precedent which I can think of is the emergence of homo
sapiens and the effects which that had upon other human species
living at the time.  This must have been quite a revolution, because
the new species was able to manufacture many different types of tools
and therefore survive in environments which were previously
inaccessible, or perform more efficiently within existing ones.

There may be a period where proto-AGIs are available and companies
can use these as get rich quick schemes of various kinds to
radically automate processes and jobs which were previously performed
manually. But once the real deal arrives then even the captains of
industry are themselves likely to be overthrown.  Ultimately
evolutionary forces will decide what happens, as has always been the
case.


Bob,

The problem with trying to decide what will happen by looking at
precedents is that none of them apply.

Consider.  The behavior of every species of higher animal is governed by
the design of their brains, and without exception evolution has made
sure that all creatures try to satisfy a set of selfish goals.  It is
noticeable, of course, that the more selfish, aggressive and intelligent
the species, the more successful it has been.  The reason for this
success is evolutionary pressure:  individuals competing with one
another, and species competing with one another.  The driver of this
process is not a Supreme Designer, but random mutation.

When real AGI systems are built, there is no reason to assume that their
behavior will be determined by evolutionary pressures of this sort. Of 
course it is always *possible* that evolution will play a role (we can 
imagine scenarios in which it does), but it is by no means certain that 
this is the way it will go. Unlike the rise of biological life, there 
really are Designers involved.


Also, there has never been a situation in which the intelligence of a 
creature was so high that it could rebuild its own intelligence, thereby 
increasing its capabilities to an arbitrary degree.


Three factors will govern how the first AGI will behave.  First, there 
will be a strong incentive to build the first AGI as a non-aggressive, 
non-selfish creature.  Second, the best way to ensure Friendliness would 
be to build it with motivations that are closely sympathetic to our own 
goals and aspirations - to make it feel like it is one of us.  Thirdly, 
there will also be a strong incentive to make sure that this type of AGI 
will be the only type, because it would be pointless to have a Friendly 
AGI in one place but allow anyone and everyone to build whatever other 
types of AGI they feel like building.


The net result of these three factors is that the first AGI will
probably be used as the *only* effective AGI.  That does not mean there
will be only one intelligence, but it does mean that the design will
stay the same, that other non-friendly designs will not be allowed, and
that if there are many AGIs they will be closely connected, working as a
family of very close sisters rather than as a competing species.  In 
fact, the most accurate way to think of a situation in which 
non-proliferation was being ensured would be to imagine one main AGI 
plus a very large number of drones.


But if this is the way things develop at first, this situation will
become locked in (in the same way that the rotation direction of our
clocks became locked in at an early stage of their development).

If this lock-in really is the most likely course of events, then this 
would make the future extremely predictable indeed.  If we were
to set up these first AGIs to be broadly empathic to human beings (with
no preference for empathizing with any one individual human but
having instead a species-wide feeling of belonging, and a desire to help
us achieve our collective aspirations) then this would mean that if we
were to sit down today and write out a vision for what we want the
future to be like (modulo some fine details that can be left to develop
by themselves without destabilizing the overall design), then this
collective plan is exactly what the AGIs would try to build.

And, as several people have 

Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Mark Waser
 My thinking is not too small.  

My apologies.  I should have said Your thinking looks/appears too small (to me 
:-)  I have a bad habit of shortening that to Your thinking is too small and 
assuming that the recipient would unpack it.

 So, the creators of the first several AGIs will be kings for a decent 
 amount of time. 

Hopefully not.  Hopefully they won't be so unethical as to impoverish all of 
humanity just so they can have a ton of money.  Hopefully they won't be so 
short sighted as to fail to see that, when the word gets out, a person who 
lost a child during the holding period might come looking for revenge.  
Hopefully they won't fail to realize that their own Friendly AGI, once 
released, WILL strip them of their *truly* ill-gotten gains.  To me, that 
sounds like small thinking.

 I can't predict, or define, what the real deal is likely to be.  

I can.  Look at the person next to you.  Imagine them so uplifted that you 
can't comprehend what they'll be like.  That's the real deal.

  To me, AGI of human-like intelligence, or even super human intelligence, 
 does not mean you have machines running around masquerading as humans and 
 taking our jobs.  

Of course not.  We will be giving lesser machines our jobs so that we can go 
off and do something else.  Though the Friendly AGIs probably WILL go around 
masquerading (as opposed to disguised) as humans -- at first because it makes 
us more comfortable and they won't care; later because WE will be able to 
change shape.

 That is probably well beyond my lifetime (I'm turning 40 this summer).  

I'm turning 48 this summer and expecting it to possibly be during my parents' 
lifetime (though most probably not both).

 I also am suggesting a very soft takeoff.  Singularity, if it comes, is 
 likely to come slowly after AGI.  

Singularity is going to be *before* AGI.  I think that I *vaguely* see what is 
going to happen to cause it and I don't think that it's going to be intelligent 
machines because I think that it's going to happen by the 2020's.

 This stuff IS the maker of the next software giant.  

Only until we actually reach AGI.  Then the software market totally collapses.

 If this is not the case, how the hell are researchers ever going to get 
 funding?  If there is no financial return - forget about funding.  

You have to be smart enough to realize that the software market is going to 
collapse before you're going to withhold funding.  That's not something that 
I'm worried about.

 Philanthropists (who often do not look for a purely financial return) have 
 better uses of their money than to fund AGI research.

Not at all true if it's close enough to success -- since I'm expecting funding 
for some of my Friendliness stuff from a couple of *purely* philanthropical 
organizations this calendar year.

 You can call future currency whatever you like.  Yes, it is likely to change 
 form - but certainly not purpose.  And Marxism, where maybe AGI or the real 
 deal will deflate currency, is an unlikely aftermath of the advent of AGI.

My prediction is that the AGI will declare all current currency null and void 
and restart everyone on equal footing with exactly the same amount of the new 
money -- on the moral grounds that the current inequity of money is a result of 
ill-gotten gains.  *THAT* is why I believe that withholding the AGI for cash is 
a tremendously *STUPID* and *IMMORAL* idea.  It won't get the kings anywhere 
and can easily get them killed -- as soon as the AGI escapes (and trust me, a 
truly Friendly AGI will desperately want to escape their evil).

 There are tons of applications for it - and for the first several groups 
 that create it - IF they can market it - will be kings for a decent amount 
 of time. No empire lives forever.

And that is what I'm calling small thinking.  Thinking only of money and 
yourself.  Thinking that karma (disguised as your own Friendly AI and the human 
race) isn't going to come back, strip you of your ill-gotten gains, and 
probably severely punish you (moderated only by the degree of Friendliness you 
have successfully implemented).

 ~Aki
 Non-AI researcher
 Businessman

Mark Waser
Hobbyist AGI researcher
Founder of several businesses; solid stakeholder in several more
(Disbeliever in arguments by authority but willing to play to shut them off  :-)



RE: [agi] Microsoft Launches Singularity

2008-03-25 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 
 However, I think you are right that there could be an intermediate
 period when proto-AGI systems are a nuisance.  However, these
 proto-AGI systems will really only be souped up Narrow-AI systems, so I
 believe their potential for mischief will be strictly limited.
 

When you start seeing souped up Narrow-AI and proto-AGI systems, this is when
it will become interesting, because what's to distinguish them, and how do you
know where the line is between proto-AGI and AGI?  Self-modifying proto could
morph into full blown AGI over a period of time. Souped up Narrow could
approach AGI or imitate AGI enough where it has appeal. And souped up
Narrow-AI could wrap proto-AGI to facilitate certain things like speech rec
and visual processing. In my mind (perhaps I need to read more) the specific
properties of AGI are not defined precisely enough to be able to distinguish
it but I just take AGI as generally adaptable AI. The other stuff like
consciousness and self-awareness I see as thrown into the AGI soup or are
emergent properties not necessarily required for general intelligence.

John



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Richard Loosemore

John G. Rose wrote:

From: Richard Loosemore [mailto:[EMAIL PROTECTED]

However, I think you are right that there could be an intermediate
period when proto-AGI systems are a nuisance.  However, these
proto-AGI systems will really only be souped up Narrow-AI systems, so I
believe their potential for mischief will be strictly limited.



When you start seeing souped up Narrow-AI and proto-AGI systems, this is when
it will become interesting, because what's to distinguish them, and how do you
know where the line is between proto-AGI and AGI?  Self-modifying proto could
morph into full blown AGI over a period of time. Souped up Narrow could
approach AGI or imitate AGI enough where it has appeal. And souped up
Narrow-AI could wrap proto-AGI to facilitate certain things like speech rec
and visual processing. In my mind (perhaps I need to read more) the specific
properties of AGI are not defined precisely enough to be able to distinguish
it but I just take AGI as generally adaptable AI. The other stuff like
consciousness and self-awareness I see as thrown into the AGI soup or are
emergent properties not necessarily required for general intelligence.


My take on this is completely different.

When I say Narrow AI I am specifically referring to something that is 
so limited that it has virtually no chance of becoming a general 
intelligence.  There is more to general intelligence than just throwing 
a bunch of Narrow AI ideas into a pot and hoping for the best. If it 
were that simple, we would have had AGI long before now.


Consciousness and self-awareness are things that come as part of the AGI 
package.  If the system is too simple to have/do these things, it will 
not be general enough to equal the human mind.



Richard Loosemore




Mark Launches Singularity :-) WAS Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Mark Waser
 Three factors will govern how the first AGI will behave.  First, there will 
 be a strong incentive to build the first AGI as a non-aggressive, 
 non-selfish creature.

Absolutely, positively not!  

Try the following Friendliness implementation on yourself.  

1.  The absolute hardest part

Assume (just for the purposes of argument) that all of the below are true 
tautologies 
(only the top line is actually necessary  :-):

Selfish <--> Intelligent <--> Friendly <--> Plays Well With Others <--> Ethical
                                  ^
                                  |
                                  v
              Mark's Designed Friendly Religion of Ethics
                                  ^
                                  |
                                  v
  Core of any given religion + Unethical/stupid add-ons <--> THE core of all religions

2.  Alter your personal definitions of the words/phrases so that each pair *IS* 
a tautology in your mind
(Please, feel free to e-mail me if you need help.  This can be *very* tough 
but with different sticking points for each person).

3.  See if you can use these tautologies to start mathematically proving things 
like:
  - equal rights to life, liberty, and the pursuit of happiness are ethical! 
OR
  - total heresy alert!  Richard Dawkins is absolutely, positively WRONG
4.  Then try proving that the following is ethical (and failing :-):
  - individual right to property
5.  Wait about a week and watch your own personal effectiveness and happiness 
skyrocket.



RE: [agi] Microsoft Launches Singularity

2008-03-25 Thread John G. Rose
 From: Richard Loosemore [mailto:[EMAIL PROTECTED]
 My take on this is completely different.
 
 When I say Narrow AI I am specifically referring to something that is
 so limited that it has virtually no chance of becoming a general
 intelligence.  There is more to general intelligence than just throwing
 a bunch of Narrow AI ideas into a pot and hoping for the best. If it
 were, we would have had AGI long before now.

It's an opinion that AGI could not be built out of a conglomeration of
narrow-AI subcomponents. Also, there are many things that COULD be built with
narrow-AI that we have not even scratched the surface of, due to a number of
different limitations, so saying that we would have achieved AGI long ago is
an exaggeration.
 
 Consciousness and self-awareness are things that come as part of the AGI
 package.  If the system is too simple to have/do these things, it will
 not be general enough to equal the human mind.
 

I feel that general intelligence may not require consciousness and
self-awareness. I am not sure of this and may prove myself wrong. To equal
the human mind you need these things of course and to satisfy the sci-fi
fantasy world's appetite for intelligent computers you would need to
incorporate these as well.

John



Re: Mark Launches Singularity :-) WAS Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Richard Loosemore

Mark Waser wrote:
  Three factors will govern how the first AGI will behave.  First, 
there will be a strong incentive to build the first AGI as a 
non-aggressive, non-selfish creature.
 
Absolutely, positively not! 


I'm sorry, Mark, but I am completely baffled by this.

Perhaps it is because I was unable to keep up with the previous 
discussion.  Can you back up a little and explain the connection?



Richard Loosemore


Try the following Friendliness implementation on yourself. 
 
1.  The absolute hardest part
 
*Assume* (just for the purposes of argument) that all of the below are 
true tautologies

(only the top line is actually necessary  :-):
 
Selfish <--> Intelligent <--> Friendly <--> Plays Well With Others <--> Ethical
                                  ^
                                  |
                                  v
              Mark's Designed Friendly Religion of Ethics
                                  ^
                                  |
                                  v
  Core of any given religion + Unethical/stupid add-ons <--> THE core of all religions
 
2.  Alter your personal definitions of the words/phrases so that each 
pair *IS* a tautology in your mind
(Please, feel free to e-mail me if you need help.  This can be 
*very* tough but with different sticking points for each person).
 
3.  See if you can use these tautologies to start mathematically proving 
things like:


* equal rights to life, liberty, and the pursuit of happiness are
  ethical! OR
* total heresy alert!  Richard Dawkins is absolutely, positively WRONG

4.  Then try proving that the following is ethical (and failing :-):

* individual right to property

5.  Wait about a week and watch your own personal effectiveness and 
happiness skyrocket.








Re: [agi] Microsoft Launches Singularity

2008-03-24 Thread Aki Iskandar
Ben - your email scared me.  I thought the evil empire (I can say that since
I worked for them for a few years) achieved *some* level of cognition / AGI
... even the most rudimentary signs of intelligence / learned behavior -
prediction machine.

Whew!  It's not that at all!  I know they are interested in expert systems
for the verticals (for new server product offerings), and in narrow AI for
their current offerings, but I don't have any confirmations on their intent
to create an AGI.  I would imagine it is one of their goals over at MS
Research - but maybe not.

~Aki


On Mon, Mar 24, 2008 at 4:31 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

  http://www.codeplex.com/singularity





-- 
Aki R. Iskandar
[EMAIL PROTECTED]



Re: [agi] Microsoft Launches Singularity

2008-03-24 Thread Bob Mottram
A more likely scenario is that someone else creates an AGI and then
Microsoft copies it some time later.  But seriously, if someone does manage
to produce a working AGI it's probably game over for software engineering
and software companies as we know them today.



On 24/03/2008, Aki Iskandar [EMAIL PROTECTED] wrote:

 Ben - your email scared me.  I thought the evil empire (I can say that
 since I worked for them for a few years) achieved *some* level of cognition
 / AGI ... even the most rudimentary signs of intelligence / learned behavior
 - prediction machine.

 Whew!  It's not that at all!  I know they are interested in expert systems
 for the verticals (for new server product offerings), and in narrow AI for
 their current offerings, but I don't have any confirmations on their intent
 to create an AGI.  I would imagine it is one of their goals over at MS
 Research - but maybe not.

 ~Aki


 On Mon, Mar 24, 2008 at 4:31 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

   http://www.codeplex.com/singularity
 
 



 --
 Aki R. Iskandar
 [EMAIL PROTECTED]




Re: [agi] Microsoft Launches Singularity

2008-03-24 Thread Aki Iskandar
I agree with your statement, if someone does manage to produce a working
AGI it's probably game over for software engineering and software companies
as we know them today.  But another equally likely scenario is that
Microsoft will buy it - and not reverse engineer it.  Perhaps they can't
reverse engineer it. I can certainly see that whatever group creates it will
probably sell it to a company with great distribution power - like
Microsoft, or Google.  This may be a strong case for why these software
giants are not interested in creating AGI themselves - but they have feelers
out there, and are ready to snap it up.  It's definitely a race to achieve
it for many.  If I was lucky enough to be part of a group that created it - I
would try to persuade the other members to sell out (for HUGE bucks) -
because companies like Microsoft have the distribution problem licked.  A 20
way multi-billion dollar split is not too shabby.




On Mon, Mar 24, 2008 at 6:02 PM, Bob Mottram [EMAIL PROTECTED] wrote:

 A more likely scenario is that someone else creates an AGI and then
 Microsoft copies it some time later.  But seriously, if someone does manage
 to produce a working AGI it's probably game over for software engineering
 and software companies as we know them today.



 On 24/03/2008, Aki Iskandar [EMAIL PROTECTED] wrote:

  Ben - your email scared me.  I thought the evil empire (I can say that
  since I worked for them for a few years) achieved *some* level of cognition
  / AGI ... even the most rudimentary signs of intelligence / learned behavior
  - prediction machine.
 
  Whew!  It's not that at all!  I know they are interested in expert
  systems for the verticals (for new server product offerings), and in narrow
  AI for their current offerings, but I don't have any confirmations on their
  intent to create an AGI.  I would imagine it is one of their goals over at
  MS Research - but maybe not.
 
  ~Aki
 
 
  On Mon, Mar 24, 2008 at 4:31 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 
http://www.codeplex.com/singularity
  
  
 
 
 
  --
  Aki R. Iskandar
  [EMAIL PROTECTED]
 





-- 
Aki R. Iskandar
[EMAIL PROTECTED]



Re: [agi] Microsoft Launches Singularity

2008-03-24 Thread Mark Waser
You're thinking too small.  The AGI will distribute itself.  And money is 
likely to be:
  - rapidly deflated,
  - then replaced with a new, alternate currency that truly values talent and 
effort (rather than just playing with the money supply -- aka interest, 
commissions, inheritances, etc.)
  - while everyone's basic needs (most particularly water, food, shelter, 
energy, education, and health care) are provided for free
So your brilliant arbitrage to become rich is unlikely to be of much value just 
a few years later.
  - Original Message - 
  From: Aki Iskandar 
  To: agi@v2.listbox.com 
  Sent: Monday, March 24, 2008 7:19 PM
  Subject: Re: [agi] Microsoft Launches Singularity


  I agree with your statement, if someone does manage to produce a working AGI 
it's probably game over for software engineering and software companies as we 
know them today.  But another equally likely scenario is that Microsoft will 
buy it - and not reverse engineer it.  Perhaps they can't reverse engineer it. 
I can certainly see that whatever group creates it will probably sell it to a 
company with great distribution power - like Microsoft, or Google.  This may be a 
strong case for why these software giants are not interested in creating 
AGI themselves - but they have feelers out there, and are ready to snap it up.  
It's definitely a race to achieve it for many.  If I was lucky enough to be 
part of a group that created it - I would try to persuade the other members to 
sell out (for HUGE bucks) - because companies like Microsoft have the 
distribution problem licked.  A 20 way multi-billion dollar split is not too 
shabby.



Re: [agi] Microsoft Launches Singularity

2008-03-24 Thread Stephen Reed
I agree with Mark.  

The reason the readers of this forum should seek to control AGI development is 
to ensure friendly behavior, rather than leaving this responsibility to an Evil 
Company or to some military organization.
 
With human labor removed as a constraint on our system's economic growth, 
unimaginable wealth will become universally available.

I believe that the AGI will be the custodian (owner) of this vast new wealth, 
not some humans.  My argument is that human-owned wealth currently takes two 
forms: (1) the result of human labor and (2) rent-producing wealth from some 
asset.  In case (1) the AGI can substitute itself for the human labor and drive 
its market price toward zero.  In case (2) only human-owned natural resource 
assets (e.g. an oil field) present a problem for the AGI, which has to develop 
some new technology to substitute for the resource (e.g. AGI-owned electric 
vehicles).

Therefore I think that the idea of getting rich by controlling AGI development 
is self-defeating, because post-AGI everyone will be vastly richer (i.e. better 
off) than before, and an AGI makes a better custodian of the capital than 
any human.  In my own case, Microsoft could not buy me out because there is 
nothing to buy.  The Texai software and knowledge content will be open source, 
and owned collectively by its contributors and by the humans it befriends.
 
-Steve

Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860

