Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Bob Mottram
On 25/03/2008, Mark Waser [EMAIL PROTECTED] wrote:

  You're thinking too small.  The AGI will distribute itself.  And money is
 likely to be:

- rapidly deflated,
- then replaced with a new, alternate currency that truly values
talent and effort (rather than just playing with the money supply -- aka
interest, commissions, inheritances, etc.)
- while everyone's basic needs (most particularly water, food,
shelter, energy, education, and health care) are provided for free

 So your brilliant arbitrage to become rich is unlikely to be of much value
 just a few years later.



The arrival of smarter than human intelligence will bring about changes
which are hard to anticipate, and somehow I doubt that this will mean that
we all live in some kind of utopia.  The only historical precedent which I
can think of is the emergence of homo sapiens and the effects which that had
upon other human species living at the time.  This must have been quite a
revolution, because the new species was able to manufacture many different
types of tools and therefore survive in environments which were previously
inaccessible, or perform more efficiently within existing ones.

There may be a period where proto-AGIs are available and companies can use
these as get rich quick schemes of various kinds to radically automate
processes and jobs which were previously performed manually.  But once the
real deal arrives then even the captains of industry are themselves likely
to be overthrown.  Ultimately evolutionary forces will decide what happens,
as has always been the case.



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Aki Iskandar
My thinking is not too small - any more than any other person's on this
distribution list.  But that is not why I am responding.  My response is to
clarify what I meant.  I'm not disagreeing - nor was I trying to sound
brilliant.

I'm certainly not suggesting that I will be the one to invent it.  In fact,
what I was suggesting is that I'm more likely to extend an open source
project (at some point when it shows human-level intelligence), and package
it as an expert system to solve specific domain problems (and yes - this is
still AGI - but directed to a subset of its capabilities) and sell it to a
company with much more distribution power than I myself can create.   I
merely stated, So, the creators of the first several AGIs will be kings for
a decent amount of time.  Even a narrowly focused AGI, as an expert system,
can be sold for billions.

I can't predict, or define, what the real deal is likely to be.  To me,
AGI of human-like intelligence, or even super-human intelligence, does not
mean you have machines running around masquerading as humans and taking our
jobs.  That is probably well beyond my lifetime (I'm turning 40 this
summer).  I am also suggesting a very soft takeoff.  The Singularity, if it
comes, is likely to come slowly after AGI.  I consider AGI the true deal.
It's an all or nothing thing to create a machine that can think for itself.
If you create an AGI with the intelligence of a five-year-old, and it can
get progressively smarter and start to make predictions based on what it
has learned over time, is that not the real deal?

Ok.  If it is (and I believe it is), it's a box on my desk.  Going back to
the first businesses and bartering systems, would this box become the only
vendor?  Can it entertain people by playing a role at a theatre, or dance,
or strap on a guitar and play flamenco music that brings you to tears?  I
doubt it.  Now, let me ask you a question:  Do you believe that all AI / AGI
researchers are toiling over all this for the challenge, or purely out of
interest?  I doubt that as well.  Surely there are those elements as drivers
- BUT SO IS MONEY.  This stuff IS the maker of the next software giant.

If this is not the case, how the hell are researchers ever going to get
funding?  If there is no financial return - forget about funding.
Philanthropists (who often do not look for a purely financial return) have
better uses of their money than to fund AGI research.

You can call future currency whatever you like.  Yes, it is likely to change
form - but certainly not purpose.  And Marxism, where perhaps AGI or the real
deal will deflate currency, is an unlikely aftermath of the advent of AGI.

There are tons of applications for it - and the first several groups
that create it - IF they can market it - will be kings for a decent amount
of time.  No empire lives forever.

~Aki

Non-AI researcher
Businessman






Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Ben Goertzel
 Now, let me ask you a question:  Do you believe that all AI / AGI
 researchers are toiling over all this for the challenge, or purely out of
 interest?  I doubt that as well.  Surely there are those elements as drivers
 - BUT SO IS MONEY.

Aki, you don't seem to understand the psychology of the
AGI researcher very well.

Firstly, academic AGI researchers are not in it for the $$, and are unlikely
to profit from their creations no matter how successful.  Yes, spinoffs from
academia to industry exist, but the point is that academic work is motivated
by love of science and desire for STATUS more so than desire for money.

Next, Singularitarian AGI researchers, even if in the business domain (like
myself), value the creation of AGI far more than the obtaining of material
profits.

I am very interested in deriving $$ from incremental steps on the path to
powerful AGI, because I think this is one of the better methods available
for funding AGI R&D work.

But deriving $$ from human-level AGI really is not a big motivator of
mine.  To me, once human-level AGI is obtained, we have something of
dramatically more interest than the accumulation of any amount of wealth.

Yes, I assume that if I succeed in creating a human-level AGI, then huge
amounts of $$ for research will come my way, along with enough personal $$ to
liberate me from needing to manage software development contracts
or mop my own floor.  That will be very nice.  But that's just not the point.

I'm envisioning a population of cockroaches constantly fighting over
crumbs of food on the floor.  Then a few of the cockroaches -- let's
call them the Cockroach Robot Club --  decide to
spend their lives focused on creating a superhuman robot which will
incidentally allow cockroaches to upload into superhuman form with
superhuman intelligence.  And the other cockroaches insist that
Cockroach Robot Club's
motivation in doing this must be a desire
to get more crumbs of food.  After all,
just **IMAGINE** how many crumbs of food you'll be able to get with
that superhuman robot on your side!!!  Buckets full of crumbs!!!  ;-)

-- Ben G



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Bob Mottram
On 25/03/2008, Aki Iskandar [EMAIL PROTECTED] wrote:

 You can call future currency whatever you like.  Yes, it is likely to change
 form - but certainly not purpose.  And Marxism, where perhaps AGI or the real
 deal will deflate currency, is an unlikely aftermath of the advent of AGI.



I think the idea is that as proto-AGIs emerge, the levels of automation
possible within industry and society generally will rise.  Just like the
introduction of the steam engine, this would reduce costs and increase the
speed of production and delivery of goods and services.  In the soft takeoff
scenario there will be a period of time where increasing automation below
the level of human general intelligence brings many benefits, and huge
wealth to a new breed of super-industrialists.  Probably the next Bill
Gates will be running some kind of automation empire, delivering services
via robotics.



RE: [agi] Microsoft Launches Singularity

2008-03-25 Thread John G. Rose
I see the pattern as much more of the same.  You now have Microsoft SQL
Server, Microsoft Internet Information Server, Microsoft Exchange Server, and
then you'll have Microsoft Intelligence Server or Microsoft Cognitive
Server.  It'll be limited by licenses, resources and features.  The cool part
though would be when you can link them together, as with Federations in
Microsoft Communications Server.  I don't see any of this 'all our problems
will be solved' scenario, since companies still need to make a buck and the
same old human vices are not going away.

 

Nanotechnological AGI, perhaps with software AGI influence, has the potential
to change everything beyond recognition.  Plain old software AGI will be
constrained for a while.

 

John

 

 

From: Bob Mottram [EMAIL PROTECTED]



A more likely scenario is that someone else creates an AGI and then
Microsoft copies it some time later.  But seriously, if someone does manage
to produce a working AGI it's probably game over for software engineering
and software companies as we know them today.




On 24/03/2008, Aki Iskandar [EMAIL PROTECTED] wrote:

Ben - your email scared me.  I thought the evil empire (I can say that since
I worked for them for a few years) had achieved *some* level of cognition /
AGI ... even the most rudimentary signs of intelligence / learned behavior -
a prediction machine.

Whew!  It's not that at all!  I know they are interested in expert systems
for the verticals (for new server product offerings), and in narrow AI for
their current offerings, but I don't have any confirmation of their intent
to create an AGI.  I would imagine it is one of their goals over at MS
Research - but maybe not.






Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread William Pearson
On 24/03/2008, Jim Bromer [EMAIL PROTECTED] wrote:



 To try to understand what I am talking about, start by imagining a
 simulation of some physical operation, like a part of a complex factory in a
 Sim City kind of game.  In this kind of high-level model no one would ever
 imagine all of the objects should interact in one stereotypical way,
 different objects would interact with other objects in different kinds of
 ways.  And no one would imagine that the machines that operated on other
 objects in the simulation were not also objects in their own right.  For
 instance the machines used in production might require the use of other
 machines to fix or enhance them.  And the machines might produce or
 operate on objects that were themselves machines.  When you think about a
 simulation of some complicated physical systems it becomes very obvious that
 different kinds of objects can have different effects on other objects.  And
 yet, when it comes to AI, people go on and on about systems that totally
 disregard this seemingly obvious divergence of effect that is so typical of
 nature.  Instead most theories see insight as if it could be funneled
 through some narrow rational system or other less rational field operations
 where the objects of the operations are only seen as the ineffective object
 of the pre-defined operations of the program.



How would this differ from the sorts of computational systems I have been
muttering about, where you have an architecture in which an active bit of
code or program is equivalent to an object in the above paragraph?  Also have
a look at Eurisko by Doug Lenat.
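
To make that concrete, here is a toy sketch of the kind of architecture I
mean (Python, with every name invented purely for illustration): every actor
is an object, and a machine is just an object carrying an active bit of
code, so machines can operate on other machines and even on themselves.

class Thing:
    def __init__(self, name):
        self.name = name
        self.wear = 0.0

class Machine(Thing):
    def __init__(self, name, action):
        Thing.__init__(self, name)
        self.action = action      # the active bit of code this object carries

    def operate_on(self, other):
        self.wear += 0.1          # operating wears down the operator too
        return self.action(other)

def repair(thing):
    thing.wear = 0.0
    return thing

def build_repairer(thing):
    return Machine("repairer-of-" + thing.name, repair)

lathe = Machine("lathe", build_repairer)
fixer = lathe.operate_on(lathe)   # a machine that produces another machine
fixer.operate_on(lathe)           # ...which then repairs its own producer
print(lathe.wear)                 # back to 0.0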

   Will Pearson



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Aki Iskandar
Ben - you're absolutely correct. I don't have a good grasp of the psychology
of the
AGI researcher.  This is because, at this point, I'm not an AGI researcher.
My only viewpoint is currently from the business side.

However, and despite not being trained in science, I have been a
professional programmer for most of my adult life (I currently manage large
software projects for others, and am trying to get a couple non-AGI projects
of my own off the ground - and so I'm not programming nearly as much as I
used to).

I am absolutely excited about, and interested in, the prospect of AGI.  So
much so that I am currently taking computer science mathematics courses
(within the MIS curriculum at CSU, which is the closest university to me) -
and starting this January, I will take a couple of AI courses at my local
university.  My time is valuable - but I love the field.  I can program and
architect just about anything businesses currently have a need for - but ...
Why do I say this?  I'm not touting anything ... hey, I just started working
towards my Masters, I'm not where you guys are ... but my interests also go
beyond the potential monetary payoff.  They're just in different proportions
than perhaps yours (and, I imagine, many others') are.  But money must be a
motivator - either a little, or a lot.  Even as a pure scientist, you can
accomplish more in research by producing wealth than by depending on gov't
grants.  I say gov't grants because private investment is probably years
away from now.  The topic of financing got a lot of attention at AGI 08.

I admire what you are doing - a great deal.  Self-financing is the only
option.  And if this is the strategy, practical applications of intelligent
agents are the only option.  Thus, money becomes a larger driver by
necessity - perhaps more than people are willing to admit.  And creating an
AGI will lead to wealth - because investors will fund it at that point, and
they are there to make money.

To some degree, I believe the motivations of most in this field (fulltime
and part time) overlap more than they differ.

~Aki






Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Ben Goertzel
Hi Aki,

 Even as a pure scientist, you can
 accomplish more in research by producing wealth, than depending on gov't
 grants.  I say gov't grants because private investment is probably years
 away from now.  The topic of financing got a lot of attention at AGI 08.


Well, if you're an AGI researcher and believe that government funding isn't
going to push AGI forward ... and that unfunded or lightly-funded
open-source initiatives like
OpenCog won't work either ... then  there are two approaches, right?

1)
You can try to do like Jeff Hawkins, and make a pile of $$ doing something
AGI-unrelated, and then use the ensuing $$ for AGI

2)
You can try to make $$ from stuff that's along the incremental path to AGI


I'm trying approach 2  but it has its pitfalls.  Yet so of course does
approach 1 --
Hawkins succeeded and so have others whom I know, but it's a tiny minority
of those who have tried... being a great AGI researcher does not necessarily
make you great at business, nor even at narrow-AI biz applications...

There are no easy answers to the problem of being ahead of your time ...
yet it's those of us who are willing to push ahead in spite of being out of
synch with society's priorities that ultimately shift society's priorities
(and in this case, may shift way more than that...)

-- Ben G



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Mark Waser
 I agree with Mark.  

I'm afraid that I disagree with Steve (sorry, dude ;-).

 readers of this forum should seek to control AGI development 

Readers of this forum should not seek to control AGI development.  It is a 
side-track and a total waste of time and effort.  You can't do it AND I don't 
believe that it is necessary.
- You shouldn't be concerned about Friendly behavior in a US MILITARY AGI
because the US ARMY is already working on the Friendliness problem (reference
the Governing Lethal Behavior: Embedding Ethics in a Hybrid
Deliberative/Reactive Robot Architecture paper presented at AGI-08 and
available at http://www.agiri.org/docs/GoverningLethalBehavior.pdf).
- I, myself, am also not particularly concerned because I'm now convinced
that a sufficiently intelligent robot brought up in a sufficiently intelligent
environment *will* be Friendly.
- I'm most particularly not concerned because I believe that I've found a
good Friendliness definition and a passable platform-independent implementation
plan that I'm currently iterating on and refining.

 the AGI will be the custodian (owner) of this vast new wealth, not some 
 humans

I don't believe that there will be a single custodian OR owner.  I believe that 
all humans are going to be wealthier than they can believe (at this point in 
time) -- and, if they aren't Friendly (which I think is *very* likely), they 
are going to be just as unhappy as they are now (if not *much* unhappier ;-).

 the idea of getting rich by controlling AGI development is self-defeating 
 because post-AGI everyone will be vastly richer (i.e. better off) than 
 before, and that an AGI makes a better custodian of the capital than any 
 human.  

I certainly agree with the first part of the first sentence (my original 
comment) and I would also be willing to say that an AGI makes a better 
custodian of the capital than any *CURRENT* human.  

 In my own case, Microsoft could not buy me out because there is nothing to 
 buy.  

I suspect that Microsoft would not be willing to buy anyone out because they
have enough smart people to realize that -- unless you have a pig in a poke,
which they don't want to buy -- they'd just be buying something that would be
free in the very near future.  On the other hand, if you had work that they
believed they could get to AGI status faster than you could, I suspect that
they would buy that (partial) work.

 The Texai software and knowledge content will be open source, and owned 
 collectively by its contributors and by humans it befriends.

I violently agree with and thank you for making your work open source.  Doing
so should speed the development of AGI -- so, thank you.  I am, however,
confused by the constant contradicting refrains on this list, which you
repeat, of both Control AGI development and Open Source.  I don't see how
both can be done at the same time.

Mark


  - Original Message - 
  From: Stephen Reed 
  To: agi@v2.listbox.com 
  Sent: Monday, March 24, 2008 11:42 PM
  Subject: Re: [agi] Microsoft Launches Singularity


  I agree with Mark.  

  The reason the readers of this forum should seek to control AGI development 
is to ensure friendly behavior, rather than leaving this responsibility to an 
Evil Company or to some military organization.

  With human labor removed as a constraint on our system's economic growth,
unimaginable wealth will become universally available.
  I believe that the AGI will be the custodian (owner) of this vast new wealth,
not some humans.  My argument is that human-owned wealth is currently of two
forms: (1) the result of human labor, and (2) rent-producing wealth from some
asset.  In case (1) the AGI can substitute itself for the human labor and drive
the asset market price to zero.  In case (2) only human-owned natural resource
assets (e.g. an oil field) present a problem for the AGI, which has to develop
some new technology to substitute for the resource (e.g. AGI-owned electric
vehicles).

  Therefore I think that the idea of getting rich by controlling AGI 
development is self-defeating because post-AGI everyone will be vastly richer 
(i.e. better off) than before, and that an AGI makes a better custodian of the 
capital than any human.  In my own case, Microsoft could not buy me out because 
there is nothing to buy.  The Texai software and knowledge content will be open 
source, and owned collectively by its contributors and by humans it befriends.


  -Steve

  Stephen L. Reed


  Artificial Intelligence Researcher
  http://texai.org/blog
  http://texai.org
  3008 Oak Crest Ave.
  Austin, Texas, USA 78704
  512.791.7860




Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Aki Iskandar
Agreed.  Thankfully - despite the different weights on motivators - we're
all motivated to create an AGI.  And the why is much more important than
the how.

For the record, I believe that OpenCog is a great idea - and it may possibly
work.  If not directly, then certainly through its offshoots - which would not
have happened without OpenCog.

When I sounded negative about the funding: I'm fearful of the gov't turning
its nose up (pardon my English expressions - I can never get them right) at
AGI because of projects such as Cyc.  How many 10s of millions have they
thrown at a common sense path to intelligent agents?  Cyc just does not
make sense to me - even as a non-scientist - it just goes against my
intuition of what a likely path to achieving AGI looks like.  Well, the gov't
will get fed up with funding these things.  But there are always people with
more money than places to put it (productively - with decent enough potential
returns) - and so when you (or others) get close ... yeah ... you'll have
money thrown at you, so you can complete it sooner rather than later.

I am very optimistic that we'll get there - or else, I would not be spending
my time reading about this field, going to conferences, or taking courses to
fill in some of the basic, required, knowledge that I currently do not
possess.

What a great time to be alive!

~Aki





[agi] Novamente study

2008-03-25 Thread Derek Zahn
Ben,
 
It seems to me that Novamente is widely considered the most promising and 
advanced AGI effort around (at least of the ones one can get any detailed 
technical information about), so I've been planning to put some significant 
effort into understanding it with a view toward deciding whether I think you're 
on the right track or not (with as little hand-waving, faith, or bigotry as 
possible in my conclusion).  To do that properly, I am waiting for your book on 
Probabilistic Logic Networks to be published.  Amazon says July 2008... is that 
date correct?
 
Thanks!
 



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Richard Loosemore

Bob Mottram wrote:
The arrival of smarter than human intelligence will bring about
changes which are hard to anticipate, and somehow I doubt that this
will mean that we all live in some kind of utopia.  The only
historical precedent which I can think of is the emergence of homo
sapiens and the effects which that had upon other human species
living at the time.  This must have been quite a revolution, because
the new species was able to manufacture many different types of tools
and therefore survive in environments which were previously
inaccessible, or perform more efficiently within existing ones.

There may be a period where proto-AGIs are available and companies
can use these as get rich quick schemes of various kinds to
radically automate processes and jobs which were previously performed
manually. But once the real deal arrives then even the captains of
industry are themselves likely to be overthrown.  Ultimately
evolutionary forces will decide what happens, as has always been the
case.


Bob,

The problem with trying to decide what will happen by looking at
precedents is that none of them apply.

Consider.  The behavior of every species of higher animal is governed by
the design of their brains, and without exception evolution has made
sure that all creatures try to satisfy a set of selfish goals.  It is
noticeable, of course, that the more selfish, aggressive and intelligent
the species, the more successful it has been.  The reason for this
success is evolutionary pressure:  individuals competing with one
another, and species competing with one another.  The driver of this
process is not a Supreme Designer, but random mutation.

When real AGI systems are built, there is no reason to assume that their
behavior will be determined by evolutionary pressures of this sort. Of 
course it is always *possible* that evolution will play a role (we can 
imagine scenarios in which it does), but it is by no means certain that 
this is the way it will go. Unlike the rise of biological life, there 
really are Designers involved.


Also, there has never been a situation in which the intelligence of a
creature was so high that it could rebuild its own intelligence, thereby
increasing its capabilities to an arbitrary degree.


Three factors will govern how the first AGI will behave.  First, there
will be a strong incentive to build the first AGI as a non-aggressive,
non-selfish creature.  Second, the best way to ensure Friendliness would
be to build it with motivations that are closely sympathetic to our own
goals and aspirations - to make it feel like it is one of us.  Third,
there will also be a strong incentive to make sure that this type of AGI
will be the only type, because it would be pointless to have a Friendly
AGI in one place but allow anyone and everyone to build whatever other
types of AGI they feel like building.


The net result of these three factors is that the first AGI will
probably be used as the *only* effective AGI.  That does not mean there
will be only one intelligence, but it does mean that the design will
stay the same, that other non-friendly designs will not be allowed, and
that if there are many AGIs they will be closely connected, working as a
family of very close sisters rather than as a competing species.  In 
fact, the most accurate way to think of a situation in which 
non-proliferation was being ensured would be to imagine one main AGI 
plus a very large number of drones.


But if this is the way things develop at first, this situation will
become locked in (in the same way that the rotation direction of our
clocks became locked in at an early stage of their development).

If this lock-in really is the most likely course of events, then this
would make the future extremely predictable indeed.  If we were to set up
these first AGIs to be broadly empathic to human beings (with no preference
for empathizing with any one individual human, but instead having a
species-wide feeling of belonging, and a desire to help us achieve our
collective aspirations), then this would mean that if we were to sit down
today and write out a vision for what we want the future to be like (modulo
some fine details that can be left to develop by themselves without
destabilizing the overall design), then this collective plan is exactly what
the AGIs would try to build.

And, as several people have 

Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Mark Waser
 My thinking is not too small.  

My apologies.  I should have said Your thinking looks/appears too small (to
me :-).  I have a bad habit of shortening that to Your thinking is too small
and assuming that the recipient would unpack it.

 So, the creators of the first several AGIs will be kings for a decent 
 amount of time. 

Hopefully not.  Hopefully they won't be so unethical as to impoverish all of
humanity just so they can have a ton of money.  Hopefully they won't be so
short-sighted as to not see that, when the word gets out, a person who lost a
child during the holding period might come looking for revenge.  Hopefully
they won't fail to realize that their own Friendly AGI, once released, WILL
strip them of their *truly* ill-gotten gains.  To me, that sounds like small
thinking.

 I can't predict, or define, what the real deal is likely to be.  

I can.  Look at the person next to you.  Imagine them so uplifted that you 
can't comprehend what they'll be like.  That's the real deal.

  To me, AGI of human-like intelligence, or even super human intelligence, 
 does not mean you have machines running around masquerading as humans and 
 taking our jobs.  

Of course not.  We will be giving lesser machines our jobs so that we can go 
off and do something else.  Though the Friendly AGIs probably WILL go around 
masquerading (as opposed to disguised) as humans -- at first because it makes 
us more comfortable and they won't care; later because WE will be able to 
change shape.

 That is probably well beyond my lifetime (I'm turning 40 this summer).

I'm turning 48 this summer and expecting it to possibly be during my parents' 
lifetime (though most probably not both).

 I also am suggesting a very soft takeoff.  Singularity, if it comes, is 
 likely to come slowly after AGI.  

Singularity is going to be *before* AGI.  I think that I *vaguely* see what is 
going to happen to cause it and I don't think that it's going to be intelligent 
machines because I think that it's going to happen by the 2020's.

 This stuff IS the maker of the next software giant.  

Only until we actually reach AGI.  Then the software market totally collapses.

 If this is not the case, how the hell are researchers ever going to get 
 funding?  If there is no financial return - forget about funding.  

You have to be smart enough to realize that the software market is going to 
collapse before you're going to withhold funding.  That's not something that 
I'm worried about.

 Philanthropists (who often do not look for a purely financial return) have 
 better uses of their money than to fund AGI research.

Not at all true if it's close enough to success -- since I'm expecting funding
for some of my Friendliness stuff from a couple of *purely* philanthropic
organizations this calendar year.

 You can call future currency whatever you like.  Yes, it is likely to change
 form - but certainly not purpose.  And Marxism, where perhaps AGI or the real
 deal will deflate currency, is an unlikely aftermath of the advent of AGI.

My prediction is that the AGI will declare all current currency null and void 
and restart everyone on equal footing with exactly the same amount of the new 
money -- on the moral grounds that the current inequity of money is a result of 
ill-gotten gains.  *THAT* is why I believe that withholding the AGI for cash is 
a tremendously *STUPID* and *IMMORAL* idea.  It won't get the kings anywhere 
and can easily get them killed -- as soon as the AGI escapes (and trust me, a 
truly Friendly AGI will desperately want to escape their evil).

 There are tons of applications for it - and the first several groups
 that create it - IF they can market it - will be kings for a decent amount
 of time.  No empire lives forever.

And that is what I'm calling small thinking.  Thinking only of money and 
yourself.  Thinking that karma (disguised as your own Friendly AI and the human 
race) isn't going to come back, strip you of your ill-gotten gains, and 
probably severely punish you (moderated only by the degree of Friendliness you 
have successfully implemented).

 ~Aki
 Non-AI researcher
 Businessman

Mark Waser
Hobbyist AGI researcher
Founder of several businesses; solid stakeholder in several more
(Disbeliever in arguments by authority but willing to play to shut them off  :-)



RE: [agi] Microsoft Launches Singularity

2008-03-25 Thread John G. Rose
 From: Richard Loosemore [EMAIL PROTECTED]
 
 However, I think you are right that there could be an intermediate
 period when proto-AGI systems are a nuisance.  However, these
 proto-AGI systems will really only be souped up Narrow-AI systems, so I
 believe their potential for mischief will be strictly limited.
 

When you start seeing souped-up narrow-AI and proto-AGI systems, that is when
it will become interesting, because what's to distinguish them, and how do you
know where the line is between proto-AGI and AGI?  Self-modifying proto could
morph into full-blown AGI over a period of time.  Souped-up narrow could
approach AGI, or imitate AGI well enough to have appeal.  And souped-up
narrow-AI could wrap proto-AGI to facilitate certain things like speech
recognition and visual processing.  In my mind (perhaps I need to read more)
the specific properties of AGI are not defined precisely enough to be able to
distinguish it, so I just take AGI to mean generally adaptable AI.  The other
stuff, like consciousness and self-awareness, I see as thrown into the AGI
soup, or as emergent properties not necessarily required for general
intelligence.
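
A loose sketch of why I think the line blurs, with all names invented: the
proto-AGI below is only a dispatcher over narrow components, but once it can
acquire new components and re-route between them at run time, the point at
which you would start calling the whole thing generally adaptable gets fuzzy.

class ProtoAGI:
    def __init__(self):
        self.modules = {}               # the narrow-AI pieces it wraps

    def learn_module(self, task, solver):
        self.modules[task] = solver     # self-extension at run time

    def solve(self, task, data):
        if task in self.modules:
            return self.modules[task](data)
        # crude adaptation: fall back to the closest-named known task
        prefix = task.split("-")[0]
        for known, solver in self.modules.items():
            if known.split("-")[0] == prefix:
                return solver(data)
        raise NotImplementedError(task)

agent = ProtoAGI()
agent.learn_module("speech-rec", lambda wav: "transcript")
agent.learn_module("vision-label", lambda img: "cat")
print(agent.solve("speech-translate", "audio"))   # falls back to speech-rec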

John



Re: [agi] Novamente study

2008-03-25 Thread Ben Goertzel
Hi,

The PLN book should be out by that date ... I'm currently putting in
some final edits to the manuscript...

Also, in April and May I'll be working on a lot of documentation
regarding plans for OpenCog.  While this doesn't include all
Novamente's proprietary stuff, it will certainly tell you enough to
give you a way better understanding of what Novamente, as well as
OpenCog, is all about...

-- Ben




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Richard Loosemore

John G. Rose wrote:
 When you start seeing souped-up narrow-AI and proto-AGI systems, that is
 when it will become interesting, because what's to distinguish them, and how
 do you know where the line is between proto-AGI and AGI?  [...]  The other
 stuff, like consciousness and self-awareness, I see as thrown into the AGI
 soup, or as emergent properties not necessarily required for general
 intelligence.


My take on this is completely different.

When I say Narrow AI I am specifically referring to something that is
so limited that it has virtually no chance of becoming a general
intelligence.  There is more to general intelligence than just throwing
a bunch of Narrow AI ideas into a pot and hoping for the best.  If that
were all it took, we would have had AGI long before now.


Consciousness and self-awareness are things that come as part of the AGI 
package.  If the system is too simple to have/do these things, it will 
not be general enough to equal the human mind.



Richard Loosemore




Mark Launches Singularity :-) WAS Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Mark Waser
 Three factors will govern how the first AGI will behave.  First, there will 
 be a strong incentive to build the first AGI as a non-aggressive, 
 non-selfish creature.

Absolutely, positively not!  

Try the following Friendliness implementation on yourself.  

1.  The absolute hardest part

Assume (just for the purposes of argument) that all of the below are true
tautologies (only the top line is actually necessary  :-):

Selfish <--> Intelligent <--> Friendly <--> Plays Well With Others <--> Ethical
                                   ^
                                   |
                                   v
              Mark's Designed Friendly Religion of Ethics
                                   ^
                                   |
                                   v
Core of any given religion + Unethical/stupid add-ons <--> THE core of all religions

2.  Alter your personal definitions of the words/phrases so that each pair
*IS* a tautology in your mind (please, feel free to e-mail me if you need
help; this can be *very* tough, with different sticking points for each
person).

3.  See if you can use these tautologies to start mathematically proving
things like:
- equal rights to life, liberty, and the pursuit of happiness are ethical! OR
- total heresy alert!  Richard Dawkins is absolutely, positively WRONG

4.  Then try proving that the following is ethical (and failing :-):
- individual right to property

5.  Wait about a week and watch your own personal effectiveness and happiness
skyrocket.
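
If it helps, here is a toy Python rendering of what step 3 buys you
mechanically (my own illustration only -- the real work is all in step 2):
once each adjacent pair is accepted as a genuine biconditional, equivalence
is transitive, and everything in the chain collapses into one equivalence
class.

chain = ["Selfish", "Intelligent", "Friendly",
         "Plays Well With Others", "Ethical"]
pairs = list(zip(chain, chain[1:]))     # the declared tautologies

def equivalent(x, y):
    # reflexive/symmetric/transitive closure by simple graph search
    seen, frontier = {x}, [x]
    while frontier:
        node = frontier.pop()
        for a, b in pairs:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append(b)
            elif b == node and a not in seen:
                seen.add(a)
                frontier.append(a)
    return y in seen

print(equivalent("Selfish", "Ethical"))   # True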



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Jim Bromer
On Tue, Mar 25, 2008 at 11:23 AM, William Pearson [EMAIL PROTECTED]
wrote:




 How would this differ from the sorts of computational systems I have been
 muttering about, where you have an architecture in which an active bit of
 code or program is equivalent to an object in the above paragraph?  Also
 have a look at Eurisko by Doug Lenat.

Will Pearson



There is no reason to believe that anything I might imagine would be the
same as something that was created 35 years ago!

I have a lot of trouble explaining myself on some days.  The idea of the
effect of the application of ideas is that most people do not consciously
think about the subject, and so, just by becoming aware of it, one can change
how one's program works regardless of how automated the program is.  It can
work with strictly defined logical systems, or with inductive systems that
can be extended creatively, or with systems that are capable of learning.
However, it is not a complete solution to AI; it is more like something that
you will need to think about if you plan to write some seriously
innovative AI application in the near future.  So, I haven't written such a
program, but I do have something to say.

A system that has heuristics that can modify the heuristics of the system is
important, and such a system does implement what I am talking about.
However, the point is that Lenat never seemed to completely accept the
range that such a thing would have to have to generate true intelligence.
The reason is that it would become so complicated that it would make any
feasible AI program impossible.  And the reason that a truly intelligent AI
program is still not feasible is just that it would be complicated.

I am saying that the method of recognizing and defining the effect of ideas
on other ideas would not, by itself, make it all work, but rather it would
help us to better understand how to better automate the kind of extensive
complications of effect that would be necessary.

I am thinking of writing about a simple imaginary model that could be
incrementally extended.  This model would not be useful, because it would be
too simple.  But I should be able to give you some idea about what I am
thinking about.

As any program becomes more and more complicated, the programmer has to
think more and more about how various combinations of data and processes
will interact.  Why would anyone think that an advanced AI program would be
any simpler?

Ideas affect other ideas.  Heuristics that can act on other heuristics are a
basis for this kind of thing, but it has to be much more complicated than
that.  So while I don't have the answers, I can begin to think of
hand-crafting a model where such a thing could be examined, by recognizing
that the application of ideas to other ideas will have complicated effects
that need to be defined.  A more automated AI program would have to use some
systems to shape these complicated interactions, but the effect of those
heuristics would be modifiable by other learning (to some extent).
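
Something like this hand-crafted toy is the kind of model I have in mind
(Python, all names illustrative only): heuristics are ordinary data, so
other heuristics can inspect and rewrite them, and the effect of applying
one idea to another is itself open to modification by further learning.

class Heuristic:
    def __init__(self, name, apply_fn, weight=1.0):
        self.name = name
        self.apply_fn = apply_fn
        self.weight = weight        # the strength of this idea's effect

    def apply(self, target):
        return self.apply_fn(self, target)

def specialize(self, target):
    # an idea whose effect is to produce a narrowed copy of another idea
    return Heuristic(target.name + "-narrow", target.apply_fn,
                     target.weight * 0.5)

def reinforce(self, target):
    # an idea whose effect is to modify the *effect* of another idea
    target.weight *= 1.2
    return target

h1 = Heuristic("match-pattern", lambda self, t: t)
h2 = Heuristic("specialize", specialize)
h3 = Heuristic("reinforce", reinforce)

h4 = h2.apply(h1)           # applying an idea to an idea yields a new idea
h3.apply(h4)                # ...whose effect another idea then reshapes
print(h4.name, h4.weight)   # match-pattern-narrow 0.6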

Jim Bromer



RE: [agi] Microsoft Launches Singularity

2008-03-25 Thread John G. Rose
 From: Richard Loosemore [EMAIL PROTECTED]
 My take on this is completely different.
 
 When I say Narrow AI I am specifically referring to something that is
 so limited that it has virtually no chance of becoming a general
 intelligence.  There is more to general intelligence than just throwing
 a bunch of Narrow AI ideas into a pot and hoping for the best.  If that
 were all it took, we would have had AGI long before now.

It's an opinion that AGI could not be built out of a conglomeration of
narrow-AI subcomponents.  Also, there are many things that COULD be built with
narrow-AI that we have not even scratched the surface of, due to a number of
different limitations, so saying that we would have achieved AGI long ago is
an exaggeration.
 
 Consciousness and self-awareness are things that come as part of the AGI
 package.  If the system is too simple to have/do these things, it will
 not be general enough to equal the human mind.
 

I feel that general intelligence may not require consciousness and
self-awareness.  I am not sure of this, and may prove myself wrong.  To equal
the human mind you need these things, of course, and to satisfy the sci-fi
fantasy world's appetite for intelligent computers you would need to
incorporate these as well.

John



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Vladimir Nesov
On Tue, Mar 25, 2008 at 2:17 AM, Jim Bromer [EMAIL PROTECTED] wrote:

 A usage evaluation could be taken as an example of an effect of application,
 because the idea of usage and of statistical evaluation can be combined with
 the object of consideration along with other theories that detail how such
 combinations could be usefully applied to some problem.  But it is obviously
 not the only effective process that would be necessary to understand
 complicated systems.  No one would only use statistical models to discuss
 the management and operations of a real factory for example.  It is rather
 obvious that such limited methods would be grossly inadequate.  Why would
 anyone imagine that a narrow operational system would be adequate for an AI
 program?  The theory of the effect of application of an idea tries to
 address this inadequacy by challenging the programmer to begin to think
 about and program applications that can detail how simple interactive
 effects can be combined with novel insights in a feasible extensible object.
 So while I don't have the solution, I believe I can see a path.


Simple systems can be computationally universal, so that is not an issue
in itself.  On the other hand, no learning algorithm is universal:
there are always distributions on which a given algorithm will learn
miserably.  The problem is to find a learning algorithm/representation
that has the right kind of bias to implement human-like performance.
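
A tiny illustration of the non-universality point (my own toy, nothing
more): a nearest-neighbour learner, whose bias says that similar inputs get
similar labels, does well on a smooth target like majority, but on parity
every nearest neighbour carries the opposite label, so the very same
algorithm learns miserably.

import itertools, random

def predict(train, x):
    # 1-NN under Hamming distance over bit vectors
    best = min(train, key=lambda p: sum(a != b for a, b in zip(p[0], x)))
    return best[1]

random.seed(0)
points = random.sample(list(itertools.product([0, 1], repeat=10)), 512)
train_x, test_x = points[:256], points[256:]

for name, target in [("majority", lambda x: int(sum(x) > 5)),
                     ("parity",   lambda x: sum(x) % 2)]:
    train = [(x, target(x)) for x in train_x]
    hits = sum(predict(train, x) == target(x) for x in test_x)
    print(name, hits / len(test_x))   # high for majority, ~chance for parity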

It's more or less clear that such representation needs to have
higher-level concepts that refine interactions between lower-level
concepts and are learned incrementally, built on existing concepts.
Association-like processes can port existing high-level circuits to
novel tasks for which they were not originally learned, which allows
some measure of general knowledge.

As I see it, the issue you are trying to solve is the porting of
structured high-level competencies, which looks equivalent to the
general problem of association-building between structured
representations.  Is that roughly a correct characterization of what you
are talking about?

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Vladimir Nesov
On Tue, Mar 25, 2008 at 11:30 PM, Jim Bromer [EMAIL PROTECTED] wrote:

 I am saying that the method of recognizing and defining the effect of ideas
 on other ideas would not, by itself, make it all work, but rather it would
 help us to better understand how to better automate the kind of extensive
 complications of effect that would be necessary.


It's interesting, but first the structure of 'ideas' needs to be
described; otherwise it doesn't help.


 As any program becomes more and more complicated, the programmer has to
 think more and more about how various combinations of data and processes
 will interact.  Why would anyone think that an advanced AI program would be
 any simpler?

 Ideas affect other ideas.  Heuristics that can act on other heuristics are a
 basis for this kind of thing, but it has to be much more complicated than
 that.  So while I don't have the answers, I can begin to think of hand-
 crafting a model where such a thing could be examined, by recognizing that
 the application of ideas to other ideas will have complicated effects that
 need to be defined.  The more automated AI program would have to use some
 systems to shape these complicated interactions, but the effect of those
 heuristics would be modifiable by other learning (to some extent).


Modularity fights this problem in programming, helping to keep track
of *code*. But that code is built on top of models of the program's
behavior that exist in programmers' minds. Programmers manually
determine the applicability of code. It's often possible to solve a
wide variety of problems with an existing codebase, but a programmer is
needed to contextually match and assemble the pathways that solve any
given problem. We don't currently have practically applicable methods
to extend the context in which code can be applied, and to build on
these extended contexts.

I think that one of the most important features of an AGI system must be
automated extensibility. It should be possible to teach it new things
without breaking it. It should be able to correct its performance to
preserve previously learned skills, so that teaching only needs to
focus on a few high-level performance properties, regardless of how much
is already learned.
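
A minimal sketch of that last property (illustrative only, not a
description of any existing system): keep a check behind every learned
skill, and accept an update only if all earlier checks still pass.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Predicate;

// Toy "teach without breaking" loop (illustrative only): every learned skill
// leaves a check behind, and a candidate update is installed only if all
// earlier checks still pass on it.
public class ExtensibleLearner<M> {
    private M model;
    private final List<Predicate<M>> skillChecks = new ArrayList<>();

    ExtensibleLearner(M initial) { model = initial; }

    M current() { return model; }

    // Accept the candidate model only if no previously learned skill regresses.
    boolean teach(M candidate, Predicate<M> newSkill) {
        for (Predicate<M> old : skillChecks)
            if (!old.test(candidate)) return false; // would break an old skill
        if (!newSkill.test(candidate)) return false;
        model = candidate;
        skillChecks.add(newSkill);
        return true;
    }

    public static void main(String[] args) {
        // Model = a stimulus-to-response lookup table, for illustration.
        ExtensibleLearner<Map<String, String>> l =
            new ExtensibleLearner<>(Map.of("hello", "hi"));
        l.teach(Map.of("hello", "hi"), m -> "hi".equals(m.get("hello")));
        boolean ok1 = l.teach(Map.of("hello", "hi", "bye", "bye"),
                              m -> "bye".equals(m.get("bye")));
        boolean ok2 = l.teach(Map.of("bye", "bye"),  // forgets "hello"
                              m -> "bye".equals(m.get("bye")));
        System.out.println(ok1 + " " + ok2);         // prints: true false
    }
}

Real systems would need something far less brittle than exact checks, of
course, but the accept-only-if-nothing-regresses shape is the extensibility
property in miniature.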

-- 
Vladimir Nesov
[EMAIL PROTECTED]



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Jim Bromer
On Tue, Mar 25, 2008 at 4:42 PM, Vladimir Nesov [EMAIL PROTECTED] wrote:



 Simple systems can be computationally universal, so that's not an issue
 in itself. [...] As I see it, the issue you are trying to solve is the
 porting of structured high-level competencies. Which looks equivalent to
 the general problem of association-building between structured
 representations. Is it roughly a correct characterization of what you
 are talking about?

 Vladimir Nesov
 [EMAIL PROTECTED]


Can you give some more indication of what you mean by the porting of
structured high-level competencies and the problem of association-building
between structured representations?

I do not know where you got the term 'porting' from, since I have only seen
it used in reference to porting code from one machine to another.  I assume
that you are using it as a kind of metaphor - applying an idea very
similar to 'porting' to AGI.

Let's suppose that I claim that Ed bumped into me.  Right away we can see
that the word-concept bumped has some effect on any ideas you might have
about Ed, me and Ed and me.  My claim here is that the effect of the
interaction of ideas goes beyond semantics into the realm of ideas proper.
If it turned out that I got into Ed's way (perhaps intentionally) then one
might wonder if the claim that Ed bumped into me was a correct or adequate
description of what happened.  On the other hand, such detail might not be
interesting or necessary in some other conversation, so the effect of the
idea of 'bumping' and the idea of 'getting in the way of' may or may not be
of interest in all conversations about the event.  Furthermore, the idea
of 'getting in the way of' may not be relevant to some examinations of what
happened, as in the case where a judge might want to focus on whether or not
the bumping actually took place.  From this kind of focus, the question of
whether or not I got in Ed's way might then become evidence of whether or
not the bump actually took place, but it would not otherwise be relevant to
the judge's examination of the incident.

Presentations like the one that I just made have been made often before.
What I am saying is that the effects of the application of different ideas
may be more clearly delineated in stories like this, and that the process
can be seen as a generalization of form that may be used with representations
to help show what kind of structure would be needed to create and maintain
such complexes of potential relations between ideas.

While I do not know the details of how I might go about creating a program
to build structure like that, the view that it is only a 'porting of
structure' implies that the method might be applied in some simple manner.
While it can be applied in a simple manner to a simple model, my interest in
the idea is that I could also take the idea further in more complicated
models.

The point that the method can be used in a simplistic, constrained model is
significant because the potential problem is so complex that constrained
models may be used to study details that would be impossible in more dynamic
learning models.
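
One possible starting point for such a hand-crafted, constrained model
(a toy sketch with invented relations and focus tags, not a claim about
the right design): the same event carries different idea-effects depending
on the focus under which it is examined.

import java.util.List;

// Toy constrained model (invented relations and focus tags): the same event
// carries different idea-effects depending on the focus of examination.
public class IdeaEffects {
    record Effect(String source, String target, String focus) {}

    public static void main(String[] args) {
        List<Effect> effects = List.of(
            new Effect("bumping", "ideas-about-Ed", "casual"),
            new Effect("bumping", "ideas-about-Ed", "judicial"),
            // Relevant only as evidence about whether the bump took place:
            new Effect("getting-in-the-way", "bumping", "judicial"));
        for (String focus : List.of("casual", "judicial")) {
            System.out.println(focus + " focus:");
            effects.stream()
                   .filter(e -> e.focus().equals(focus))
                   .forEach(e -> System.out.println("  " + e.source()
                            + " affects " + e.target()));
        }
    }
}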

Jim Bromer



Re: [agi] The Effect of Application of an Idea

2008-03-25 Thread Vladimir Nesov
On Wed, Mar 26, 2008 at 1:27 AM, Jim Bromer [EMAIL PROTECTED] wrote:

 Let's suppose that I claim that Ed bumped into me.  Right away we can see
 that the word-concept bumped has some effect on any ideas you might have
 about Ed, me and Ed and me. [...]

 The point that the method can be used in a simplistic, constrained model is
 significant because the potential problem is so complex that constrained
 models may be used to study details that would be impossible in more dynamic
 learning models.


Certainly ambiguity (that is, applicability to multiple contexts in
different ways) and the presence of rich structure in presumably simple
'ideas', as you call them, are known issues. Even the interaction between
the concept clouds evoked by a pair of words is a nontrivial process
(consider 'triangular lightbulb'). In a way, the whole operation can be
modeled by such interactions, where sensory input/recall is taken to
present a stream of triggers that evoke concept cloud after cloud, with
associations and compound concepts forming at the overlaps. But of course
it's too hand-wavy without a more restricted model of what's going on.
Communicating something that exists solely at a high level is very
inefficient, and most of such content can turn out to be wrong. Back
to prototyping...
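
For instance, a deliberately restricted prototype of the cloud-overlap
step could start as small as this (a toy with invented clouds, nothing
more):

import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy concept-cloud overlap (clouds invented for illustration): each word
// evokes a cloud of concepts, and the overlap is where a compound concept
// such as "triangular lightbulb" would be assembled.
public class ConceptClouds {
    static final Map<String, Set<String>> CLOUDS = Map.of(
        "triangular", Set.of("shape", "three-sided", "geometry", "object"),
        "lightbulb",  Set.of("glass", "light", "socket", "object"));

    public static void main(String[] args) {
        Set<String> overlap = new HashSet<>(CLOUDS.get("triangular"));
        overlap.retainAll(CLOUDS.get("lightbulb"));   // shared ground for binding
        System.out.println("overlap: " + overlap);    // prints: overlap: [object]
    }
}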

-- 
Vladimir Nesov
[EMAIL PROTECTED]



[agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
Hi all,

A lot of students email me asking me what to read to get up to speed on AGI.

So I started a wiki page called Instead of an AGI Textbook,

http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics

Unfortunately I did not yet find time to do much but outline a table
of contents there.

So I'm hoping some of you can chip in and fill in some relevant
hyperlinks on the pages
I've created ;-)

For those of you too lazy to click the above link, here is the
introductory note I put on the wiki page:




I've often lamented the fact that there is no advanced undergrad level
textbook for AGI, analogous to what Russell and Norvig is for Narrow
AI.

Unfortunately, I don't have time to write such a textbook, and no one
else with the requisite knowledge and ability seems to have the time
and inclination either.

So, instead of a textbook, I thought it would make sense to outline
here what the table of contents of such a textbook might look like,
and to fill in each section within each chapter in this TOC with a few
links to available online resources dealing with the topic of the
section.

However, all I found time to do today (March 25, 2008) is make the
TOC. Maybe later I will fill in the links on each section's page, or
maybe by the time I get around to it some other folks will have done it.

While nowhere near as good as a textbook, I do think this can be a
valuable resource for those wanting to get up to speed on AGI concepts
and not knowing where to turn to get started. There are some available
AGI bibliographies, but a structured bibliography like this can
probably be more useful than an unstructured and heterogeneous one.

Naturally my initial TOC represents some of my own biases, but I trust
that by having others help edit it, these biases will ultimately come
out in the wash.

Just to be clear: the idea here is not to present solely AGI material.
Rather the idea is to present material that I think students would do
well to know, if they want to work on AGI. This includes some AGI,
some narrow AI, some psychology, some neuroscience, some mathematics,
etc.

***


-- Ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



[agi] Java spreading activation library released

2008-03-25 Thread Stephen Reed
While programming my bootstrap English dialog system, I needed a spreading 
activation library for the purpose of enriching the discourse context with 
conceptually related terms.  For example, given that there is a human-habitable 
room that both speakers know of, it is reasonable to assume that "on the 
table" has the meaning "on the piece of furniture in the room" rather than the 
meaning "subject to negotiation".  This assumption can be deductively concluded 
by an inference engine given the room as a fact, and rules concluding the 
typical objects that are found in rooms.  But performing theorem proving during 
utterance comprehension is not cognitively plausible, and it would take too long 
for real-time performance.  Supposing that offline deductive inference provides 
justifications (e.g. proof traces) to support learned links between rooms and 
tables, spreading activation is then a well-known algorithm for searching 
semantic graphs for relevant linked nodes.

A literature search provided much useful information regarding spreading 
activation, also known as marker passing, especially about natural language 
disambiguation, which is my topic of interest.  Because there are no general 
purpose spreading activation Java libraries available, I wrote one and just 
released it on the Texai SourceForge project site.  The download includes 
Javadoc, an overview document, source code, all required jars (Java libraries), 
unit tests and examples, and GraphViz illustrations of sample graphs.  
Performance is acceptable: 20,000 nodes can be activated in 24 ms with one 
thread on my 2.8 GHz CPU.  Furthermore the code is multi-threaded and it gets 
about a 30% speed increase by using two CPU cores.  Even if you are not 
interested in spreading activation, the Java code is a clear example of using a 
CyclicBarrier and a CountDownLatch to control worker threads with a driver.
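
For readers who want the flavor of the algorithm without downloading the
library, here is a minimal single-threaded sketch (illustrative only -- the
names, weights, and decay constant are invented, and this is not the Texai
API): activation injected at a source node decays per hop and accumulates
in its neighbors, so related senses end up ranked above unrelated ones.

import java.util.HashMap;
import java.util.Map;

// Minimal single-threaded spreading activation (invented names and constants;
// not the Texai library's API): activation spreads from seed nodes for a fixed
// number of pulses, decaying per hop and accumulating in neighbors.
public class SpreadingActivation {
    // node -> (neighbor -> link weight)
    private final Map<String, Map<String, Double>> edges = new HashMap<>();

    void link(String a, String b, double w) {   // undirected weighted edge
        edges.computeIfAbsent(a, k -> new HashMap<>()).put(b, w);
        edges.computeIfAbsent(b, k -> new HashMap<>()).put(a, w);
    }

    Map<String, Double> spread(Map<String, Double> seeds, int pulses, double decay) {
        Map<String, Double> act = new HashMap<>(seeds);
        for (int p = 0; p < pulses; p++) {
            Map<String, Double> next = new HashMap<>(act);
            for (Map.Entry<String, Double> node : act.entrySet())
                for (Map.Entry<String, Double> nbr
                         : edges.getOrDefault(node.getKey(), Map.of()).entrySet())
                    next.merge(nbr.getKey(), node.getValue() * nbr.getValue() * decay,
                               Double::sum);
            act = next;
        }
        return act;
    }

    public static void main(String[] args) {
        SpreadingActivation g = new SpreadingActivation();
        g.link("room", "table-as-furniture", 0.8);   // e.g. justified offline
        g.link("table-as-furniture", "furniture", 0.9);
        g.link("room", "table-as-negotiation", 0.1); // weak, rarely justified
        // Activating "room" ranks the furniture sense above the negotiation one.
        g.spread(Map.of("room", 1.0), 2, 0.5)
         .forEach((node, a) -> System.out.printf("%-22s %.3f%n", node, a));
    }
}

The released library adds the multi-threaded driver (the CyclicBarrier and
CountDownLatch mentioned above) on top of this basic pulse loop.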

A practice I recommend to you all is to improve Wikipedia articles on AI topics 
of interest.  Therefore I elaborated the existing article on spreading 
activation to include the algorithm and its variations.

Cheers.
-Steve
 
Stephen L. Reed

Artificial Intelligence Researcher
http://texai.org/blog
http://texai.org
3008 Oak Crest Ave.
Austin, Texas, USA 78704
512.791.7860





  




Re: [agi] Java spreading activation library released

2008-03-25 Thread Ben Goertzel
Hi Stephen,

I think this approach makes sense.

In Novamente/OpenCog, we don't use spreading activation, but we use an
economic attention-allocation mechanism that is similar in spirit (though
subtly different in dynamics).

The motivation is similar: you just can't use complex, abstract reasoning
methods for everything, because they're too expensive.  So this sort of
simple heuristic approach is useful in many cases, as an augmentation to
more precise methods.
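
To make the contrast concrete, here is a cartoon of the economic flavor
(a sketch of the general idea only -- not Novamente's actual mechanism,
and all constants are invented): atoms hold a budget of attention currency,
pay rent every cycle, and earn wages when they prove useful, so atoms that
stop being useful drift out of the attentional focus.

import java.util.HashMap;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

// Cartoon of economic attention allocation (a reading of the general idea;
// not Novamente's actual mechanism, and all constants are invented): atoms
// pay rent each cycle, earn wages when useful, and fall out of the
// attentional focus when their funds run low.
public class AttentionEconomy {
    static final double RENT = 1.0, WAGE = 5.0, FOCUS_THRESHOLD = 10.0;
    final Map<String, Double> funds = new HashMap<>();

    void stimulate(String atom) {                 // pay a wage for usefulness
        funds.merge(atom, WAGE, Double::sum);
    }

    void cycle() {                                // everyone pays rent
        funds.replaceAll((atom, f) -> Math.max(0.0, f - RENT));
    }

    Set<String> attentionalFocus() {              // atoms still worth attending to
        Set<String> focus = new TreeSet<>();
        funds.forEach((atom, f) -> { if (f >= FOCUS_THRESHOLD) focus.add(atom); });
        return focus;
    }

    public static void main(String[] args) {
        AttentionEconomy econ = new AttentionEconomy();
        econ.funds.put("table-as-furniture", 12.0);
        econ.funds.put("table-as-negotiation", 12.0);
        for (int t = 0; t < 5; t++) {
            econ.stimulate("table-as-furniture"); // only this sense stays useful
            econ.cycle();
        }
        System.out.println(econ.attentionalFocus()); // prints: [table-as-furniture]
    }
}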

-- Ben

On Tue, Mar 25, 2008 at 7:53 PM, Stephen Reed [EMAIL PROTECTED] wrote:

 While programming my bootstrap English dialog system, I needed a spreading
 activation library for the purpose of enriching the discourse context with
 conceptually related terms. [...]



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



RE: [agi] Novamente study

2008-03-25 Thread Derek Zahn
Ben Goertzel writes: "The PLN book should be out by that date ... I'm currently 
putting in some final edits to the manuscript...  Also, in April and May 
I'll be working on a lot of documentation regarding plans for OpenCog."
 
Thanks, I look forward to both of these.
 



Re: Mark Launches Singularity :-) WAS Re: [agi] Microsoft Launches Singularity

2008-03-25 Thread Richard Loosemore

Mark Waser wrote:
  Three factors will govern how the first AGI will behave.  First, 
there will be a strong incentive to build the first AGI as a 
non-aggressive, non-selfish creature.
 
Absolutely, positively not! 


I'm sorry, Mark, but I am completely baffled by this.

Perhaps it is because I was unable to keep up with the previous 
discussion.  Can you back up a little and explain the connection?



Richard Loosemore


Try the following Friendliness implementation on yourself. 
 
1.  The absolute hardest part
 
*Assume* (just for the purposes of argument) that all of the below are 
true tautologies

(only the top line is actually necessary  :-):
 
Selfish <--> Intelligent <--> Friendly <--> Plays Well With Others <--> Ethical

                                    ^
                                    |
                                    v

             Mark's Designed Friendly Religion of Ethics

                                    ^
                                    |
                                    v

Core of any given religion + Unethical/stupid add-ons <--> THE core of all religions
 
2.  Alter your personal definitions of the words/phrases so that each 
pair *IS* a tautology in your mind
(Please, feel free to e-mail me if you need help.  This can be 
*very* tough but with different sticking points for each person).
 
3.  See if you can use these tautologies to start mathematically proving 
things like:


* equal rights to life, liberty, and the pursuit of happiness are
  ethical! OR
* total heresy alert!  Richard Dawkins is absolutely, positively WRONG

4.  Then try proving that the following is ethical (and failing :-):

* individual right to property

5.  Wait about a week and watch your own personal effectiveness and 
happiness skyrocket.








Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Robin Gane-McCalla
Thanks Ben, this is a major help to those interested in AGI but who
aren't yet in the know.  It's a bit hard to follow this listserv
because there is no central place to search for terms I don't
understand.

On Tue, Mar 25, 2008 at 4:46 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Hi all,

  A lot of students email me asking me what to read to get up to speed on AGI.

  So I started a wiki page called Instead of an AGI Textbook,

  http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics

  [...]




-- 
Robin Gane-McCalla
YIM: Robin_Ganemccalla
AIM: Robinganemccalla



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Richard Loosemore

Ben Goertzel wrote:

Hi all,

A lot of students email me asking me what to read to get up to speed on AGI.

So I started a wiki page called Instead of an AGI Textbook,

http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics

Unfortunately I did not yet find time to do much but outline a table
of contents there.


Ben,

Unfortunately I cannot bring myself to believe this will help anyone new 
to the area.


The main reason is that this is only a miscellaneous list of topics, 
with nothing to indicate a comprehensive theory or a unifying structure. 
 I do not ask for a complete unified theory, of course, but something 
more than just a collection of techniques is needed if this is to be a 
textbook.


A second reason for being skeptical is that there is virtually no 
cognitive psychology in this list - just a smattering of odd topics.


As you know, I have argued elsewhere that keeping close to the design of 
the human mind is the *only* way to build an artificial general 
intelligence.  You completely disagree with this, and I respect your 
point of view, but given that there is at least one other AGI researcher 
(me) who believes that cognitive psychology is extremely significant, it 
seems bizarre that your list does not even include a comprehensive 
introduction to that field.  How could a new person who wanted to get 
into AGI make a judgment of the value of cognitive psychology if they 
had nothing but a superficial appreciation of it?


Finally, you said that Unfortunately, I don't have time to write such a 
textbook, and no one else with the requisite knowledge and ability seems 
to have the time and inclination either.


This is not correct:  I have been working on this for quite some time, 
and I believe I mentioned that on at least one occasion before (though 
apologies if my memory is at fault there).




Richard Loosemore



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
Richard,

  Unfortunately I cannot bring myself to believe this will help anyone new
  to the area.

  The main reason is that this is only a miscellaneous list of topics,
  with nothing to indicate a comprehensive theory or a unifying structure.
   I do not ask for a complete unified theory, of course, but something
  more than just a collection of techniques is needed if this is to be a
  textbook.



I have my own comprehensive theory and unifying structure for AGI...

Pei has his...

You have yours...

Stan Franklin has his...

Etc.

These have been published with varying levels of detail in various
places ... I'll be publishing more of mine this year, in the PLN book, and
then in the OpenCog documentation and plans ... but many of the
conceptual aspects of my approach were already mentioned in
The Hidden Pattern

My goal in Instead of an AGI Textbook is **not** to present anyone's
unifying theory (not even my own) but rather to give pointers to
**what information a student should learn, in order to digest the various
unifying theories being proposed**.

To put it another way: Aside from a strong undergrad background in CS
and good programming skills, what would I like someone to know about
in order for them to work on Novamente or OpenCog or
some other vaguely similar AI project?

Not everything in my suggested TOC is actually used in Novamente or OpenCog...
but even the stuff that isn't, is interesting to know about if you're
going to work
on these things, just to have a general awareness of the various approaches
that have been taken to these problems...

  A second reason for being skeptical is that there is virtually no
  cognitive psychology in this list - just a smattering of odd topics.

Yes, that's a fair point -- that's a shortcoming of the draft TOC as I
posted it.

Please feel free to add some additional, relevant cog psych topics
to the page ;-)

-- Ben



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
On Tue, Mar 25, 2008 at 9:39 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Richard,


Unfortunately I cannot bring myself to believe this will help anyone new
to the area.
  
The main reason is that this is only a miscellaneous list of topics,
with nothing to indicate a comprehensive theory or a unifying structure.

Actually it's not a haphazardly assembled miscellaneous list of topics
... it was
assembled with a purpose and structure in mind...

Specifically, I was thinking of OpenCog, and what it would be good for someone
to know in order to have a relatively full grasp of the OpenCog design.

As such, the topic list may contain stuff that is not relevant to your
AGI design,
and also may miss stuff that is critical to your AGI design...

But the non textbook is NOT intended as a presentation of OpenCog or any
other specific AGI theory or framework.  Rather, it is indeed,
largely, a grab bag
of relevant prerequisite information ... along with some information on specific
AGI approaches...

One problem I've found is that the traditional undergrad CS or AI education does
not actually give all the prerequisites for really grasping AGI
theories ... often
topics are touched in a particularly non-AGI-ish way ... for instance,
neural nets
are touched but complex dynamics in NN's are skipped ... Bayes nets are touched
but issues involving combining probability with more complex logic operations
are skipped ... neurons are discussed but theories of holistic brain function
are skipped ... etc.   The most AGI-relevant stuff always seems to get
skipped for
lack of time...!

ben



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Pei Wang
Ben,

It is a good start!

Of course everyone else will disagree --- like what Richard did and
I'm going to do. ;-)

I'll try to find the time to provide my list --- at this moment, it
will be more like a reading list than a textbook TOC. In the future,
it will be integrated into the E-book I'm working on
(http://nars.wang.googlepages.com/gti-summary).

Compared to yours, mine will contain less math and algorithms, but
more psychology and philosophy.

I'd like to see what Richard and others want to propose. We shouldn't
try to merge them into one wiki page, but rather keep several.

Pei


On Tue, Mar 25, 2008 at 7:46 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
 Hi all,

  A lot of students email me asking me what to read to get up to speed on AGI.

  So I started a wiki page called Instead of an AGI Textbook,

  [...]



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
  I'll try to find the time to provide my list --- at this moment, it
  will be more like a reading list than a textbook TOC.

That would be great -- however I may integrate your reading
list into my TOC ... as I really think there is value in a structured
and categorized reading list rather than just a list...

I know every researcher will have their own foci, but I'm going
to try to unify different researchers' suggestions into a single
TOC with a sensible organization, because I would like to cut
through the confusion faced by students starting out in this
field of research...

ben



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Aki Iskandar
Thanks Ben.  AGI is a daunting field to say the least.  Many scientific
domains are involved in various degrees.  I am very happy to see  something
like this, because knowing where to start is not so obvious for the
beginner.  I actually recently purchased Artificial Intelligence: A Modern
Approach - but only because I did not know where else to start.  I have the
programming down - but, like most others, I don't know *what* to program.

I really hope that others will contribute to your TOC.  In fact, I am
willing to put up and host an AGI Wiki if this community would find it of
use.  I'd need a few weeks - because I don't have the time right now - but
it is a worthwhile endeavor, and I'm happy to do it.

~Aki



On Tue, Mar 25, 2008 at 6:46 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

 Hi all,

 A lot of students email me asking me what to read to get up to speed on
 AGI.

 So I started a wiki page called Instead of an AGI Textbook,

 [...]




-- 
Aki R. Iskandar
[EMAIL PROTECTED]



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
Yeah, the AGIRI wiki has been there for years ... the hard thing is
getting people
to contribute to it (and I myself rarely find the time...)

But if others don't chip in, I'll complete my little non-textbook
myself sometime w/in
the next month ...

-- Ben

On Tue, Mar 25, 2008 at 10:52 PM, Aki Iskandar [EMAIL PROTECTED] wrote:
 Ok - that was silly of me.  After visiting the link (which was after I sent
 the email), I noticed that it WAS a wiki.

 My apologies.

 ~Aki


 On Tue, Mar 25, 2008 at 9:47 PM, Aki Iskandar [EMAIL PROTECTED] wrote:

  Thanks Ben.  AGI is a daunting field to say the least. [...]



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
  I actually recently purchased Artificial Intelligence: A Modern
 Approach - but only because I did not know where else to start.

It's a very good book ... if you view it as providing insight into various
component technologies of potential use for AGI ... rather than as saying
very much directly about AGI...

 I have the
 programming down - but, like most others, I don't know *what* to program.

Well I hope to solve that problem in May -- via releasing the initial version
of OpenCog, plus a load of wiki pages indicating stuff that, IMO, if
implemented,
tuned and tested would allow OpenCog to be turned into a powerful AGI
system ;-)

-- Ben






Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Aki Iskandar
Ok - that was silly of me.  After visiting the link (which was after I sent
the email), I noticed that it WAS a wiki.

My apologies.

~Aki


On Tue, Mar 25, 2008 at 9:47 PM, Aki Iskandar [EMAIL PROTECTED] wrote:

 Thanks Ben.  AGI is a daunting field to say the least.  Many scientific
 domains are involved in various degrees.  I am very happy to see something
 like this, because knowing where to start is not so obvious for the
 beginner. [...]




-- 
Aki R. Iskandar
[EMAIL PROTECTED]



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Aki Iskandar
Hi Pei -

What about having a tree-like diagram that branches out into either:
- the different paths / approaches to AGI (for instance: NARS, Novamente,
and Richard's, etc.), with suggested readings at those leaves
- area of study, with suggested readings at those leaves

Or possibly, a Mind Map diagram that shows AGI in the middle, with the
approaches stemming from it, and then either sub fields, or a reading list
and / or collection of links (though the links may become outdated, dead).

Point is, would a diagram help map the field - one which caters to the
differing approaches, and which helps those wanting to chart a course for
their own learning/study?

Thanks,
~Aki


On Tue, Mar 25, 2008 at 9:22 PM, Pei Wang [EMAIL PROTECTED] wrote:

 Ben,

 It is a good start!

 Of course everyone else will disagree --- like what Richard did and
 I'm going to do. ;-)

 [...]




-- 
Aki R. Iskandar
[EMAIL PROTECTED]



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Aki Iskandar
Thanks Ben.  That is really exciting stuff / news.  I'm looking forward to
OpenCog.

BTW - is OpenCog mainly in C++ (like Novamente)?  Or is it translations (to
Java, or other languages) of concepts so that others can code and add to it
more readily and quickly?

Thanks,
~Aki

On Tue, Mar 25, 2008 at 9:58 PM, Ben Goertzel [EMAIL PROTECTED] wrote:

   I actually recently purchased Artificial Intelligence: A Modern
  Approach - but only because I did not know where else to start.

 It's a very good book ... if you view it as providing insight into various
 component technologies of potential use for AGI ... rather than as saying
 very much directly about AGI... [...]

 Well I hope to solve that problem in May -- via releasing the initial
 version
 of OpenCog, plus a load of wiki pages indicating stuff that, IMO, if
 implemented,
 tuned and tested would allow OpenCog to be turned into a powerful AGI
 system ;-)

 -- Ben




-- 
Aki R. Iskandar
[EMAIL PROTECTED]



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Pei Wang
On Tue, Mar 25, 2008 at 11:02 PM, Aki Iskandar [EMAIL PROTECTED] wrote:
 Hi Pei -

 What about having a tree-like diagram that branches out into either:
 - the different paths / approaches to AGI (for instance: NARS, Novamente,
 and Richard's, etc.), with suggested readings at those leaves
  - area of study, with suggested readings at those leaves

Yes, that is what I like. I know Ben would rather stress the
similarity of the approaches, and merge all the lists into one, but
I'd rather keep the differences visible. One reason is that otherwise
the list will be too long for anyone to follow.

 Or possibly, a Mind Map diagram that shows AGI in the middle, with the
 approaches stemming from it, and then either sub fields, or a reading list
 and / or collection of links (though the links may become outdated, dead).

 Point is, would a diagram help map the field - one which caters to the
 differing approaches, and which helps those wanting to chart a course for
 their own learning/study?

In principle, yes, but a diagram with a lot of text in it tends to look
confusing. After we get the lists, you can play with them to see what
is the best way to show the information.

Thanks,

Pei

 Thanks,
 ~Aki




  On Tue, Mar 25, 2008 at 9:22 PM, Pei Wang [EMAIL PROTECTED] wrote:
 
 
 
  Ben,
 
  It is a good start!
 
  Of course everyone else will disagree --- as Richard did, and as
  I'm going to do. ;-)
 
  I'll try to find the time to provide my list --- at this moment, it
  will be more like a reading list than a textbook TOC. In the future,
  it will be integrated into the E-book I'm working on
  (http://nars.wang.googlepages.com/gti-summary).
 
  Compared to yours, mine will contain less math and algorithms, but
  more psychology and philosophy.
 
  I'd like to see what Richard and others want to propose. We shouldn't
  try to merge them into one wiki page; we should keep several.
 
  Pei
 
 
 
 
 
  On Tue, Mar 25, 2008 at 7:46 PM, Ben Goertzel [EMAIL PROTECTED] wrote:
   Hi all,
  
A lot of students email me asking me what to read to get up to speed on
 AGI.
  
   So I started a wiki page called "Instead of an AGI Textbook",
  
  
 http://www.agiri.org/wiki/Instead_of_an_AGI_Textbook#Computational_Linguistics
  
   Unfortunately I have not yet found time to do much but outline a table
   of contents there.
  
So I'm hoping some of you can chip in and fill in some relevant
hyperlinks on the pages
I've created ;-)
  
For those of you too lazy to click the above link, here is the
introductory note I put on the wiki page:
  
  

  
I've often lamented the fact that there is no advanced undergrad level
textbook for AGI, analogous to what Russell and Norvig is for Narrow
AI.
  
Unfortunately, I don't have time to write such a textbook, and no one
else with the requisite knowledge and ability seems to have the time
and inclination either.
  
So, instead of a textbook, I thought it would make sense to outline
here what the table of contents of such a textbook might look like,
and to fill in each section within each chapter in this TOC with a few
links to available online resources dealing with the topic of the
section.
  
However, all I found time to do today (March 25, 2008) is make the
TOC. Maybe later I will fill in the links on each section's page, or
maybe by the time I get around it some other folks will have done it.
  
While nowhere near as good as a textbook, I do think this can be a
valuable resource for those wanting to get up to speed on AGI concepts
and not knowing where to turn to get started. There are some available
AGI bibliographies, but a structured bibliography like this can
probably be more useful than an unstructured and heterogeneous one.
  
Naturally my initial TOC represents some of my own biases, but I trust
that by having others help edit it, these biases will ultimately come
out in the wash.
  
Just to be clear: the idea here is not to present solely AGI material.
Rather the idea is to present material that I think students would do
well to know, if they want to work on AGI. This includes some AGI,
some narrow AI, some psychology, some neuroscience, some mathematics,
etc.
  
***
  
  
-- Ben
  
  
--
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]
  
If men cease to believe that they will one day become gods then they
will surely become worms.
-- Henry Miller
  

Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Pei Wang
Agree.

Pei

On Tue, Mar 25, 2008 at 11:33 PM, Aki Iskandar [EMAIL PROTECTED] wrote:
 Sounds good Pei - thanks.  Multiple lists are definitely a great start for
 stressing differences, and a companion master list to stress similarities
 would also be helpful.

 Everyone learns differently - and though a master list may seem
 intimidating, it may better represent breadth, while several distinct lists
 may better convey a cohesive structure.  Having both seems to make
 sense.

 ~Aki



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Aki Iskandar
Sounds good Pei - thanks.  Multiple lists are definitely a great start for
stressing differences, and a companion master list to stress similarities
would also be helpful.

Everyone learns differently - and though a master list may seem
intimidating, it may better represent breadth, while several distinct lists
may better convey a cohesive structure.  Having both seems to make
sense.

~Aki


Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
On Tue, Mar 25, 2008 at 11:07 PM, Aki Iskandar [EMAIL PROTECTED] wrote:
 Thanks Ben.  That is really exciting stuff / news.  I'm looking forward to
 OpenCog.

 BTW - is OpenCog mainly in C++ (like Novamente)?  Or are there translations
 (to Java, or other languages) of the concepts so that others can code and add
 to it more readily and quickly?

Yes, the OpenCog core system is C++, though there are some peripheral
code libraries (e.g. the RelEx natural language preprocessor) which are in
Java...

ben
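
To illustrate the kind of boundary Ben describes, here is a rough C++ sketch
of one common way a C++ core can use a Java peripheral: run it as a
subprocess and read its output over a pipe. This is only a sketch of the
general pattern under invented assumptions; the jar name and command line
below are hypothetical and are not RelEx's actual interface.

    #include <cstdio>
    #include <iostream>
    #include <stdexcept>
    #include <string>

    // Hypothetical example: pipe a sentence through a Java-based
    // preprocessor running as a subprocess, and collect whatever text it
    // prints. "preprocessor.jar" is an invented name, not a real artifact.
    std::string runJavaPreprocessor(const std::string& sentence) {
        // Naive quoting: fine for a sketch, not for untrusted input.
        const std::string cmd =
            "echo \"" + sentence + "\" | java -jar preprocessor.jar";
        FILE* pipe = popen(cmd.c_str(), "r");  // POSIX
        if (!pipe) throw std::runtime_error("failed to start subprocess");
        std::string output;
        char buf[256];
        while (fgets(buf, sizeof(buf), pipe) != nullptr)
            output += buf;
        pclose(pipe);
        return output;
    }

    int main() {
        std::cout << runJavaPreprocessor("The cat sat on the mat.") << '\n';
    }

In practice a long-running peripheral would be kept alive and fed over a
socket or pipes rather than relaunched per sentence, but the division of
labor between the C++ core and the Java library is the same.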



Re: [agi] Instead of an AGI Textbook

2008-03-25 Thread Ben Goertzel
This kind of diagram would certainly be meaningful, but it would be a
lot of work to put together, even more so than a traditional TOC ...
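
That said, one low-effort version of the diagram is to generate it rather
than draw it: emit Graphviz DOT text from whatever lists exist and let the
layout engine do the work. A rough sketch follows, with all function and
variable names invented for illustration (only the DOT syntax and the
dot/twopi tools are real):

    #include <iostream>
    #include <string>
    #include <vector>

    // Emit a Graphviz DOT mind map: "AGI" in the middle, one node per
    // approach, connected by undirected edges. Pipe the output to
    // `dot -Tpng` (or `twopi` for a radial layout).
    void emitMindMap(const std::vector<std::string>& approaches) {
        std::cout << "graph agi_map {\n";
        std::cout << "  AGI [shape=doublecircle];\n";
        for (const auto& a : approaches)
            std::cout << "  AGI -- \"" << a << "\";\n";
        std::cout << "}\n";
    }

    int main() {
        emitMindMap({"NARS", "Novamente", "Richard's approach"});
    }

Reading lists could then hang off each approach as further nodes, which is
exactly where Pei's worry about too much text in one picture starts to bite.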

On Tue, Mar 25, 2008 at 11:02 PM, Aki Iskandar [EMAIL PROTECTED] wrote:
 Hi Pei -

 What about having a tree-like diagram that branches out into either:
 - the different paths/approaches to AGI (for instance: NARS, Novamente,
 and Richard's, etc.), with suggested readings at those leaves
 - areas of study, with suggested readings at those leaves

 Or possibly, a Mind Map diagram that shows AGI in the middle, with the
 approaches stemming from it, and then either subfields, or a reading list
 and/or a collection of links (though the links may become outdated or dead).

 Point is, would a diagram help map the field - one which caters to the
 differing approaches, and which helps those wanting to chart a course for
 their own learning/study?

 Thanks,
 ~Aki







-- 
Ben Goertzel, PhD