Re: [singularity] Vista/AGI

2008-04-14 Thread Ben Goertzel
Brain-scan accuracy is a very crude proxy for understanding of brain
function; yet it is a much better proxy than anything that exists for the case
of AGI...

On Sun, Apr 13, 2008 at 11:37 PM, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>
> Ben Goertzel wrote:
>
> > Hi,
> >
> >
> > >  Just my personal opinion...but it appears that the "exponential
> technology
> > > growth chart", which is used in many of the briefings, does not include
> > > AI/AGI. It is processing centric.  When you include AI/AGI the
> "exponential
> > > technology curve" flattens out in the coming years (5-7) and becomes
> part of
> > > a normal S curve of development.  While computer power and processing
> will
> > > increase exponentially (as nanotechnology grows) the area of AI will
> need
> > > more time to develop.
> > >
> > >  I would be interested in your thoughts.
> > >
> >
> > I think this is because progress toward general AI has been difficult
> > to quantify
> > in the past, and looks to remain difficult to quantify into the future...
> >
> > I am uncertain as to the extent to which this problem can be worked
> around,
> > though.
> >
> > Let me introduce an analogy problem
> >
> > "Understanding the operation of the brain better and better" is to
> > "scanning the brain with higher and higher spatiotemporal accuracy",
> > as "Creating more and more powerful AGI" is to what?
> >
> > ;-)
> >
> > The point is that understanding the brain is also a nebulous and
> > hard-to-quantify goal, but we make charts for it by treating "brain
> > scan accuracy" as a more easily quantifiable proxy variable.  What's a
> > comparable proxy variable for AGI?
> >
> > Suggestions welcome!
> >
>
>  Sadly, the analogy is a wee bit broken.
>
>  Brain scan accuracy as a measure of progress in understanding the operation
> of the brain is a measure that some cognitive neuroscientists may subscribe
> to, but the majority of cognitive scientists outside of that area consider
> this to be a completely spurious idea.
>
>  Doug Hofstadter said this eloquently in "I Am A Strange Loop":  getting a
> complete atom-scan in the vicinity of a windmill doesn't mean that you are
> making progress toward understanding why the windmill goes around. It just
> gives you a data analysis problem that will keep you busy until everyone in
> the Hot Place is eating ice cream.
>
>
>
>
>  Richard Loosemore
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
Hi,

>  Just my personal opinion...but it appears that the "exponential technology
> growth chart", which is used in many of the briefings, does not include
> AI/AGI. It is processing centric.  When you include AI/AGI the "exponential
> technology curve" flattens out in the coming years (5-7) and becomes part of
> a normal S curve of development.  While computer power and processing will
> increase exponentially (as nanotechnology grows) the area of AI will need
> more time to develop.
>
>  I would be interested in your thoughts.
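For concreteness, one standard way to write the contrast drawn above -- assuming
the usual logistic form for an S-curve, which the message itself does not specify,
so this is purely an illustrative sketch -- is:

\[
  f_{\mathrm{exp}}(t) = f_0 \, e^{k t},
  \qquad
  f_{\mathrm{S}}(t) = \frac{L}{1 + e^{-k (t - t_0)}}
\]

The logistic curve tracks the exponential closely at first and then flattens out
as it approaches the ceiling L, which is the "flattening" described above once
AI/AGI is factored into the chart.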

I think this is because progress toward general AI has been difficult
to quantify
in the past, and looks to remain difficult to quantify into the future...

I am uncertain as to the extent to which this problem can be worked around,
though.

Let me introduce an analogy problem

"Understanding the operation of the brain better and better" is to
"scanning the brain with higher and higher spatiotemporal accuracy",
as "Creating more and more powerful AGI" is to what?

;-)

The point is that understanding the brain is also a nebulous and
hard-to-quantify goal, but we make charts for it by treating "brain
scan accuracy" as a more easily quantifiable proxy variable.  What's a
comparable proxy variable for AGI?

Suggestions welcome!

-- Ben



Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
>  I don't think any reasonable person in AI or AGI will claim any of these
> have been solved. They may want to claim their method has promise, but not
> that it has actually solved any of them.

Yes -- it is true, we have not created a human-level AGI yet.  No serious
researcher disagrees.  So why is it worth repeating the point?

Similarly, up till the moment when the first astronauts walked on the moon,
you could have run around yelping that "no one has solved the problem of
how to make a person walk on the moon, all they've done is propose methods
that seem to have promise."

It's true -- theories and ideas can always be wrong, and empirical proof adds
a whole new level of understanding.  (Though empirical proofs don't exist
in a theoretical vacuum; they do require theoretical interpretation.
For instance, physicists don't agree on which supposed "top quark events"
really were top quarks ... and some nuts still don't believe people walked on
the moon, just as, even after human-level AGI is achieved, some nuts still
won't believe it...)

Nevertheless, with something as complex as AGI you gotta build stuff based
on a theory.  And not everyone is going to believe the theory until the proof
is there.  And so it goes...

-- Ben G



Re: [singularity] Vista/AGI

2008-04-13 Thread Ben Goertzel
Samantha,

>  You know, I am getting pretty tired of hearing this poor mouth crap.   This
> is not that huge a sum to raise or get financed.  Hell, there are some very
> futuristic rich geeks who could finance this single-handed and would not
> really care that much whether they could somehow monetize the result.   I
> don't believe for a minute that there is no way to do this.  So exactly
> why are you singing this sad song year after year?
...
>  From what you said above $50M will do the entire job.   If that is all that
> is standing between us and AGI then surely we can get on with it in all
> haste.   If it is a great deal more than this relatively small amount of
> money then let's move on to talk about that instead of whining about lack of
> coin.


This is what I thought in 2001, and what Bruce Klein thought when he started
working with me in 2005.

In brief, what we thought is something like:

"
OK, so  ...

On the one hand, we have an AGI design that seems to its sane PhD-scientist
creator to have serious potential of leading to human-level AGI.  We have
a team of professional AI scientists and software engineers who are
a) knowledgeable about it, b) eager to work on it, c) in agreement that
it has a strong chance of leading to human-level AGI, although with
varying opinions on whether the timeline is, say, 7, 10, 15 or 20 years.
Furthermore, the individuals involved are at least thoughtful about issues
of AGI ethics and the social implications of their work.  Carefully detailed
arguments exist as to why it is believed the AGI design will work, but
these are complex, and furthermore do not constitute any sort of irrefutable
proof.

On the other hand, we have a number of wealthy transhumanists who would
love to see a beneficial human-level AGI come about, and who could
donate or invest some $$ to this cause without serious risk to their own
financial stability should the AGI effort fail.

Not only that, but there are a couple related factors

a) early non-AGI versions of some of the components of said AGI design
are already being used to help make biological discoveries relevant
to life extension (as documented in refereed publications)

b) very clear plans exist, including discussions with many specific potential
customers, regarding how to make $$ from incremental products along the
way to the human-level AGI, if this is the pathway desired
"

So, we talked to a load of wealthy futurists and the upshot is that it's really
really hard to get these folks to believe you have a chance at achieving
human-level AGI.  These guys don't have the background to spend 6 months
carefully studying the technical documentation, so they make a gut decision,
which is always (so far) that "gee, you're a really smart guy, and your team
is great, and you're doing cool stuff, but technology just isn't there yet."

Novamente has gotten small (but much valued)
investments from some visionary folks, and SIAI has
had the vision to hire 1.6 folks to work on OpenCog, which is an
open-source sister project of the Novamente Cognition Engine project.

I could speculate about the reasons behind this situation, but the reason is NOT
that I suck at raising money ... I have been involved in fundraising
for commercial
software projects before and have been successful at it.

I believe that 10-15 years from now, one will be able to approach the exact
same people with the same sort of project, and get greeted with enthusiasm
rather than friendly dismissal.  Going against prevailing culture is really
hard, even if you're dealing with people who **think** they're seeing beyond
the typical preconceptions of their culture.  Slowly, though, the idea that
AGI is possible and feasible is wending its way into the collective mind.

I stress, though, that if one had some kind of convincing, compelling **proof**
of being on the correct path to AGI, it would likely be possible to raise $$
for one's project.  This proof could be in several possible forms, e.g.

a) a mathematical proof, which was accepted by a substantial majority
of AI academics

b) a working software program that demonstrated human-child-like
functionality

c) a working robot that demonstrated full dog-like functionality

Also, if one had good enough personal connections with the right sort
of wealthy folks, one could raise the $$ -- based on their personal trust
in you rather than their trust in your ideas.

Or of course, being rich and funding your work yourself is always an
option (cf Jeff Hawkins)

This gets back to a milder version of an issue Richard Loosemore is
always raising: the complex systems problem.  My approach to AGI
is complex-systems-based, which means that the components are NOT
going to demonstrate any general intelligence -- the GI is intended
to come about as a holistic, whole-system phenomenon.  But not in any
kind of mysterious way: we have a detailed, specific theory of why
this will occur, in terms of the particular interactions between the
components ...

Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Ben Goertzel
>  Of course what I imagine emerging from the Internet bears little resemblance
>  to Novamente.  It is simply too big to invest in directly, but it will 
> present
>  many opportunities.

But the emergence of superhuman AGIs, like what a Novamente may eventually
become, will both dramatically alter the nature of, and dramatically reduce
the cost of, "global brains" such as you envision...

ben g



Re: Promoting AGI (RE: [singularity] Vista/AGI)

2008-04-08 Thread Ben Goertzel
Well, Matt and I are talking about building totally different kinds of
systems...

I believe the system he wants to build would cost a huge amount ...
but I don't think
it's the most interesting sorta thing to build ...

A decent analogue would be spaceships.  All sorts of designs exist, some orders
of magnitude more complex and expensive than others.  It's more
practical to build
the cheaper ones, esp. when they're also more powerful ;-p

ben

On Tue, Apr 8, 2008 at 10:56 PM, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
> If I understand what I have read in this thread so far, there is Ben on the
> one hand suggesting $10 mil. with 10-30 people in 3 to 10 years and on the
> other there is Matt saying $1 quadrillion, using a billion brains in 30
> years. I don't believe I have ever seen such a divergence of opinion before
> on what is required for a technological breakthrough (unless people are not
> being serious and I am being naive). I suppose this sort of non-consensus
> on such a scale could be part of investor reticence.
>
> Eric B. Ramsay
>
> Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
>
> --- Mike Tintner wrote:
>
> > Matt : a super-google will answer these questions by routing them to
> > experts on these topics that will use natural language in their narrow
> > domains of expertise.
> >
> > And Santa will answer every child's request, and we'll all live happily
> ever
> > after. Amen.
>
> If you have a legitimate criticism of the technology or its funding plan, I
> would like to hear it. I understand there will be doubts about a system I
> expect to cost over $1 quadrillion and take 30 years to build.
>
> The protocol specifies natural language. This is not a hard problem in
> narrow
> domains. It dates back to the 1960's. Even in broad domains, most of the
> meaning of a message is independent of word order. Google works on this
> principle.
>
> But this is beside the point. The critical part of the design is an
> incentive
> for peers to provide useful services in exchange for resources. Peers that
> appear most intelligent and useful (and least annoying) are most likely to
> have their messages accepted and forwarded by other peers. People will
> develop domain experts and routers and put them on the net because they can
> make money through highly targeted advertising.
>
> Google would be a peer on the network with a high reputation. But Google
> controls only 0.1% of the computing power on the Internet. It will have to
> compete with a system that allows updates to be searched instantly, where
> queries are persistent, and where a query or message can initiate
> conversations with other people in real time.
>
> > Which are these areas of science, technology, arts, or indeed any area of
> > human activity, period, where the experts all agree and are NOT in deep
> > conflict?
> >
> > And if that's too hard a question, which are the areas of AI or AGI, where
> > the experts all agree and are not in deep conflict?
>
> I don't expect the experts to agree. It is better that they don't. There are
> hard problems remaining to be solved in language modeling, vision, and
> robotics. We need to try many approaches with powerful hardware. The network
> will decide who the winners are.
>
>
> -- Matt Mahoney, [EMAIL PROTECTED]
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-08 Thread Ben Goertzel
This is part of the idea underlying OpenCog (opencog.org), though it's
being done
in a nonprofit vein rather than commercially...

On Tue, Apr 8, 2008 at 1:55 AM, John G. Rose <[EMAIL PROTECTED]> wrote:
> Just a thought, maybe there are some commonalities across AGI designs where
>  components could be built at a lower cost. An investor invests in the
>  company that builds component x that is used by multiple AGI projects. Then
>  you have your little AGI ecosystem of companies all competing yet
>  cooperating. After all, we need to get the Singularity going ASAP so that we
>  can upload before inevitable biologic death? I prefer not to become
>  nano-dust I'd rather keep this show a rockin' capiche?
>
>  So it's like this - need standards. Somebody go bust out an RFC. Or is there
>  work done on this already like is there a CogML? I don't know if the
>  Semantic Web is going to cut the mustard... and the name "Semantic Web" just
>  doesn't have that ring to it. Kinda reminds me of the MBone - names really
>  do matter. Then who's the numnutz that came up with "Web 3 dot oh" geezss!
>
>  John
>
>
>
>  > -Original Message-
>  > From: Matt Mahoney [mailto:[EMAIL PROTECTED]
>  > Sent: Monday, April 07, 2008 7:07 PM
>  > To: singularity@v2.listbox.com
>
>
> > Subject: Re: [singularity] Vista/AGI
>  >
>  > Perhaps the difficulty in finding investors in AGI is that among people
>  > most
>  > familiar with the technology (the people on this list and the AGI list),
>  > everyone has a different idea on how to solve the problem.  "Why would I
>  > invest in someone else's idea when clearly my idea is better?"
>  >
>  >
>  > -- Matt Mahoney, [EMAIL PROTECTED]
>  >
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
Funny dispute ... "is AGI about mathematics or science"

I would guess there are some approaches to AGI that are only minimally
mathematical in their design concepts (though of course math could be
used to explain their behavior)

Then there are some approaches, like Novamente, that mix mathematics
with less rigorous ideas in an integrative design...

And then there are more purely mathematical approaches -- I haven't
seen any that are well enough fleshed out to constitute pragmatic AGI
designs... but I can't deny the possibility.

I wonder why some people think there is "one true path" to AGI ... I
strongly suspect there are many...

-- Ben


On Sun, Apr 6, 2008 at 9:16 PM, J. Andrew Rogers
<[EMAIL PROTECTED]> wrote:
>
>  On Apr 6, 2008, at 4:46 PM, Richard Loosemore wrote:
>
>
> > J. Andrew Rogers wrote:
> >
> > > The fact that the vast majority of AGI theory is pulled out of /dev/ass
> notwithstanding, your above characterization would appear to reflect your
> limitations which you have chosen to project onto the broader field of AGI
> research.  Just because most AI researchers are misguided fools and you do
> not fully understand all the relevant theory does not imply that this is a
> universal (even if it were).
> > >
> >
> > Ad hominem.  Shameful.
> >
>
>
>  Ad hominem?  Well, of sorts I suppose, but in this case it is the substance
> of the argument so it is a reasonable device.  I think I have met more AI
> cranks with hare-brained pet obsessions with respect to the topic or
> academics that are beating a horse that died thirty years ago than AI
> researchers that are actually keeping current with the subject matter.
> Pointing out the embarrassing foolishness of the vast number of those that
> claim to be "AI researchers" and how it colors the credibility of the entire
> field is germane to the discussion.
>
>  As for you specifically, assertions like "Artificial Intelligence research
> does not have a credible science behind it" in the absence of substantive
> support (now or in the past) can only lead me to believe that you either are
> ignorant of relevant literature (possible) or you do not understand all the
> relevant literature and simply assume it is not important.   As far as I
> have ever been able to tell, theoretical psychology re-heats a very old idea
> while essentially ignoring or dismissing out of hand more recent literature
> that could provide considerable context when (re-)evaluating the notion.
> This is a fine example of part of the problem we are talking about.
>
>
>
> > AGI *is* mathematics?
> >
>
>
>  Yes, applied mathematics.  Is there some other kind of non-computational
> AI?  The mathematical nature of the problem does not disappear when you wrap
> it in fuzzy abstractions it just gets, well, fuzzy.  At best the science can
> inform your mathematical model, but in this case the relevant mathematics is
> ahead of the science for most purposes and the relevant science is largely
> working out the specific badly implemented wetware mapping to said
> mathematics.
>
>
>
>
> > I'm sorry, but if you can make a statement such as this, and if you are
> already starting to reply to points of debate by resorting to ad hominems,
> then it would be a waste of my time to engage.
> >
>
>
>  Probably a waste of my time as well if you think this is primarily a
> science problem in the absence of a discernible reason to characterize it as
> such.
>
>
>
>
> > I will just note that if this point of view is at all widespread - if
> there really are large numbers of people who agree that "AGI is mathematics,
> not science"  -  then this is a perfect illustration of just why no progress
> is being made in the field.
> >
>
>
>  Assertions do not manufacture fact.
>
>
>  J. Andrew Rogers
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
On Sun, Apr 6, 2008 at 4:42 PM, Derek Zahn <[EMAIL PROTECTED]> wrote:
>
>
>  I would think an investor would want a believable specific answer to the
> following question:
>
>  "When and how will I get my money back?"
>
>  It can be uncertain (risk is part of the game), but you can't just wave
> your hands around on that point.

This is not the problem ... regarding Novamente, we have an extremely
specific business plan and details regarding how we would provide return
on investment.

The problem is that investors are generally pretty unwilling to eat perceived
technology risk.  Exceptions arise all the time, but AGI has not yet been one of them.

It is an illusion that VC or angel investors are fond of risk ...
actually they are
quite risk-averse in nearly all cases...

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
>  I know personally (and have met with) a number of folks who
>
>  -- could invest a couple million $$ in NM without it impacting their
>  lives at all
>
>  -- are deeply into the Singularity and AGI and related concepts
>
>  -- appear to personally like and respect me and others in the NM team
>
>  But, after spending about 1.5 years courting these sorts of folks,
>  Bruce and I largely
>  gave up and decided to focus on other avenues.

Just to be clear: these individuals have not funded any other AI projects
either ... so, it's not a matter of them disliking some particulars of the NM
project or the team as compared to others...

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
On Sun, Apr 6, 2008 at 12:21 PM, Eric B. Ramsay <[EMAIL PROTECTED]> wrote:
> Ben:
> I may be mistaken, but it seems to me that AGI today in 2008 is "in the air"
> again after 50 years.

Yes

>You are not trying to present a completely novel and
> unheard of idea and with today's crowd of sophisticated angel investors I am
> surprised that no one bites given the modest sums involved. BTW I was not
> trying to give needless advice, just finishing my thoughts. I already took
> it as a given that you look for funding. I am trying to understand why no
> one bites. It's not as if there are a hundred different AGI efforts out
> there to choose from.

I don't fully understand it myself, but it's a fact.

To be clear: I understand why VCs and big companies don't want to fund
NM.

VCs are in a different sort of business ...

and big companies are either focused on the short term, or else have their
own research groups who don't want a bunch of upstart outsiders to get their
research $$ ...

But what vexes me a bit is that none of the many wealthy futurists out
there have been
interested in funding NM extensively, either on an angel investment
basis, or on a
pure nonprofit donation basis (and we have considered doing NM as a nonprofit
before, though right now that's not our focus as the virtual-pets biz
opp seems so
grand...)

I know personally (and have met with) a number of folks who

-- could invest a couple million $$ in NM without it impacting their
lives at all

-- are deeply into the Singularity and AGI and related concepts

-- appear to personally like and respect me and others in the NM team

But, after spending about 1.5 years courting these sorts of folks,
Bruce and I largely
gave up and decided to focus on other avenues.

I have some psychocultural theories as to why things are this way, but
nothing too
solid...

>I am surprised that the reason may only be that the
> project isn't far enough along (too immature) given the historical
> precedents of what investors have ponied up money for before.

That's surely part of it ... but investors have put big $$ into much LESS
mature projects in areas such as nanotech and quantum computing.

AGI arouses an irrational amount of skepticism, compared to these other
futurist technologies, it seems to me.  I suppose this is partly because
there have been more "false starts" toward AI in the past.

-- Ben



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
> If the concept behind Novamente is truly compelling enough, it
> should be no problem to make a successful pitch.
>
> Eric B. Ramsay

Gee ... you mean, I could pitch the idea of funding Novamente to
people with money??  I never thought of that!!  Thanks for the
advice ;-pp

Evidently, the concept behind Novamente is not "truly compelling
enough" to the casual observer,
as we have failed to attract big-bucks backers so far...

Many folks we've talked to are interested in what we're doing but
it seems we'll have to get further toward the end goal in order to
overcome their AGI skepticism...

Part of the issue is that the concepts underlying NM are both
complex and subtle, not lending themselves all that well to
"elevator pitch" treatment ... or even "PPT summary" treatment
(though there are summaries in both PPT and conference-paper
form).

If you think that's a mark against NM, consider this: What's your
elevator-pitch description of how the human brain works?  How
about the human body?  Businesspeople favor the simplistic, yet
the engineering of complex cognitive systems doesn't match well
with this bias.

Please note that many successful inventors in history have had
huge trouble getting financial backing, although in hindsight
we find their ideas "truly compelling."  (And, many failed inventors
with terrible ideas have also had huge trouble getting financial
backing...)

-- Ben G



Re: [singularity] Vista/AGI

2008-04-06 Thread Ben Goertzel
Much of this discussion is very abstract, which is I guess how you think about
these issues when you don't have a specific AGI design in mind.

My view is a little different.

If the Novamente design is basically correct, there's no way it can possibly
take thousands or hundreds of programmers to implement it.  The most I
can imagine throwing at it would be a couple dozen, and I think 10-20 is
the right number.

So if the Novamente design is basically correct, it would take a team of
10-20 programmers a period of 3-10 years to get to human-level AGI.

Sadly, we do not have 10-20 dedicated programmers working on Novamente
(or associated OpenCog) AGI right now, but rather fractions of various people's
time (as Novamente LLC is working mainly on various commercial projects
that pay our salaries).  So my point is not to make a projection regarding our
progress (that depends too much on funding levels), just to address this issue
of ideal team size that has come up yet again...

Even if my timing estimates are optimistic and it were to take 15 years, even
so, a team of thousands isn't gonna help things any.

If I had a billion dollars and the passion to use it to advance AGI, I would
throw amounts between $1M and $50M at various specific projects, I
wouldn't try to make one monolithic project.

This is based on my bias that AGI is best approached, at the current time,
by focusing on software not specialized hardware.

One of the things I like about AGI is that a single individual or a
small team CAN
"just do it" without need for massive capital investment in physical
infrastructure.

It's tempting to get into specialized hardware for AGI, and we may want to
at some point, but I think it makes sense to defer that until we have a very
clear idea of exactly what AGI design needs the hardware, and strong prototype
results of some sort indicating why this AGI design will work on that hardware.
My suspicion is that we can get to human-level AGI without any special hardware,
though special hardware will certainly be able to accelerate things after that.

-- Ben G




On Sun, Apr 6, 2008 at 7:22 AM, Samantha Atkins <[EMAIL PROTECTED]> wrote:
> Arguably many of the problems of Vista including its legendary slippages
> were the direct result of having thousands of merely human programmers
> involved.   That complex monkey interaction is enough to kill almost
> anything interesting. 
>
>  - samantha
>
>  Panu Horsmalahti wrote:
>
> >
> > Just because it takes thousands of programmers to create something as
> complex as Vista, does *not* mean that thousands of programmers are required
> to build an AGI, since one property of AGI is/can be that it will learn most
> of its complexity using algorithms programmed into it.
> >
> >
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



[singularity] Microsoft Launches Singularity

2008-03-24 Thread Ben Goertzel
 http://www.codeplex.com/singularity



[singularity] Brief report on AGI-08

2008-03-08 Thread Ben Goertzel
any AI academics to
come to a mildly out-of-the-mainstream conference on AGI.  Society,
including the society of scientists, is starting to wake up to the
notion that, given modern technology and science, human-level AGI is
no longer a pipe dream but a potential near-term reality.  w00t!  Of
course there is a long way to go in terms of getting this kind of work
taken as seriously as it should be, but at least things seem to be
going in the right direction.

-- Ben




-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [singularity] Definitions

2008-02-22 Thread Ben Goertzel
"Consciousness", like many natural language terms,
is extremely polysemous

A formal definition of "reflective consciousness"
was given by me in a blog post a few days ago

http://goertzel.org/blog/blog.htm

-- Ben G

On Mon, Feb 18, 2008 at 3:37 PM, John K Clark <[EMAIL PROTECTED]> wrote:
> "Richard Loosemore" <[EMAIL PROTECTED]>
>
>
>  > it is exactly the lack of a clear definition
>  > of what "consciousness" is supposed to be
>
>  And if we did have such a definition of consciousness I don't see how it
>  would help in the slightest in making an AI. The definition would be made
>  of words, and every one of those words would have their own definition
>  also made of words, and every one of those words would have their own
>  definition also made of words, and [...]
>
>  You get the idea, round and round we go. The thing that gets language
>  out of this endless loop is examples; we can point to a word and
>  something in the real world and say "this word means that".
>
>  And I have no difficulty explaining what I mean when my mouth makes
>  the sound "consciousness"; producing consciousness is, in my opinion
>  and almost certainly yours, the most important thing I am doing at this
>  instant. I have no definition but I know exactly what those words mean
>  and I'll bet you do too. What more is needed for clear communication?
>
>   John K Clark
>
>
>
>
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: Re : Re : Re : Re : [singularity] Quantum resonance btw DNA strands?

2008-02-05 Thread Ben Goertzel
Hi Bruno,

> effectively, my commentary is very short so excuse me (i drive my pc with my
> eyes
> because i am an a.l.s. patient with tracheo and gastro and i was a speaker, not a
> writer and it's difficult)

Well that is certainly a good reason for your commentaries being short!

> hello ben
> ok, i stop, no problem
> i am thinking mcfadden's theory is possibly right because of
> wave-matter-structure and
> no-particle-matter-structure

Certainly the wave nature of matter is a necessary prerequisite for
McFadden's theory to be correct -- but that's already built into quantum
mechanics, right?

The question is whether proteins really function as macroscopic quantum
systems, in the way that McFadden suggests.  They may or may not, but I
don't think the answer is obvious from the wave nature of matter...

-- Ben



Re: Re : Re : Re : [singularity] Quantum resonance btw DNA strands?

2008-02-05 Thread Ben Goertzel
Bruno,

Posting these links without any comprehensible commentary is not very
useful ... so I think you should stop ...

If you have some discussion about the information being pointed to,
and its relevance to this thread or other possibly
Singularity-relevant issues, that would be welcome...

thanks
Ben Goertzel
List Owner

On Feb 5, 2008 4:36 PM, Bruno Frandemiche <[EMAIL PROTECTED]> wrote:
>
> hello, me too (stop me if you have the truth, i am very open)
> http://www.spaceandmotion.com/wave-structure-matter-theorists.htm
> cordially yours
> bruno
>
>
> - Original Message -
> From: Bruno Frandemiche <[EMAIL PROTECTED]>
> To: singularity@v2.listbox.com
> Sent: Tuesday, 5 February 2008, 21:42:07
> Subject: Re : Re : [singularity] Quantum resonance btw DNA strands?
>
>
>
>
> hell-o
> http://freespace.virgin.net/ch.thompson1/
> inquiry,reflexion,judgement:yes
> heating knowledge:no
> the truth is always subjective, contextual or intersubjective and therefore
> social
> cordially yours
> bruno
>
>
>
> - Original Message -
> From: Bruno Frandemiche <[EMAIL PROTECTED]>
> To: singularity@v2.listbox.com
> Sent: Tuesday, 5 February 2008, 20:52:03
> Subject: Re : [singularity] Quantum resonance btw DNA strands?
>
>
>
> hello (i am a poor little computer-man but honest and i want to know before
> out)
> http://www.glafreniere.com/matter.htm
> ether:yes
> wave:yes
> lorentz:yes
> poincaré:yes
> compton:yes
> cabala:yes
> lafreniere:yes
> http://en.wikipedia.org/wiki/Process_Physics
> http://myprofile.cos.com/mammoth
> http://web.petrsu.ru/~alexk/
> cahill:yes
> kirilyuk:yes
> kaivarainen:yes
> particle:no
> einstein:no (excuse me)(or excuse him)
> fuller:yes
> synergetics:yes
> darwin:little
> symbiosis(wave and evolution):YES YES YES YES
> mcfadden:possible(because wave)
> bohr:little(because epistemic)(excuse me)
> heisenberg:no(excuse me)
> schrodinger:yes(but no particle and ether)
> descartes:yes(i am french but i feel non-dual dual rationalism)
> agi:yes(attention for worker)
> good french polemic
> cordially yours
> bruno
>
> - Original Message -
> From: Ben Goertzel <[EMAIL PROTECTED]>
> To: singularity@v2.listbox.com
> Sent: Tuesday, 5 February 2008, 17:32:47
> Subject: [singularity] Quantum resonance btw DNA strands?
>
> This article
>
> http://www.physorg.com/news120735315.html
>
> made me think of Johnjoe McFadden's theory
> that quantum nonlocality plays a role in protein-folding
>
> http://www.surrey.ac.uk/qe/quantumevolution.htm
>
> H...
>
> ben
>
>
>
> --
> Ben Goertzel, PhD
> CEO, Novamente LLC and Biomind LLC
> Director of Research, SIAI
> [EMAIL PROTECTED]
>
> "If men cease to believe that they will one day become gods then they
> will surely become worms."
> -- Henry Miller
>
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller


[singularity] Quantum resonance btw DNA strands?

2008-02-05 Thread Ben Goertzel
This article

http://www.physorg.com/news120735315.html

made me think of Johnjoe McFadden's theory
that quantum nonlocality plays a role in protein-folding

http://www.surrey.ac.uk/qe/quantumevolution.htm

H...

ben



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [singularity] Multi-Multi-....-Multiverse

2008-02-02 Thread Ben Goertzel
Hi,

Just a contextualizing note: this is the Singularity list, not the AGI list, so
the scope of appropriate discussion is not so restricted.

In my view, whacky models of the universe are at least moderately
relevant to Singularity.  After the Singularity, we are almost sure to discover
that our current model of the universe is in many ways wrong ... it seems
interesting to me to speculate about what a broader, richer, deeper model
might look like

-- Ben Goertzel
(list owner, plus the guy who started this thread ;-)

On Feb 2, 2008 3:54 AM, Samantha Atkins <[EMAIL PROTECTED]> wrote:
> WTF does this have to do with AGI or Singularity?   I hope the AGI
> gets here soon.  We Stupid Monkeys get damn tiresome.
>
> - samantha
>
>
> On Jan 29, 2008, at 7:06 AM, gifting wrote:
>
> >
> > On 29 Jan 2008, at 14:13, Vladimir Nesov wrote:
> >
> >> On Jan 29, 2008 11:49 AM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> >>>> OK, but why can't they all be dumped in a single 'normal'
> >>>> multiverse?
> >>>> If traveling between them is accommodated by 'decisions', there
> >>>> is a
> >>>> finite number of them for any given time, so it shouldn't pose
> >>>> structural problems.
> >>>
> >>> The whacko, speculative SF hypothesis is that lateral movement btw
> >>> Yverses is conducted according to "ordinary" laws of physics,
> >>> whereas
> >>> vertical movement btw Yverses is conducted via extraphysical psychic
> >>> actions ;-)'
> >>>
> >>
> >> What differentiates "psychic" actions from non-psychic so that they
> >> can't be considered "ordinary"? If I can do both, why aren't they
> >> both
> >> equally ordinary to me (and everyone else)?..
> >
> > Is a psychic action telepathy, for example? If I am a schizophrenic
> > and hear voices, is this a psychic experience?
> > What is a psychic action FOR YOU, or in your set of definitions?
> > Do you propose that you are able of psychic actions within a set
> > frame of definitions or do you experience psychic actions and
> > redefine your environment because
> > of this?
> > Or is it all in the mind?
> > Isn't it only ordinary, if experienced repetitively?
> > Gudrun
> >>
> >> -- Vladimir Nesov
> >> mailto:[EMAIL PROTECTED]
> >>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [singularity] Multi-Multi-....-Multiverse

2008-01-29 Thread Ben Goertzel
> OK, but why can't they all be dumped in a single 'normal' multiverse?
> If traveling between them is accommodated by 'decisions', there is a
> finite number of them for any given time, so it shouldn't pose
> structural problems.

The whacko, speculative SF hypothesis is that lateral movement btw
Yverses is conducted according to "ordinary" laws of physics, whereas
vertical movement btw Yverses is conducted via extraphysical psychic
actions ;-)

ben



Re: [singularity] Multi-Multi-....-Multiverse

2008-01-28 Thread Ben Goertzel
Can you define what you mean by "decision" more precisely, please?


> OK, but why can't they all be dumped in a single 'normal' multiverse?
> If traveling between them is accommodated by 'decisions', there is a
> finite number of them for any given time, so it shouldn't pose
> structural problems. Another question is that it might be useful to
> describe them as organized in a tree-like structure, according to
> navigation methods accessible to an agent. If you represent
> uncertainty by being in 'more-parent' multiverse, it expresses usual
> idea with unusual (and probably unnecessarily restricting) notation.
>
> --
> Vladimir Nesov    mailto:[EMAIL PROTECTED]
>
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller



Re: [singularity] Multi-Multi-....-Multiverse

2008-01-27 Thread Ben Goertzel
Nesov wrote:
>  Exactly. It needs stressing that probability is a tool for
>  decision-making and it has no semantics when no decision enters the
>  picture.
...
> What's it good for if it can't be used (= advance knowledge)? For
> other purposes we'd be better off with specially designed random
> number generators. So it's more like tautology that anything useful
> influences decisions.


In another context, I might not be picky about the use of the word
"decision" here ... but this thread started with a discussion of radical
models of the universe involving multi-multiverses and Yverses
and so on.

In this context, casual usage of folk-psychology notions like "decision"
isn't really appropriate, I suggest.

The idea of "decision" seems wrapped up with "free will", which has a pretty
tenuous relationship with physical reality.

If what you mean is that probabilities of events are associated with the
actions that agents take, then of course this is true.

The (extremely) speculative hypothesis I was proposing in my blog post
is that perhaps intelligent agents can take two kinds of actions -- those
that are lateral moves within a given multiverse, and those that pop out
of one multiverse into another (surfing through the Yverse to another
multiverse).

One could then talk about conditional probabilities of agent actions ...
which seems unproblematic ...

-- Ben G



Re: [singularity] Multi-Multi-....-Multiverse

2008-01-27 Thread Ben Goertzel
On Jan 27, 2008 5:26 PM, Vladimir Nesov <[EMAIL PROTECTED]> wrote:
> On Jan 27, 2008 9:29 PM, John K Clark <[EMAIL PROTECTED]> wrote:
> > "Ben Goertzel" <[EMAIL PROTECTED]>
> >
> > > we can think about a multi-multiverse, i.e. a collection of multiverses,
> > > with a certain probability distribution over them.
> >
> > A probability distribution of what?
> >
>
> Exactly. It needs stressing that probability is a tool for
> decision-making and it has no semantics when no decision enters the
> picture.

Probability theory is a branch of mathematics and the concept of "decision"
does not enter into it.
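For reference, here is the standard Kolmogorov formulation -- added as an
illustration, not quoted from the email -- whose axioms mention only sets and a
measure, never decisions.  For a sample space \(\Omega\) with an event algebra
\(\mathcal{F}\), a probability measure \(P : \mathcal{F} \to [0,1]\) satisfies

\[
  P(\Omega) = 1,
  \qquad
  P(A) \ge 0 \ \text{ for all } A \in \mathcal{F},
  \qquad
  P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \ \text{ for pairwise disjoint } A_i .
\]

Agents, bets, and decisions appear nowhere in these axioms; they enter only at
the level of interpretation, which is the point made just below.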

Connecting probability to human life or scientific experiments
does involve an interpretation, but not all interpretations involve the
notion of decision.

De Finetti's interpretation involves decisions, for example (as it has to do
with gambling); but, Cox's interpretation does not...

-- Ben



Re: [singularity] Wrong focus?

2008-01-27 Thread Ben Goertzel
> Craig
> Venter & co creating a new genome -

Just to be clear: they did not create a new genome; rather, they are re-creating
a subset of a previously existing one...

>is an example of the genetic keyboard
> playing on itself, i.e. one genome [Craig Venter] has played with another
> genome and will eventually and inevitably play with itself.

Yes

>Clearly it is in
> the nature of the genome to recreate itself - and not just to execute a
> program.

You lost me here, sorry.  Nothing in Venter's work argues against the
Digital Physics hypothesis, which holds that the whole universe is a giant
computer program of sorts.

> P.P.S. The full new paradigm is something like -  "the self-driving/
> self-conducting machine" -  it is actually the self that is the rest of the
> body and brain, that interactively plays upon, and is played by, the genome,
> (rather than the genome literally playing upon itself). And just as science
> generally has left the self out of its paradigms,

On the contrary, as Thomas Metzinger has masterfully argued in "Being No One"
(and see also the book "The Curse of the Self", whose author's name eludes
me momentarily), the "self" has been well-understood by neuropsychology
as an emergent aspect of the dynamics of certain
complex systems.  Like will and reflective-consciousness, it is an extremely
useful construct that also seems to have some irrational and undesirable
(even from its own point of view) aspects.

> so cog sci has left the
> indispensable human programmer/operator out of its computational paradigms.

It is true that human programmers are indispensable to current software systems,
except for simple self-propagating systems like computer viruses and worms ...
but this is just because software is at an early stage of development; it's not
something intrinsic to the nature of software versus "physical" systems (which,
as Fredkin and others have argued,
may sensibly be conceived of as "just software on a different
operating system")...

-- Ben G



Re: [singularity] Wrong focus?

2008-01-26 Thread Ben Goertzel
Mike,

> I certainly would like to see discussion of how species generally may be
> artificially altered, (including how brains and therefore intelligence may
> be altered) - and I'm disappointed, more particularly, that Natasha and any
> other transhumanists haven't put forward some half-way reasonable
> possibilities here.  But perhaps Samantha & others would regard such matters
> as offlimits?

I know Samantha well enough to know she would NOT consider this kind
of topic "off limits" ;-)  ... nor would hardly anyone on this list...

My attitude (and I suspect Samantha shares the same general attitude)
is that, while genetic engineering and other aspects of biotech are
extremely interesting, AGI has a lot more potential to radically
transform life and mind.

Yes, genetic engineering is a big deal relative to ordinary life
today.  But compared to transhuman AGI, it's small potatoes...

The main difference you have with this attitude seems to be that you
feel AGI is a remote, implausible notion, whereas we feel it is almost
an inevitability in the medium term, and a possibility even in the
short term.


> It's a pity though because I do think that Venter has changed everything
> today - including the paradigms that govern both science and AI.
>

Let's not overblow things -- please note that Venter's team has not yet
synthesized an artificial organism.  Also, they didn't really design
the organism from scratch; they're just regenerating a (slightly
modified) existing design...

Theirs is great work though, and I don't doubt that it will advance
further in the next years...

But, there is nothing particularly surprising about what Venter's team
has done; it's stuff that we have known to be possible for a while ...
he just managed to cut through some of the practical irritations of
that sort of work and make more rapid progress than others...

ben



Re: [singularity] Wrong focus?

2008-01-26 Thread Ben Goertzel
Hi,

> Why does discussion never (unless I've missed something - in which case
> apologies) focus on the more realistic future "threats"/possibilities -
> future artificial species as opposed to future computer simulations?

While I don't agree that AGI is less realistic than artificial
biological species,
I agree the latter are also interesting.

What do you have to say about them, though?  ;-)

One thing that seems clear to me is that engineering artificial pathogens
is an easier problem than engineering artificial antibodies.

The reason biowarfare has failed so far is mostly a lack of good delivery
mechanisms: there are loads of pathogens that will kill people, but no one
has yet figured out how to deliver them effectively ... they die in the sun,
disperse in the wind, drown in the water, whatever

If advanced genetic engineering solves these problems, then what happens?
Are we totally screwed?

Or will we be protected by the same sociopsychological dynamics that have
kept DC from being nuked so far: the intersection of folks with a terrorist
mindset and folks with scientific chops is surprisingly teeny...

Thoughts?

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=90224593-1b6491


[singularity] Multi-Multi-....-Multiverse

2008-01-25 Thread Ben Goertzel
Fans of extremely weird and silly speculative pseudo-science ideas may
appreciate my latest blog post, which posits a new
model of the universe ;-)

http://www.goertzel.org/blog/blog.htm

(A... after a day spent largely on various business-
related hassles, the 30 minutes spent writing that
was really refreshing!!!)

ben



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]

"If men cease to believe that they will one day become gods then they
will surely become worms."
-- Henry Miller

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=90160582-7ccb62


Re: [singularity] The Extropian Creed by Ben

2008-01-21 Thread Ben Goertzel
t female philosopher of transhumanism."  Calling me a wife, however
> complimentary, is degrading when you are writing about a philosophy that I
> hold dear.  Further I was president of Extropy Institute for a number of
> years, and reducing me to "wife" position is belittling.
>
>  "... and his wife Natasha ..."  Once again, the wifey-poo description.
>
>  After writing these comments, I went to my bookshelf and pulled down the
> book I wrote in the 1990s Create/Recreate: The 3rd Millennial Culture about
> Extropy and transhumanist culture.  I skimmed though more than a dozen of
> the collection of essays and was reminded about one core value of extropy --
> that of practical optimism.  I also was reminded that the underlying concern
> expressed in each essay was/is a desire to see transhumanism work to help
> solve the many hardships of humanity – everywhere.
>
>  Thank you Ben.  Best wishes,
>
>  Natasha
>
>
>
>  Natasha Vita-More PhD Candidate,  Planetary Collegium - CAiiA, situated in
> the Faculty of Technology, School of Computing, Communications and
> Electronics, University of Plymouth, UK Transhumanist Arts & Culture
> Thinking About the Future
>
>  If you draw a circle in the sand and study only what's inside the circle,
> then that is a closed-system perspective. If you study what is inside the
> circle and everything outside the circle, then that is an open system
> perspective. - Buckminster Fuller
>
>
>  
>  This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


"We are on the edge of change comparable to the rise of human life on Earth."
-- Vernor Vinge

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=88116765-371fc5

Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Ben Goertzel
On Jan 20, 2008 1:54 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Hi Natasha
>
> After discussions with you and others in 2005, I created a revised
> version of the essay,
> which may not address all your complaints, but hopefully addressed some of 
> them.
>
> http://www.goertzel.org/Chapter12_aug16_05.pdf
>
> However I would be quite interested in further critiques of the 2005
> version, because
> the book in which it was published is going to be reissued in 2008 and
> my coauthor
> and I are planning to rework the chapter anyway.
>
> thanks
> Ben

I would add that my understanding of the transhumanist/futurist
community in general,
and extropianism in particular, has deepened since 2005 due to a
greater frequency
and intensity of social interaction with relevant individuals; so
there are probably statements
in even the 2005 version that I wouldn't fully agree with now ...

... though, the spirit of the article of course still represents my
perspective...

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=87922432-9d71fc


Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Ben Goertzel
Hi Natasha

After discussions with you and others in 2005, I created a revised
version of the essay,
which may not address all your complaints, but hopefully addressed some of them.

http://www.goertzel.org/Chapter12_aug16_05.pdf

However I would be quite interested in further critiques of the 2005
version, because
the book in which it was published is going to be reissued in 2008 and
my coauthor
and I are planning to rework the chapter anyway.

thanks
Ben

On Jan 20, 2008 1:51 PM, Natasha Vita-More <[EMAIL PROTECTED]> wrote:
>
>  At 06:06 AM 1/20/2008, Mike Tintner wrote:
>
>
> Sorry if you've all read this:
>
>  http://www.goertzel.org/benzine/extropians.htm
>
>  But I found it a v. well written sympathetic critique of extropianism &
> highly recommend it. What do people think of its call for a "humanist
> transhumanism"?
>  I found Ben's essay to contain a certain bias which detracts from its
> substance.  If Ben would like to debate key assumptions his essay claims, I am
> available. Otherwise, if anyone is interested in key points which I believe
> are narrowly focused and/or misleading, I'll post them.
>
>  Natasha
>
>  Natasha Vita-More PhD Candidate,  Planetary Collegium - CAiiA, situated in
> the Faculty of Technology, School of Computing, Communications and
> Electronics, University of Plymouth, UK Transhumanist Arts & Culture
> Thinking About the Future
>
>  If you draw a circle in the sand and study only what's inside the circle,
> then that is a closed-system perspective. If you study what is inside the
> circle and everything outside the circle, then that is an open system
> perspective. - Buckminster Fuller
>
>
>  
>  This list is sponsored by AGIRI: http://www.agiri.org/email
>
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


"We are on the edge of change comparable to the rise of human life on Earth."
-- Vernor Vinge

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=87922044-bb741d


Re: [singularity] The Extropian Creed by Ben

2008-01-20 Thread Ben Goertzel
Hi,

FYI, that essay was an article I wrote for the German newspaper
Frankfurter Allgemeine Zeitung in 2001 ... it was translated to
German and published...

An elaborated, somewhat modified version was included
as a chapter in the 2005 book The Path to Posthumanity (P2P) by
myself and Stephan Vladimir Bugaj.   I have uploaded
the P2P version of the chapter here:

http://www.goertzel.org/Chapter12_aug16_05.pdf

BTW that book will in 2008 be updated and re-issued with
a different title.

Ben

On Jan 20, 2008 7:06 AM, Mike Tintner <[EMAIL PROTECTED]> wrote:
> Sorry if you've all read this:
>
> http://www.goertzel.org/benzine/extropians.htm
>
> But I found it a v. well written sympathetic critique of extropianism &
> highly recommend it. What do people think of its call for a "humanist
> transhumanism"?
>
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
>



-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


"We are on the edge of change comparable to the rise of human life on Earth."
-- Vernor Vinge

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=87898088-6dcd8b


[singularity] Japanese gods pray for a positive Singularity

2008-01-19 Thread Ben Goertzel
A frivolous blog post some may find amusing ;-)

http://www.goertzel.org/blog/blog.htm

ben


-- 
Ben Goertzel, PhD
CEO, Novamente LLC and Biomind LLC
Director of Research, SIAI
[EMAIL PROTECTED]


"We are on the edge of change comparable to the rise of human life on Earth."
-- Vernor Vinge

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=87836239-49223c


[singularity] Candide and the Singularity

2007-03-26 Thread Ben Goertzel


My son Zeb read Candide by Voltaire, and was taken by the idea that this 
is the best of all possible worlds.


He has applied this to AGI and the Singularity, in the following passage 
from an SF story he wrote last week:


"
Out of the factory, designed with the sole purpose of generating such 
things, came the first of this type of 'MAN'.


You see, it was a 'NOT' sort of 'MAN'. ...

The factory released a robot, not a 'Man child', not a 'son of a gun'.

Its creators spent decades of work designing it, programming it, and 
building it, so that it could simulate human intelligence, only with one 
great difference: It wasn't idiotic.


The robot was named Quedice Lagente. Quedice's owner, Pablenjamin 
Gojurtse, loved the show "Que dice La Gente". He named his greatest 
invention after his favorite show. Pablenjamin had died two years ago, 
in his homeland Mexicslovakistan.


Quedice Lagente, being designed to learn infinitely fast, learned all 
immediately with its perfect intuition, then decided to hibernate.


Before he hibernated, the people tried to convince Quedice to make the 
world a better place. They tried reprogramming him, but whenever Quedice 
was stupid enough to want the world better, he wasn't capable of it. The 
people eventually restored Quedice to his original state, and left him 
sitting.


Quedice always had said this: "I cannot make this world a better place. 
All is for the best in this most perfect of all possible worlds. And by 
world I also mean reality, or any existence, dimension, plane, or even 
thing. I cannot make the best any better. You ask for what is impossible."

"

;-)

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] The establishment line on AGI

2007-03-19 Thread Ben Goertzel

Shane Legg wrote:
On 3/19/07, *Ben Goertzel* <[EMAIL PROTECTED] 
<mailto:[EMAIL PROTECTED]>> wrote:


conservative incremental steps, the current scientific community
is highly culturally biased against anyone who wants to make
a large leap.  Science has drifted into a cultural configuration that
is obsessed with making incremental progress with a very small
increment size.


I don't think it's because science is against large leaps; the large leaps
are what everybody in science loves.


The science establishment loves large leaps in hindsight ... yet in
foresight, it follows a funding pattern (and publication-vetting) that is
oriented almost entirely toward small incremental steps, instead.

There are exceptions of course, e.g. Human Genome Project


They (or perhaps I should say we) want evidence, generally in the form of a
proof or experimental results that can be repeated.  Furthermore, the larger
the leap the more impressive the evidence has to be.  Thus, if somebody says
that they can build a thinking machine with general intelligence equal to that
of a human but don't have amazingly strong evidence, nobody much will pay
attention.

On the other hand if someone can demonstrate a working system with human
level AGI, they will have no trouble in getting scientific attention 
and respect.



I have a couple responses to this:

1) As I said, large leaps are admired and celebrated in hindsight; but the
pattern of funding and publication-vetting is not at all designed so as to
encourage them prospectively.

2) The need for evidence and substantiation is interpreted in a highly
subjective way based on prevailing theoretical paradigms.  For instance, the
Human Genome Project was funded with no hard evidence that it would be useful.
Instead, the leading scientists sorta fooled the politicians into thinking
tremendous applications would follow as soon as the sequencing was done.  And
now, not too surprisingly, it is taking loads more funding and time to get much
real use out of it (because, as many foresaw, just knowing the gene sequence
doesn't tell you that much... it's only a start...).  And of course, string
theory is well funded within the physics establishment now, in spite of zero
empirical evidence and fairly weak theoretical evidence.  Other physics
approaches with equal or greater evidence are not favored at the moment (ask
Juergen ;-)


But the early days of string theory illustrate Point 1 above.
It's often been noted that string theory was originally
developed mainly by men in their 40's.  This is because, given the culture
of the physics establishment and the difficulty of the physics job market,
it was too risky for young pre-tenure faculty to spend their time working
on something so "eccentric"  Now however string theory has become
mainstream (the initial large leap was already made, though in this case
it did not lead to any empirical verification, it was purely a conceptual/
mathematical leap), so that young profs can get away with working on it
without killing their careers.

Similarly, right now, AGI is a somewhat risky career move for AI profs
at the pre-tenure stage.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] The establishment line on AGI

2007-03-19 Thread Ben Goertzel


I don't like to insult US academia too severely, because I feel it's been
one of the most productive intellectual establishments in the history
of the human race.

However, in my 8 years as a professor I did find it frequently
frustrating, and one of the many reasons was the narrow-mindedness
of most academic researchers -- which the experience you cite
exemplifies.

Here is a more extreme example.  I once had a smart young
Chinese colleague who had relocated to the US from mainland
China and had done his PhD thesis on neural networks.  He
had been in the US for 4 years, and done his PhD at a US
university.  His research was fairly interesting, showing how to
more efficiently make a certain class of neural nets learn to
approximate a certain class of nonlinear functions. 


And, he was quite surprised and fascinated when I explained to him that
the word "neural" referred to neurons in the brain.  No one
had ever explained to him what "neural" meant -- he thought it
was just a label for the type of mathematical network his
advisor had told him to study. 


(This was in the early 90's, when neural networks were not
as widely famous yet.)

As undergraduates, most folks who study AI are actually interested
in "AI in the grand sense", along with more narrow-focused, short-term,
practical stuff.  But as part of the process of being taught to become
professional researchers, during grad school and the pre-tenure years
of professordom, one learns to distinguish real science and engineering
from fantastical fluff.

Similarly, even if you start out your bio career interested in life
extension and immortality, you soon learn that this is culturally
unacceptable and if you want to be a real scientist, you need to take
a different approach and, say, spend the next decade of your career
trying to understand everything possible about one particular gene
or protein.

Obviously, science evolved in this way in order to protect itself
against the natural human tendency to self-delusion and collective-
delusion.  But it has swung too far in the conservative, paranoid,
innovation-unfriendly direction!  Even though much historical
progress in science was made via large leaps rather than tiny,
conservative incremental steps, the current scientific community
is highly culturally biased against anyone who wants to make
a large leap.  Science has drifted into a cultural configuration that
is obsessed with making incremental progress with a very small
increment size.

Which of course creates many exciting opportunities for individuals
who are willing to put up with some cultural marginalization and
take larger risks based on intuitive insights (and also have the
perseverance to do the long, tedious legwork to validate their
insights, making their large leaps real rather than just hypothetical
and potential).

-- Ben

Joshua Fox wrote:

Singularitarians often accuse the AI establishment of a certain
close-mindedness. I always suspected that this was the usual biased
accusation of rebels against the old guard.

Although I have nothing new to add, I'd like to give some confirmatory
evidence on this from one who is not involved in AGI research.

When I recently met a PhD in AI from one of the top three programs in
the world, I expected some wonderful nuggets of knowledge on the
future of AGI, if only as speculations, but I just heard the
establishment line as described by Kurzweil et al.: AI will continue
to solve narrow problems, always working in tandem with humans who
will handle important parts of the tasks. There is no need, and no
future, for human-level AI. (I responded, by the way, that there is
obviously a demand for human-level intelligence, given the salaries
that we knowledge-workers are paid.)

I was quite surprised, even though I had been prepped for exactly this.

Joshua

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Philanthropy & Singularity

2007-03-18 Thread Ben Goertzel





Why has the singularity and AGI not triggered such an interest? 
Thiel's donations to SIAI seem like the exception which highlights 
the rule.


Salesmanship?  Believability?  Fear of Consequences including 
backlash?  I would suspect it is the right people not being approached 
in the right way mainly.  Who are the folks here who are fund raising 
for different relevant AGI and MNT projects?  What are you 
experiencing?  If there aren't many such folks then that is part of 
the answer.




The issue IMO is that philanthropists do not really believe these are 
near-term issues.


If they have read Kurzweil, they generally will view him as highly 
overoptimistic regarding time scale.


They would rather use their $$ for things that they believe will cause 
more clear, immediately perceivable benefit.


-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] The Shtinkularity

2007-03-11 Thread Ben Goertzel



I like Dicksley Chainsworth, too.  It's always important for your heroes to 
have a worthy adversary.

PJ

  


What struck me about that character was the uncanny resemblance between 
Dick Cheney (whose head, obviously, underlies Dicksley Chainsworth) and 
Steve Martin ... see the resemblance?


There's a cosmic truth lurking there, I just know it...


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


[singularity] The Shtinkularity

2007-03-11 Thread Ben Goertzel


If you have 2.5 minutes or so to spare, my 13-year-old son Zebulon has 
made another Singularity-focused

mini-movie:

http://www.zebradillo.com/AnimPages/The%20Shtinkularity.html

This one is not as deep as RoboTurtle II, his 14-minute 
Singularity-meets-Elvis epic from a year ago or so ...
but, his animation technique has improved over time, and this one is 
more visually hilarious (the visually

amusing part comes about halfway through...)

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel




AIXI is valueless.

Well, I agree that AIXI provides zero useful practical guidance to those 
of us

working on practical AGI systems.

However, as I clarified in a prior longer post, saying that mathematics 
is valueless
is always a risky proposition.  Statements of this nature have been 
proved wrong
plenty of times in the past, in spite of their apparent sensibleness at 
the time of

utterance...

But I think we have all made our views on this topic rather clear, at 
this point ;-)


Time to "agree to disagree" and move on...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel



Alas, that was not quite the question at issue...

In the proof of AIXI's ability to solve the IQ test, is AIXI *allowed* 
to go so far as to simulate most of the functionality of a human brain 
in order to acquire its ability?


I am not asking you to make a judgment call on whether or not it would 
do so in practice, I am asking whether the structure of the proof 
allows that possibility to occur, should the contingencies of the 
world oblige it to do so.  (I would also be tempted to question your 
judgment call, here, but I don't want to go that route :-)).


If the proof allows even the possibility that AIXI will do this, then 
AIXI has an homunculus stashed away deep inside it (or at least, it 
has one on call and ready to go when needed).


I only need the possibility that it will do this, and my conclusion 
holds.


So:  clear question.  Does the proof implicitly allow it?

Yeah, if AIXI is given initial knowledge or experiential feedback that 
is in principle adequate for internal reconstruction of simulated humans 
... then its learning algorithm may potentially construct simulated humans.


However, it is not at all clear that, in order to do well on an IQ test, 
AIXI would need to be given enough background data or experiential 
feedback to **enable** accurate simulation of humans


It's not right to say "AIXI has a homunculus on call and ready to go 
when needed." 

Rather, it's right to say "AIXI has the capability to synthesize an 
homunculus if it is given adequate data to infer the properties of one, 
and judges this the best way to approach the problem at hand."


-- Ben G


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-09 Thread Ben Goertzel


I agree that, to compare humans versus AIXI on an IQ test in a fully 
fair way (that tests only intelligence rather than prior knowledge) 
would be hard, because there is no easy way to supply AIXI with the same 
initial knowledge state that the human has. 

Regarding whether AIXI, in order to solve an IQ test, would simulate the 
whole physical universe internally in order to simulate humans and thus 
figure out what a human would say for each question -- I really doubt 
it, actually.  I am very close to certain that simulating a human is NOT 
the simplest possible way to create a software program scoring 100% on 
human-created IQ tests.  So, the Occam prior embodied in AIXI would 
almost surely not cause it to take the strategy you suggest. 


-- Ben

Richard Loosemore wrote:

Ben Goertzel wrote:




Sorry, but I simply do not accept that you can make "do really well 
on a long series of IQ tests" into a computable function without 
getting tangled up in an implicit homuncular trap (i.e. accidentally 
assuming some "real" intelligence in the computable function).


Let me put it this way:  would AIXI, in building an implementation 
of this function, have to make use of a universe (or universe 
simulation) that *implicitly* included intelligences that were 
capable of creating the IQ tests?


So, if there were a question like this in the IQ tests:

"Anna Nicole is to Monica Lewinsky as Madonna is to .."


Richard, perhaps your point is that IQ tests assume certain implicit 
background knowledge.  I stated in my email that AIXI would equal any 
other intelligence starting with the same initial knowledge set  
So, your point is that IQ tests assume an initial knowledge set that 
is part and parcel of human culture.



No, that was not my point at all.

My point was much more subtle than that.

You claim that "AIXI would equal any other intelligence starting with 
the same initial knowledge set".  I am focussing on the "initial 
knowledge set."


So let's compare me, as the other intelligence, with AIXI.  What 
exactly is the "same initial knowledge set" that we are talking about 
here? Just the words I have heard and read in my lifetime?  The words 
that I have heard, read AND spoken in my lifetime?  The sum total of 
my sensory experiences, down at the neuron-firing level?  The sum 
total of my sensory experiences AND my actions, down at the neuron 
firing level? All of the above, but also including the sum total of 
all my internal mental machinery, so as to relate the other fluxes of 
data in a coherent way?  All of the above, but including all the 
cultural information that is stored out there in other minds, in my 
society?  All of the above, but including simulations of all the related


Where, exactly, does AIXI draw the line when it tries to emulate my 
performance on the test?


(I picked that particular example of an IQ test question in order to 
highlight the way that some tests involve a huge amount of information 
that requires understanding other minds .. my goal being to force AIXI 
into having to go a long way to get its information).


And if it does not draw a clear line around what "same initial 
knowledge set" means, but the process is open ended, what is to stop 
the AIXI theorems from implictly assuming that AIXI, if it needs to, 
can simulate my brain and the brains of all the other humans, in its 
attempt to do the optimisation?


What I am asking (non-rhetorically) is a question about how far AIXI 
goes along that path.  Do you know AIXI well enough to say?  My 
understanding (poor though it is) is that it appears to allow itself 
the latitude to go that far if the optimization requires it.


If it *does* allow itself that option, it would be parasitic on human 
intelligence, because it would effectively be simulating one in order 
to deconstruct it and use its knowledge to answer the questions.


Can you say, definitively, that AIXI draws a clear line around the 
meaning of "same initial knowledge set," and does not allow itself the 
option of implicitly simulating entire human minds as part of its 
infinite computation?


Now, I do have a second line of argument in readiness, in case you can 
confirm that it really is strictly limited, but I don't think I need 
to use it.  (In a nutshell, I would go on to say that if it does draw 
such a line, then I dispute that it really can be proved to perform as 
well as I do, because it redefines what "I" am trying to do in such a 
way as to weaken my performance, and then proves that it can perform 
better than *that*).






Richard Loosemore


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Ben Goertzel




Sorry, but I simply do not accept that you can make "do really well on 
a long series of IQ tests" into a computable function without getting 
tangled up in an implicit homuncular trap (i.e. accidentally assuming 
some "real" intelligence in the computable function).


Let me put it this way:  would AIXI, in building an implementation of 
this function, have to make use of a universe (or universe simulation) 
that *implicitly* included intelligences that were capable of creating 
the IQ tests?


So, if there were a question like this in the IQ tests:

"Anna Nicole is to Monica Lewinsky as Madonna is to .."


Richard, perhaps your point is that IQ tests assume certain implicit 
background knowledge.  I stated in my email that AIXI would equal any 
other intelligence starting with the same initial knowledge set  So, 
your point is that IQ tests assume an initial knowledge set that is part 
and parcel of human culture.


One approach would be to use IQ tests that are purely formal and don't 
require specific cultural knowledge.  Test that consist of logic 
puzzles, visual pattern recognition puzzles, etc.  There are plenty of 
IQ test questions like this.


Another approach would be to utilize a form of IQ test that assumes the 
test-taker has access to a particular, standard subset of Wikipedia.  
The questions would then be engineered not to require specific factual 
knowledge about the world, aside from what's available in this subset of 
Wikipedia. 

This way, the initial knowledge base needed to answer the IQ questions 
could be specified as a certain series of bits, which could be fed to 
AIXI as part of its initial knowledge state.
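
To make "a certain series of bits" concrete, here is a minimal toy sketch in
Python (my own illustrative framing, not anything from Hutter's formalism; the
file name is made up): the agreed background-knowledge file just gets
serialized into a bit sequence and prepended to the agent's input stream.

# Toy sketch: background knowledge as a bit-prefix on the observation stream.
# "wikipedia_subset.txt" is a hypothetical stand-in for the agreed knowledge base.

def file_to_bits(path):
    with open(path, "rb") as f:
        data = f.read()
    return [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]

def observation_stream(knowledge_bits, test_bits):
    # The agent reads the knowledge prefix first, then the IQ-test items;
    # the "initial knowledge state" is just more input, nothing special.
    yield from knowledge_bits
    yield from test_bits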


So, I think my point still holds: "Do really well on a long series of IQ 
tests, given an appropriate file of background knowledge" is a goal that 
can be fit into the mathematical framework of AIXI.  And the theorems 
show that AIXI will kick ass at achieving this goal.


Your only way to argue against this, consistently, would be to argue 
that there is no way to create an appropriate file of background 
knowledge to feed AIXI, because passing an IQ test relies on a body of 
knowledge that is intrinsically unformalizable, or at least incredibly 
difficult to formalize.


I really don't believe this is true, but refuting it would require 
detailed analysis of a long series of IQ questions, which sounds like a 
boring way to spend the rest of the afternoon...


-- Ben






Would AIXI have to build a solution by implicitly deconstructing (if 
you see what I mean) the entire real universe, including its real 
human societies and real (intelligent) human beings and real social 
relationships?


If AIXI does a post-hoc deconstruction of some "real" intelligent 
systems as part of building its own "intelligent" function, it is 
parasitic on that intelligence.


You can confirm that it is not parasitic in that way?



Richard Loosemore.




OTOH, Pei Wang has proposed that intelligence should be explicitly 
defined as something roughly like "achieving complex goals given 
limited resources" [not his exact wording].  In this case AIXI would 
not be considered intelligent
But my view is that the natural language concept of intelligence 
actually is just about functionality rather than mechanisms.  We say 
someone is smart because of the problems they can solve, not because 
of our understanding of how they go about solving the problems...


Anyway, the NL notion of "intelligence" is not necessarily any more 
intrinsically meaningful than the NL concepts of "cup" and 
"bowl"  It combines a bunch of deep ideas with some culturally 
relative and anthropomorphic stuff that is not so important...


The notion of intelligence embodied in AIXI is an interesting one, 
which things can be proved about  I don't claim that it exhausts 
the interesting insights contained in the ambiguous and diverse NL 
concept of intelligence...


-- Ben G



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Ben Goertzel




The point I just made cannot be pursued very far, however, because any 
further discussion of it *requires* that someone on the AIXI side 
become more specific about why they believe their definition of 
"intelligent behavior" should be considered coextensive with the 
common sense use of that term.  No such justification is forthcoming, 
so withot it all I can do is rest my case by asking "Why should I 
believe your (re)definition of intelligence?"


Well, actually, the theorems about AIXI work if we define intelligence as

"maximize criterion F"

where F is **any** computable function.  At least that's my reading of 
the theorems...


So, no matter what definition you specify for "intelligence", so long as 
it involves maximizing some computable function, the AIXI theorems will 
apply, and the conclusion will be that AIXI is maximally intelligent 
according to the definition.


The question, then, is whether maximization of some computable function 
is a reasonable definition of "intelligence."


It seems clear that any IQ test ever given to humans **does** fit nicely 
into this framework.  For instance, "do really well on a long series of 
IQ tests" would be a definition of intelligence fitting into the 
assumptions of the AIXI theorems.  AIXI, given the series of IQ tests, 
would gradually learn how to do well on the IQ tests --- consuming a lot 
of resources in the process, but doing at least as well as any other 
system would, assuming equivalent initial states of knowledge.
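
Just to show how easily this fits the "maximize a computable criterion F"
mold, here is a toy sketch in Python (entirely my own illustration -- the item
format and the dummy agent are made up): F is nothing but a scoring function
over the interaction history.

# Toy sketch: "do well on a series of IQ tests" as a computable criterion F.
# Any agent -- human, AIXI, whatever -- can be plugged in as answer_fn.

def F(history, answer_key):
    # history: list of (question, answer) pairs produced by the agent
    correct = sum(1 for q, a in history if answer_key.get(q) == a)
    return correct / max(1, len(answer_key))

def run_test_series(answer_fn, questions, answer_key):
    history = [(q, answer_fn(q)) for q in questions]
    return F(history, answer_key)

# Hypothetical culture-free items:
answer_key = {"1,2,4,8,16,?": "32", "A,C,E,G,?": "I"}
score = run_test_series(lambda q: "32" if "16" in q else "I",
                        list(answer_key), answer_key)
print(score)  # 1.0 for this trivial hand-coded "agent"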


OTOH, Pei Wang has proposed that intelligence should be explicitly 
defined as something roughly like "achieving complex goals given limited 
resources" [not his exact wording].  In this case AIXI would not be 
considered intelligent 

But my view is that the natural language concept of intelligence 
actually is just about functionality rather than mechanisms.  We say 
someone is smart because of the problems they can solve, not because of 
our understanding of how they go about solving the problems...


Anyway, the NL notion of "intelligence" is not necessarily any more 
intrinsically meaningful than the NL concepts of "cup" and "bowl"  
It combines a bunch of deep ideas with some culturally relative and 
anthropomorphic stuff that is not so important...


The notion of intelligence embodied in AIXI is an interesting one, which 
things can be proved about  I don't claim that it exhausts the 
interesting insights contained in the ambiguous and diverse NL concept 
of intelligence...


-- Ben G



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Ben Goertzel

Shane Legg wrote:


On 3/8/07, *Ben Goertzel* <[EMAIL PROTECTED]> wrote:
 


using AIXI-type ideas.  The problem is that there is nothing,
conceptually,  in the whole army of ideas surrounding AIXI,
that tells you about how to deal with the challenges of finite
computational resources.  (And my view is that dealing with
these challenges is actually the crux of the AGI problem.)


Yes, indeed I have asked Marcus Hutter about this and his feeling
was that real AGI may well turn out to be large and complex, even
if the theory behind it isn't too bad.  For example, conceptually a
database is pretty simple, but actually making an efficient reliable
database that can scale to huge data volumes is very complex and
takes many many years of work to get right.

And, I think that the theory underlying a real AGI is going to be more
complex than the theory of AIXI **or** the theory underlying
relational databases

I spent some time late last year articulating a set of 17 mathematical/
theoretical propositions, the proof of which would go a long way toward
mathematically justifying the Novamente AGI design.  (I didn't make
the propositions fully rigorous, but got halfway there and made them
semi-rigorous, then got distracted by more practical stuff.)

The propositions came out not having much to do with AIXI type
theory, and more to do with (for example) the actual statistical
properties of realistic-scale program spaces induced by the biases
of particular search algorithms (using "search" very generally)
interacting with particular sorts of environments.
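
As a cartoon of what I mean here -- just a toy of my own in Python, not one of
the actual propositions -- compare the "programs" (here, plain bit strings)
found by two differently biased searches over the same trivial environment:

import random
random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]                 # a stand-in "environment"
fitness = lambda prog: sum(p == t for p, t in zip(prog, TARGET))

def random_search(budget):
    # Bias 1: uniform sampling over the program space
    return max((tuple(random.randint(0, 1) for _ in TARGET)
                for _ in range(budget)), key=fitness)

def hill_climb(budget):
    # Bias 2: greedy single-bit mutation from a random seed program
    prog = [random.randint(0, 1) for _ in TARGET]
    for _ in range(budget):
        cand = list(prog)
        i = random.randrange(len(cand))
        cand[i] ^= 1
        if fitness(cand) >= fitness(prog):
            prog = cand
    return tuple(prog)

# Same budget, same environment, but the two biases induce different
# distributions over the programs actually found.
print(fitness(random_search(20)), fitness(hill_climb(20)))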

Rigorously formulating and proving these propositions would be a lot of fun
for me, but there are more pressing tasks in the Novamente project
at the moment...

But my point is: I don't think that "AGI theory" is intrinsically
hopeless ... but I think that the sort of theory you need to grapple with
realistic-resources AGI is a bit different from the sort you need to grapple
with near-infinite-resources AGI.  (Though there are certainly 
commonalities,

e.g. the language of probabilities, theoretical computing machines and
program spaces...)

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Ben Goertzel

Shane Legg wrote:

:-)

No offence taken, I was just curious to know what your position was.

I can certainly understand people with a practical interest not having
time for things like AIXI.  Indeed as I've said before, my PhD is in AIXI
and related stuff, and yet my own AGI project is based on other things.
So even I am skeptical about whether it will lead to practical methods.
That said, I can see that AIXI does have some fairly theoretical uses,
perhaps Friendliness will turn out to be one of them?

Well, I could see AIXI type methods being used to provide an
"impossibility proof" for some types of Friendly AI.

I.e., one might possibly be able to show something like "Even with
near-infinite resources, achieving FAI according to definition
Friendliness_17 is not possible."

I am more skeptical about a positive proof of the achievability of
Friendly AI using realistic computational resources, being do-able
using AIXI-type ideas.  The problem is that there is nothing,
conceptually,  in the whole army of ideas surrounding AIXI,
that tells you about how to deal with the challenges of finite
computational resources.  (And my view is that dealing with
these challenges is actually the crux of the AGI problem.)

-- Ben G


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


[singularity] Apology to the list, and a more serious commentary on AIXI

2007-03-08 Thread Ben Goertzel
 it)
-- Occam's Razor (preferring the simpler explanation)

These two aspects are key to AIXI and I also think they
are central to achieving AGI using realistic resources.

However, in a realistic-resources AGI, these aspects
have to be tangled up with a lot of other aspects to be useful.
And they may be achieved indirectly due to other
principles.  In Novamente they are in fact included
explicitly.  But in the human brain, I suspect that

-- approximate probabilistic reasoning is a consequence
of Hebbian learning (see the toy sketch below)

-- Occam's Razor is a consequence of energy minimization,
a major design principle in the brain
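
To give the flavor of the Hebbian point, here is a toy illustration in Python
(mine, and obviously not a brain model): a Hebbian-style co-occurrence counter
already behaves like a crude conditional-probability estimator.

import random
random.seed(1)

# "Cells that fire together wire together": strengthen on co-activation.
pre_count, co_count = 0, 0
for _ in range(10000):
    pre = random.random() < 0.5                  # presynaptic neuron fires
    post = pre and (random.random() < 0.8)       # fires 80% of the time given pre
    if pre:
        pre_count += 1
        if post:
            co_count += 1

print(co_count / pre_count)   # ~0.8, i.e. an estimate of P(post | pre)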

Still, in spite of these high-level commonalities between AIXI
and realistic-resources AGI systems, and in spite of the use of
AIXI as a demonstration that "AGI is all about coping with
resource restrictions", as I said above I really don't think that
"scaling AIXI down" is the best way to think about AGI
design.

I note that Shane Legg, who just spent a few years working on
AIXI related stuff for his PhD thesis, is now working on his
own AGI system --- but not by "scaling AIXI down", rather
by (according to my understanding) taking some ideas from
neuroscience and some original algorithmic/structural ideas, and
some general inspirations from his AIXI work.   My feeling
is that this sort of integrative and pragmatic approach is more
likely to succeed in terms of actually getting working AGI
software to exist...

-- Ben








Sorry Shane, I guess I got carried away with my sense of humor ...

No, I don't really think AIXI is useless in a mathematical, theoretical 
sense. 


I do think it's a dead-end in terms of providing guidance to
pragmatic AGI design, but that's another
story

I will send a clarifying email to the list, I certainly had no serious
intention to offend people...

Ben


Ben Goertzel wrote:


Sorry Shane, I guess I got carried away with my sense of humor ...

No, I don't really think AIXI is useless in a mathematical, 
theoretical sense.

I do think it's a dead-end in terms of providing guidance to
pragmatic AGI design, but that's another
story

I will send a clarifying email to the list, I certainly had no serious
intention to offend people...

Ben


Shane Legg wrote:

Ben,

So you really think AIXI is totally "useless"?  I haven't been reading
Richard's comments, indeed I gave up reading his comments some
time before he got himself banned from sl4, however it seems that you
in principle support what he's saying.  I just checked his posts and
can see why they don't make sense, however I know very well that
shouting rather than reasoning on the internet is a waste of time.

My question to you then is a bit different.  If you believe that AIXI is
totally a waste of time, why is it that you recently published a book
with a chapter on AIXI in it, and now think that AIXI and related study
should be a significant part of what the SIAI does in the future?

Shane


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983 





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Ben Goertzel


Oops, guess that email WAS sent to the list, though I didn't realize it.

But no harm done!

Ben Goertzel wrote:


Sorry Shane, I guess I got carried away with my sense of humor ...

No, I don't really think AIXI is useless in a mathematical, 
theoretical sense.

I do think it's a dead-end in terms of providing guidance to
pragmatic AGI design, but that's another
story

I will send a clarifying email to the list, I certainly had no serious
intention to offend people...

Ben


Shane Legg wrote:

Ben,

So you really think AIXI is totally "useless"?  I haven't been reading
Richard's comments, indeed I gave up reading his comments some
time before he got himself banned from sl4, however it seems that you
in principle support what he's saying.  I just checked his posts and
can see why they don't make sense, however I know very well that
shouting rather than reasoning on the internet is a waste of time.

My question to you then is a bit different.  If you believe that AIXI is
totally a waste of time, why is it that you recently published a book
with a chapter on AIXI in it, and now think that AIXI and related study
should be a significant part of what the SIAI does in the future?

Shane


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983 





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-08 Thread Ben Goertzel


Sorry Shane, I guess I got carried away with my sense of humor ...

No, I don't really think AIXI is useless in a mathematical, theoretical 
sense. 


I do think it's a dead-end in terms of providing guidance to
pragmatic AGI design, but that's another
story

I will send a clarifying email to the list, I certainly had no serious
intention to offend people...

Ben


Shane Legg wrote:

Ben,

So you really think AIXI is totally "useless"?  I haven't been reading
Richard's comments, indeed I gave up reading his comments some
time before he got himself banned from sl4, however it seems that you
in principle support what he's saying.  I just checked his posts and
can see why they don't make sense, however I know very well that
shouting rather than reasoning on the internet is a waste of time.

My question to you then is a bit different.  If you believe that AIXI is
totally a waste of time, why is it that you recently published a book
with a chapter on AIXI in it, and now think that AIXI and related study
should be a significant part of what the SIAI does in the future?

Shane


This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe (second thought!)

2007-03-07 Thread Ben Goertzel

Richard Loosemore wrote:

Eugen Leitl wrote:

On Wed, Mar 07, 2007 at 01:24:05PM -0500, Richard Loosemore wrote:


For each literary work n in N, use G to generate a universe u, and
within that universe, inject a copy of the literary work at a random
point in the spacetime of u. Measure the reaction, in terms of critical
acclaim generated by the work in any species who happen to be hanging 


I realize that this is sarcasm, but detecting the mere presence
of a species (nevermind their critical acclaim) from a trajectory,
then rather give me the infinite simians, and I will personally look
for Shakespeare sonnets in them.


No, no wait!  I change my mind about agreeing with you. ;-)

We don't have to wait for a species and then detect it, nor do we have 
to translate their language!


We just apply each n to *all* of the infinite universes generated by 
G.  In amongst those universes will be some in which English (assuming 
that n is defined over English words) just happens to arise by chance.


Then, all we do is compute the number of times that the name of the 
literary work appears in the same sentence with "literary 
masterpiece," at any time in the history of all the universes.


I can't see any other problems:  aside from the infinities, it looks 
like a perfectly regular algorithm to me.  As good as AIXI any day.


Richard, I will suggest one modification to your excellent algorithm.

What we really want to compute is the weighted sum over all universes U,

SUM_U [ 2^-K(U)  *  M(U, N) ]

where

M(U, N) is the "masterpiece index" of literary work N in universe U, 
properly normalized


K(U) is the length of the shortest program for generating the universe U

This weights being a masterpiece in universe U higher, if the universe U 
has a simpler description.


This way we get a convergent measure of masterpiece-ness across all 
universes.  (If we set up the math assumptions right, blah blah blah.)
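
And, purely for fun, here is a toy finite version of the weighted sum in
Python -- with program length standing in for K and a completely made-up
masterpiece_index standing in for M -- just to show the Occam weighting keeps
the sum bounded:

from itertools import product

def masterpiece_index(universe_bits, work="Hamlet"):
    # Hypothetical stand-in for M(U, N), normalized into [0, 1).
    return (int(universe_bits, 2) * 2654435761 % 1000) / 1000.0

total = 0.0
for length in range(1, 16):
    for universe in product("01", repeat=length):
        # Weight 2^-(2*length): a crude prefix-free-style coding, so the sum
        # stays bounded even though there are 2^length "universes" per length.
        total += 2 ** -(2 * length) * masterpiece_index("".join(universe))

print(total)   # the "masterpiece-ness" of Hamlet across all toy universes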


With this change or something else similar, I think you have indeed 
outlined a perfect algorithm for generating literary masterpieces.


Congratulations!

I think this merits a Nobel Prize for Literature   I'll send the 
Nobel Committee an email nominating you right away


As you say, scaling down the algorithm to yield a practical automated 
literary genius may require a bit of work, but hey, the problem is 
Solved In Principle, and that's the most important thing!!  ;-)


-- Ben G





-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Uselessness of AIXI

2007-03-07 Thread Ben Goertzel





We will only know for sure whether AIXI theory was useful or
not when we can look back 1000 years from now.

Shane



And of course, if we succeed in creating superhuman AGIs at time T,
1000 human-years of scientific advance will likely occur within a rather
brief time-period after time T ;-)

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Uselessness of AIXI

2007-03-06 Thread Ben Goertzel

Matt Mahoney wrote:

--- Ben Goertzel <[EMAIL PROTECTED]> wrote:
  
What AIXI does is to continually search through the space of all 
possible programs, to find the one that in hindsight (based on 
probabilistic inference with an Occam prior) would have best helped it 
achieve its goals -- and then enact that program.


This is not something you can do on a realistic scale. 



That is not even something you can do with infinite computing power (a Turing
machine).  


OK, I phrased it imprecisely, but the precise phrasing uses math ... 
we've both read the paper...



But this does not make AIXI useless.  AIXI is useful precisely
because it tells us this.
  
AIXI may be useful in a theoretical, conceptual, philosophical sense, but 
it provides close to zero

pragmatic guidance for the task of actually creating AGIs in reality...

Ben


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


[singularity] Uselessness of AIXI

2007-03-06 Thread Ben Goertzel




This would be the paper, everyone:
http://www.vetta.org/documents/IDSIA-12-06-1.pdf

Shane - first you smack down the Goedel machine, and now AIXI! Is it 
genuinely
useless in practice, do you think? Hutter says one of his current 
research priorities

is to shrink it down into something that can run on existing machines...


Of course it's genuinely useless in practice!

What AIXI does is to continually search through the space of all 
possible programs, to find the one that in hindsight (based on 
probabilistic inference with an Occam prior) would have best helped it 
achieve its goals -- and then enact that program.


This is not something you can do on a realistic scale. 
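
To put a (deliberately silly) toy behind that claim -- this is my own
caricature in Python, nothing like Hutter's actual construction -- even when
"programs" are just tiny bit strings, exhaustively scoring every one of them
in hindsight blows up exponentially in the program length you allow:

from itertools import product

def hindsight_score(program, history):
    # Made-up stand-in: reward programs that would have "predicted" the
    # observed bits, discounted by an Occam prior of 2^-len(program).
    matches = sum(p == h for p, h in zip(program, history))
    return (2 ** -len(program)) * matches

def best_program(history, max_len):
    candidates, best, best_score = 0, None, float("-inf")
    for length in range(1, max_len + 1):
        for program in product((0, 1), repeat=length):
            candidates += 1
            score = hindsight_score(program, history)
            if score > best_score:
                best, best_score = program, score
    return best, candidates

history = (1, 0, 1, 1, 0, 1)
for max_len in (8, 12, 16):
    _, n = best_program(history, max_len)
    print(max_len, n)   # the candidate count doubles with every extra bit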

Sure, you can view any learning system as a specifically biased way of 
"searching through the space of all possible programs", but this insight 
gets you about .1% of the way toward designing a thinking machine...


To make an analogy: if you have enough time, you can generate a literary 
masterpiece by randomly typing characters until a masterpiece happens to 
appear.  However, "scaling down" this strategy to reasonably small 
amounts of time doesn't work very well.  In fact the strategy has nearly 
nothing to do with appropriate strategies for generating literary 
masterpieces given realistic amounts of time.  And the parallels one can 
draw (yes, random variation plays a role in creative insight, etc.) 
really don't get you very far toward knowing how to realistically 
produce a literary masterpiece.


-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe

2007-03-05 Thread Ben Goertzel

Hi Shane,

did become possible, won't the Block argument then
become a serious problem?  If you did have infinite computation then 
you could
just build an AIXI and be done.  There would be no point in building a 
different

system that was provably less powerful and yet more complex to construct.
Such a system could find a cure to cancer and rework all known mathematics
and extend it a billion fold in the blink of an eye... but would such 
a system
really be "intelligent"?  To me that seems like a completely pointless 
thing to
worry about in the presence of unlimited computation power.  It would 
be like
arguing that the plane I went on vacation on wasn't really flying 
because inside
it wasn't being driven by a mechanism that was producing bird poop.  
For me
the important point is that the plane achieves the function of 
flight.  This is what
I care about when going on vacation and it's the most useful concept 
of "flight"
to me.  The same would be true of intelligence; if it can work out how 
to cure
somebody of cancer and billions of other totally amazing things, in 
the end that
is what I care about.  I call it intelligence.  If you don't want to, 
then what I want

to achieve is not what you call intelligence.


I would phrase things a little differently, personally.

I definitely agree with you that "intelligence" should be conceived as 
having to do with

functionality, not internal structures/dynamics.

However, I don't think that intelligence is the only important goal; it 
is not the only

thing I want to achieve in creating artificial minds.

There is something, to me, profoundly unaesthetic about systems like 
AIXI.  If I had infinite
computing power, I'd rather build an AIXI **plus** some other systems 
with more
aesthetic baggage like self, will, awareness, feeling, etc.  The 
computing power of this
combined set of systems would not exceed that of AIXI, but the aesthetic 
quality

would, according to my own aesthetics ;-)

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Ben Goertzel




Richard, I long ago proposed a working definition of intelligence as 
"Achieving complex goals in complex environments."  I then went 
through a bunch of trouble to precisely define all the component 
terms of that definition; you can consult the Appendix to my 2006 
book "The Hidden Pattern"Shane Legg and Marcus Hutter 
have proposed a related definition of intelligence in a recent paper...


Anyone can propose a definition.  The point of my objection is that a 
definition has to have some way to be compared against reality.


Suppose I define intelligence to be:

"A funtion that maps goals G and world states W onto action states A, 
where G, W and A are any mathematical entities whatsoever."


That would make any function that maps X × Y into Z an 
"intelligence".


Such a definition would be pointless.  The question is *why* would it 
be pointless?  What criteria are applied, in order to determine 
whether the definition has something to do with the thing that in everyday 
life we call intelligence?


The difficulty in comparing my definition against reality is that my 
definition defines intelligence relative to a "complexity" measure.


For this reason, it is fundamentally a subjective definition of 
intelligence, except in the unrealistic case where "degree of complexity 
tends to infinity" (in which case all "reasonably general" complexity 
measures become equivalent, due to bisimulation of Turing machines).
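
Here is a toy rendering of that relativity in Python (made-up numbers and a
made-up agent interface, obviously nothing like the actual formalization in
the book's Appendix): the whole score is parameterized by whatever complexity
measure you plug in, which is exactly the subjective part.

def intelligence_score(agent, tasks, complexity):
    # tasks: list of (goal, environment) pairs
    # agent(goal, environment) -> probability of success in [0, 1]
    total = sum(complexity(g, e) * agent(g, e) for g, e in tasks)
    norm = sum(complexity(g, e) for g, e in tasks)
    return total / norm if norm else 0.0

# Two judges of "complexity" can rank the same agent differently:
tasks = [("tie shoes", "home"), ("prove theorem", "math dept")]
judge_a = lambda g, e: 1.0
judge_b = lambda g, e: 10.0 if "theorem" in g else 1.0
agent = lambda g, e: 0.9 if "shoes" in g else 0.2

print(intelligence_score(agent, tasks, judge_a),
      intelligence_score(agent, tasks, judge_b))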


To qualitatively compare my definition to the "everyday life" definition 
of intelligence, we can check its consistency with our everyday life 
definition of "complexity."   Informally, at least, my definition seems 
to check out to me: intelligence according to an IQ test does seem to 
have something to do with the ability to achieve complex goals; and, the 
reason we think IQ tests mean anything is that we think the ability to 
achieve complex goals in the test-context will correlate with the 
ability to achieve complex goals in various more complex environments 
(contexts).


Anyway, if I accept for instance **Richard Loosemore** as a measurer of 
the complexity of environments and goals, then relative to 
Richard-as-a-complexity-measure, I can assess the intelligence of 
various entities, using my definition


In practice, in building a system like Novamente, I'm relying on modern 
human culture's "consensus complexity measure" and trying to make a 
system that, according to this measure, can achieve a diverse variety of 
complex goals in complex situations...


P.S.  Quick sanity check:  you know the last comment in the quote you 
gave (about looking in the dictionary) was Matt's, not mine, right?




Yes...

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe

2007-03-04 Thread Ben Goertzel

Richard Loosemore wrote:

Matt Mahoney wrote:

--- Richard Loosemore <[EMAIL PROTECTED]> wrote:

What I wanted was a set of non-circular definitions of such terms as 
"intelligence" and "learning", so that you could somehow 
*demonstrate* that your mathematical idealization of these terms 
correspond with the real thing, ... so that we could believe that 
the mathematical idealizations were not just a fantasy.


The last time I looked at a dictionary, all definitions are 
circular.  So you win.


Richard, I long ago proposed a working definition of intelligence as 
"Achieving complex goals in complex environments."  I then went through 
a bunch of trouble to precisely define all the component terms of that 
definition; you can consult the Appendix to my 2006 book "The Hidden 
Pattern" 

Shane Legg and Marcus Hutter have proposed a related definition of 
intelligence in a recent paper...


-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Scenarios for a simulated universe

2007-03-01 Thread Ben Goertzel

Matt Mahoney wrote:

--- Jef Allbright <[EMAIL PROTECTED]> wrote:

  

On 3/1/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:



What I argue is this: the fact that Occam's Razor holds suggests that the
universe is a computation.
  

Matt -

Would you please clarify how/why you think B follows from A in your
preceding statement?



Hutter's proof requires that the environment have a computable distribution.
http://www.hutter1.net/ai/aixigentle.htm

So in any universe of this type, Occam's Razor should hold.  If Occam's Razor 
did not hold, then we could conclude that the universe is not computable.  The 
fact that Occam's Razor does hold means we cannot rule out the possibility 
that the universe is simulated.

  


Matt, I really don't see why you think Hutter's work shows that "Occam's 
Razor holds" in any
context except AI's with unrealistically massive amounts of computing 
power (like AIXI and AIXItl)


In fact I think that it **does** hold in other contexts (as a strategy 
for reasoning by modest-resource minds like humans or Novamente), but I 
don't see how Hutter's work shows this...
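
(For readers who want the formal statement behind the terminology -- 
standard notation, not quoted from Matt's post -- the Solomonoff prior 
over observation sequences is

    M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}

the sum running over programs p whose output on the universal machine U 
begins with x.  Shorter programs dominate the mixture, which is the 
precise sense in which "Occam's Razor holds" for computable 
environments: a predictor based on M has total prediction error bounded 
in terms of the true environment's complexity.  AIXI is then the agent 
that acts optimally with respect to a mixture of this kind over 
environments -- which is exactly where the unrealistically massive 
computing power comes in.)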


-- Ben G


-- Matt Mahoney, [EMAIL PROTECTED]

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983

  


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: [singularity] Vinge & Goerzel = Uplift Academy's Good Ancestor Principle Workshop 2007

2007-02-19 Thread Ben Goertzel

Joshua Fox wrote:

Any comments on this: http://news.com.com/2100-11395_3-6160372.html

Google has been mentioned in the context of  AGI, simply because they 
have money, parallel processing power, excellent people, an 
orientation towards technological innovation, and important narrow AI 
successes and research goals. Do Page's words mean that Google is 
seriously working towards AGI? If so, does anyone know the people 
involved? Do they have a chance and do they understand the need for 
Friendliness?


This topic has come up intermittently over the last few years...

Google can't be counted out, since they have a lot of $$ and machines 
and a lot of smart people.


However, no one has ever pointed out to me a single Google hire with a 
demonstrated history of serious thinking about AGI -- as opposed to 
statistical language processing, machine learning, etc.  

That doesn't mean they couldn't have some smart staff who shifted 
research interest to AGI after moving to Google, but it doesn't seem 
tremendously likely.


Please remember that the reward structure for technical staff within 
Google is as follows: Big bonuses and copious approval go to those who 
do cool stuff that actually gets incorporated in Google's customer 
offerings...  I don't have the impression they are funding a lot of 
blue-sky AGI research outside the scope of text search, ad placement, 
and other things related to their biz model.


So, my opinion remains that: Google staff described as working on "AI" 
are almost surely working on clever variants of highly scalable 
statistical language processing.   So, if you believe that this kind of 
work is likely to lead to powerful AGI, then yeah, you should attach a 
fairly high probability to the outcome that Google will create AGI.  
Personally I think it's very unlikely (though not impossible) that AGI 
is going to emerge via this route.


Evidence arguing against this opinion is welcomed ;-)

-- Ben G



Also: Vinge's notes on his Long Now Talk, "What If the Singularity 
Does NOT Happen"  are at   
http://www-rohan.sdsu.edu/faculty/vinge/longnow/index.htm 



I'm delighted to see counter-Singularity analysis from a respected 
Singularity thinker. This further reassurance that the flip-side 
is being considered deepens my beliefs in pro-Singularity arguments.


Joshua



This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983 


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


[singularity] Storytelling, empathy and AI

2006-12-20 Thread Ben Goertzel

This post is a brief comment on PJ Manney's interesting essay,

http://www.pj-manney.com/empathy.html

Her point (among others) is that, in humans, storytelling is closely
tied with empathy, and is a way of building empathic feelings and
relationships.  Mirror neurons and other related mechanisms are
invoked.

I basically agree with all this.

However, I would add that among AI's with a nonhuman cognitive
architecture, this correlation need not be the case.  Humans are built
so that among humans storytelling helps build empathy.  OTOH, for an
AI storytelling might not increase empathy one whit.

It is interesting to think specifically about the architectural
requirements that "having storytelling increase empathy" may place on
an AI system.

For example, to encourage the storytelling/empathy connection to exist
in an AI system, one might want to give the system an explicit
cognitive process of hypothetically "putting itself in someone else's
place."  So, when it hears a story about character X, it creates
internally a fabricated story in which it takes the place of character
X.  There is no reason to think this kind of strategy would come
naturally to an AI, particularly given its intrinsic dissimilarity to
humans.  But there is also no reason that kind of strategy couldn't be
forced, with the impact of causing the system to understand humans
better than it might otherwise.
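
As a toy illustration of the kind of mechanism meant here -- 
hypothetical names and structures, not drawn from Novamente or any 
other codebase -- the "forced" strategy amounts to re-instantiating a 
heard story with the system's own self-model bound to the protagonist 
role, and then running the ordinary self-directed appraisal machinery 
on the result:

    # Toy sketch of "put yourself in character X's place".
    # All structures here are illustrative only.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Event:
        agent: str
        action: str
        outcome: str                    # e.g. "loss", "reward", "threat"

    def retell_with_self(story: List[Event], character: str,
                         self_id: str = "SELF") -> List[Event]:
        """Fabricate the internal story in which the system takes the
        place of the chosen character."""
        return [Event(self_id if e.agent == character else e.agent,
                      e.action, e.outcome) for e in story]

    def empathic_response(story: List[Event], character: str,
                          appraise: Callable[[Event], float]) -> float:
        """Apply the system's ordinary self-directed appraisal to the
        retold story; the summed valence is a crude stand-in for an
        empathic reaction to the character's situation."""
        return sum(appraise(e) for e in retell_with_self(story, character)
                   if e.agent == "SELF")

    # Example: a simple valence table and a two-event story about Alice.
    valence: Dict[str, float] = {"loss": -1.0, "reward": 1.0, "threat": -0.5}
    story = [Event("Alice", "loses her job", "loss"),
             Event("Alice", "is helped by a friend", "reward")]
    print(empathic_response(story, "Alice", lambda e: valence[e.outcome]))

The only point of the sketch is that the substitution step has to be 
built in explicitly; nothing in a generic architecture forces it to 
happen.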

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: Re: [singularity] Ten years to the Singularity ??

2006-12-20 Thread Ben Goertzel

Yes, this is one of the things we are working towards with Novamente.
Unfortunately, meeting this "low barrier" based on a genuine AGI
architecture is a lot more work than doing so in a more bogus way
based on an architecture without growth potential...

ben

On 12/20/06, Joshua Fox <[EMAIL PROTECTED]> wrote:



Ben,

If I am beating a dead horse, please feel free to ignore this, but I'm
imagining a prototype that shows glimmerings of AGI. Such a system, though
not useful or commercially viable, would  sometimes act in interesting, even
creepy, ways. It might be inconsistent and buggy, and work in a limited
domain.

This sets a low barrier, since existing systems occasionally meet this
description. The key difference is that the hypothesized prototype would
have an AGI engine under it and would rapidly improve.

Joshua



> According to the approach I have charted out (the only one I understand),
> the true path to AGI does not really involve commercially valuable
> intermediate stages.  This is for reasons similar to the reasons that
> babies are not very economically useful.
>
> .But my best guess is that this is an illusion.  IMO by
> far the best path to a true AGI is by building an artificial baby and
> educating it and incrementally improving it, and by its very nature
> this path does not lead to incremental commercially viable results.
>

 
 This list is sponsored by AGIRI: http://www.agiri.org/email

To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: [singularity] Ten years to the Singularity ??

2006-12-15 Thread Ben Goertzel

Well, the requirements to **design** an AGI on the high level are much
steeper than the requirements to contribute (as part of a team) to the
**implementation** (and working out of design details) of AGI.

I dare say that anyone with a good knowledge of C++, Linux, and
undergraduate computer science -- and who has done a decent amount of
reading in cognitive science -- has the background to contribute to an
AGI project such as Novamente.

Perhaps the Novamente project is now at the stage where it could
benefit from 3-4 "junior" AI software developers.  But even if so, the
problem still exists of finding say $100K to pay these folks for a
year.  Still, this is not so much funding to find, and it's an
interesting possible direction to take.  So far  I have been skeptical
of the ability of more "junior" folks to really contribute, but I
think the project may be at a level of maturity now where this may be
sensible...

Something for me to think about during the holidays...

-- Ben

On 12/15/06, Hank Conn <[EMAIL PROTECTED]> wrote:

"I'm also surprised there aren't more programmers or AGI enthusiasts who
aren't willing to work for beans to further this goal.  We're just two
students in Arizona, but we'd both gladly give up our current lives to work
for 15-20G's a year and pull 80 hour weeks eating this stuff up.  Having a
family is valid excuse, but there are others out there who aren't tied
down.  We may not have PhD's, but we learn quickly."

I know a lot of people in this position (myself included)... although I
think the problem is that creating AGI requires you to have a lot of
background knowledge and experience to be able design and solve problems on
that level (way more than I have probably).

-hank


On 12/12/06, Josh Treadwell <[EMAIL PROTECTED]> wrote:
>
> What kind of numbers are we talking here to fund a single AGI project like
Novamente?  If I could, I'd instantly dedicate all my time and resources to
developing AI, but because most of my knowledge is autodidactic, I don't
get considered for any jobs.  So for now, I'm stuck in the drudgery of
working 60 hours a week doing IT, while struggling to complete and pay for
college.  As soon as I get out of school I'll have to start paying off
student loans, which won't be feasible in an AGI position (due to lack of
adequate funding).
>
> Thus, a friend of mine and I have decided to take the lower road and start
building lame websites (myspace profile template pages, ggle.com like
pages, other lame ad-words pages) in order to (a) quit our jobs, and (b)
fund our own or others research.  It boggles my mind that no one has become
financially successful and decided to throw a significant sum of money at
Novamente and the like.  For the love of Pete, sacrificing a single
Budweiser Superbowl commercial could fund years of AGI research.  I'm also
surprised there aren't more programmers or AGI enthusiasts who aren't
willing to work for beans to further this goal.  We're just two students in
Arizona, but we'd both gladly give up our current lives to work for 15-20G's
a year and pull 80 hour weeks eating this stuff up.  Having a family is
valid excuse, but there are others out there who aren't tied down.  We may
not have PhD's, but we learn quickly.
>
>
> BTW Ben, for the love of God, can you please tell me when your AGI book is
coming out?  It's been in my Amazon shopping cart for 6 months now!  How
about I just pay you via paypal, and you send me a PDF?
>
>
> Josh Treadwell
> [EMAIL PROTECTED]
> 
 This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?list_id=11983
> --
> This message has been scanned for viruses and
> dangerous content by MailScanner, and is
> believed to be clean.
>

 

 This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: [singularity] Ten years to the Singularity ??

2006-12-12 Thread Ben Goertzel

 BTW Ben, for the love of God, can you please tell me when your AGI book is
coming out?  It's been in my Amazon shopping cart for 6 months now!


The publisher finally mailed me a copy of the book last week!

Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: Re: [singularity] Ten years to the Singularity ??

2006-12-12 Thread Ben Goertzel

Hi,


You mention "intermediate steps to AI", but the question is whether these
are narrow-AI applications (the bane of AGI projects) or some sort of
(incomplete) AGI.


According to the approach I have charted out (the only one I understand),
the true path to AGI does not really involve commercially valuable
intermediate stages.  This is for reasons similar to the reasons that
babies are not very economically useful.

So, yeah, the only way I see to use commercial AI to fund AGI is to
build narrow-AI projects and sell them, and do a combination of

a) using the profits to fund AGI
b) using common software components btw the narrow-AI and AGI systems,
so the narrow-AI work can help the AGI directly to some extent

Of course, if you believe (as e.g. the Google founders do) that Web
search can be a path to AGI, then you have an easier time of it,
because there is commercial work that appears to be on the direct path
to true AGI.  But my best guess is that this is an illusion.  IMO by
far the best path to a true AGI is by building an artificial baby and
educating it and incrementally improving it, and by its very nature
this path does not lead to incremental commercially viable results.

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: [singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel

The exponential growth pattern holds regardless of whether you
normalize by global population size or not...

-- Ben

On 12/11/06, Chuck Esterbrook <[EMAIL PROTECTED]> wrote:

Regarding de Garis' graph of the number of people who've died in
different wars throughout history, are the numbers raw or divided by
the population size?

-Chuck

On 12/11/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> Hi,
>
> For anyone who is curious about the talk "Ten Years to the Singularity
> (if we Really Really Try)" that I gave at Transvision 2006 last
> summer, I have finally gotten around to putting the text of the speech
> online:
>
> http://www.goertzel.org/papers/tenyears.htm
>
> The video presentation has been online for a while
>
> video.google.com/videoplay?docid=1615014803486086198
>
> (alas, the talking is a bit slow in that one, but that's because the
> audience was in Finland and mostly spoke English as a second
> language.)  But the text may be preferable to those who, like me, hate
> watching long videos of people blabbering ;-)
>
> Questions, comments, arguments and insults (preferably clever ones) welcome...
>
> -- Ben
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?list_id=11983
>

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


Re: Re: Re: [singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel

My main reason for resisting the urge to open-source Novamente is AGI
safety concerns.

At the moment Novamente is no danger to anyone, but once it gets more
advanced, I worry about irresponsible people forking the codebase
privately and creating an AGI customized for malicious purposes...

This is an issue I'm still thinking over, but anyway, that is my  main
reason for not having gone the open-source route up to this point...

-- Ben

On 12/11/06, Bo Morgan <[EMAIL PROTECTED]> wrote:


Ben,

My A.I. group of friends (was: CommonSense Computing Group, and is now
more scattered) has been trying to do an open-source development for a set
of programs that are working toward human-scale intelligence.  For
example, Hugo Liu's commonsense reasoning toolkit, ConceptNet, was ported
from Python to many other more efficient versions by the internet
community at large (and used in many other research projects in our lab
and around the world).

Have you thought about releasing your A.G.I. codebase that you mentioned
to the general public so that it can be developed by everyone?  I, for
one, would be interested in downloading it and trying it out.

I realize that research software is often not documented or easily
digestable, but it seems like one of the most efficient ways to attack
the software development problem.

Bo

On Mon, 11 Dec 2006, Ben Goertzel wrote:

) Hi Joshua,
)
) Thanks for the comments
)
) Indeed, the creation of a thinking machine is not a typical VC type
) project.  I know a few VC's personally and am well aware of their way
) of thinking and the way their businesses operate.  There is a lot of
) "technology risk" in the creation of an AGI, as compared to the sorts
) of projects that VC's are typically interested in funding today.  There
) is just no getting around this fact.  From a typical VC perspective,
) building a thinking machine is a project with too much risk and too
) much schedule uncertainty in spite of the obviously huge payoff upon
) success.
)
) Of course, it's always possible a rule-breaking VC could come along
) with an interest in AGI.  VC's have funded nanotech projects with a
) 10+ year timescale to product, for example.
)
) Currently our fundraising focus is on:
)
) a) transhumanist angel investors interested in funding the creation of true
) AGI
)
) b) seeking VC money with a view toward funding the rapid construction
) and monetization of software products that are
) -- based on components of our AGI codebase
) -- incremental steps toward AGI.
)
) With regard to b, we are currently working with a business consultant
) to formulate a professional "investor toolkit" to present to
) interested VC's.
)
) Unfortunately, US government grant funding for out-of-the-mainstream
) AGI projects is very hard to come by these days.  OTOH, the Chinese
) government has expressed some interest in Novamente, but that funding
) source has some serious issues involved with it, needless to say...
)
) -- Ben G
)
)
) On 12/11/06, Joshua Fox <[EMAIL PROTECTED]> wrote:
) >
) > Ben,
) >
) > I saw the video.  It's wonderful to see this direct aim at the goal of the
) > positive Singularity.
) >
) > If I could comment from the perspective of the software industry, though
) > without expertise in the problem space, I'd say that there are some phrases
) > in there which would make me, were I a VC, suspicious. (Of course VC's
) > aren't the direct audience, but ultimately someone has to provide the
) > funding you allude to.)
) >
) > When a visionary says that he requires more funding and ten years, this
) > often indicates an unfocused project that will never get on-track. In
) > software projects it is essential to aim for real results, including a beta
) > within a year and multiple added-value-providing versions within
) > approximately 3 years. I think that this is not just investor impatience --
) > experience shows that software projects planned for a much longer schedule
) > tend to get off-focus.
) >
) > I know that you already realize this, and that you do have the focus; you
) > mention your plans, which I assume include meaningful intermediate
) > achievements in this incredibly challenging and extraordinary task, but this
) > the impression which comes across in the talk.
) >
) > Yours,
) >
) > Joshua
) >
) >
) >
) > 2006/12/11, Ben Goertzel <[EMAIL PROTECTED]>:
) > >
) > > Hi,
) > >
) > > For anyone who is curious about the talk "Ten Years to the Singularity
) > > (if we Really Really Try)" that I gave at Transvision 2006 last
) > > summer, I have finally gotten around to putting the text of the speech
) > > online:
) > >
) > > http://www.goertzel.org/papers/tenyears.htm
) > >
) > > The video presentation has been online for a wh

Re: Re: [singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel

Hi Joshua,

Thanks for the comments

Indeed, the creation of a thinking machine is not a typical VC type
project.  I know a few VC's personally and am well aware of their way
of thinking and the way their businesses operate.  There is a lot of
"technology risk" in the creation of an AGI, as compared to the sorts
of projects that VC's are typically interested in funding today.  There
is just no getting around this fact.  From a typical VC perspective,
building a thinking machine is a project with too much risk and too
much schedule uncertainty in spite of the obviously huge payoff upon
success.

Of course, it's always possible a rule-breaking VC could come along
with an interest in AGI.  VC's have funded nanotech projects with a
10+ year timescale to product, for example.

Currently our fundraising focus is on:

a) transhumanist angel investors interested in funding the creation of true AGI

b) seeking VC money with a view toward funding the rapid construction
and monetization of software products that are
-- based on components of our AGI codebase
-- incremental steps toward AGI.

With regard to b, we are currently working with a business consultant
to formulate a professional "investor toolkit" to present to
interested VC's.

Unfortunately, US government grant funding for out-of-the-mainstream
AGI projects is very hard to come by these days.  OTOH, the Chinese
government has expressed some interest in Novamente, but that funding
source has some serious issues involved with it, needless to say...

-- Ben G


On 12/11/06, Joshua Fox <[EMAIL PROTECTED]> wrote:


Ben,

I saw the video.  It's wonderful to see this direct aim at the goal of the
positive Singularity.

If I could comment from the perspective of the software industry, though
without expertise in the problem space, I'd say that there are some phrases
in there which would make me, were I a VC, suspicious. (Of course VC's
aren't the direct audience, but ultimately someone has to provide the
funding you allude to.)

When a visionary says that he requires more funding and ten years, this
often indicates an unfocused project that will never get on-track. In
software projects it is essential to aim for real results, including a beta
within a year and multiple added-value-providing versions within
approximately 3 years. I think that this is not just investor impatience --
experience shows that software projects planned for a much longer schedule
tend to get off-focus.

I know that you already realize this, and that you do have the focus; you
mention your plans, which I assume include meaningful intermediate
achievements in this incredibly challenging and extraordinary task, but this
the impression which comes across in the talk.

Yours,

Joshua



2006/12/11, Ben Goertzel <[EMAIL PROTECTED]>:
>
> Hi,
>
> For anyone who is curious about the talk "Ten Years to the Singularity
> (if we Really Really Try)" that I gave at Transvision 2006 last
> summer, I have finally gotten around to putting the text of the speech
> online:
>
> http://www.goertzel.org/papers/tenyears.htm
>
> The video presentation has been online for a while
>
> video.google.com/videoplay?docid=1615014803486086198
>
> (alas, the talking is a bit slow in that one, but that's because the
> audience was in Finland and mostly spoke English as a second
> language.)  But the text may be preferable to those who, like me, hate
> watching long videos of people blabbering ;-)
>
> Questions, comments, arguments and insults (preferably clever ones)
welcome...
>
> -- Ben
>
> -
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?list_id=11983
>

 
 This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


[singularity] Ten years to the Singularity ??

2006-12-11 Thread Ben Goertzel

Hi,

For anyone who is curious about the talk "Ten Years to the Singularity
(if we Really Really Try)" that I gave at Transvision 2006 last
summer, I have finally gotten around to putting the text of the speech
online:

http://www.goertzel.org/papers/tenyears.htm

The video presentation has been online for a while

video.google.com/videoplay?docid=1615014803486086198

(alas, the talking is a bit slow in that one, but that's because the
audience was in Finland and mostly spoke English as a second
language.)  But the text may be preferable to those who, like me, hate
watching long videos of people blabbering ;-)

Questions, comments, arguments and insults (preferably clever ones) welcome...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=11983


[singularity] Goertzel meets Sirius

2006-10-31 Thread Ben Goertzel

Me, interviewed by R.U. Sirius, on AGI, the Singularity, philosophy of
mind/emotion/immortality and so forth:

http://mondoglobo.net/neofiles/?p=78

Audio only...

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


[singularity] DC Future Salon - Metaverse Roadmap - Weds Nov 8, 7-9 PM

2006-10-31 Thread Ben Goertzel

For anyone in the DC area, the following event may be interesting...

Not directly AGI-relevant, but interesting in that one day virtual
worlds like Second Life may be valuable for AGI in terms of giving
them a place to play around and interact with humans, without need for
advanced robotics...

-- Ben G

**



Hello Futurists,

A quick reminder that our November meeting is next Wednesday...

On Weds. Nov. 8, 7-9 PM, the DC Future Salon will cover
metaverse world development with Electric Sheep Company CEO Sibley
Verbeck and other ESC staff members. To accommodate a more
interactive online multimedia presentation, we will be meeting at ESC's
offices in downtown DC, location below.

LOCATION: (note - different from our usual location)
19th & L (red line to Farragut North)
1133 19th St., NW, on the 9th floor; shared offices with Common Cause

WHEN:
Wednesday, November 8, 2006
7:00 PM - 9:30 PM

RECENT PRESS:
http://www.electricsheepcompany.com/press.php

DESCRIPTION:
Several Electric Sheep Company staff members will be discussing the
Metaverse Roadmap and Electric Sheep Company's role and activities in
the roadmapping project. The Metaverse Roadmap
(http://metaverseroadmap.org/) is a visioning and execution project
sponsored by the Accelerating Change Foundation to develop a coherent
path to the Internet dominated by 3D technology, social spaces and
economies. The Electric Sheep Company is a DC-based interactive creation
agency designing 3D sims and other products and services for Second
Life. Electric Sheep Company's founder Sibley Verbeck was featured in
Washington Techway Magazine as one of DC's top young technology
executives and selected as one of MIT Technology Review's top 100
technology innovators worldwide under the age of 35 in 2003.

Interview with Electric Sheep Company CEO Sibley "Hathor" Verbeck
http://www.jasonpettus.com/inthegrid/2006/10/the_man_in_the_high_castle_an_1.html

SECOND LIFE:
Second Life is a 3-D virtual world started in 2003 by San
Francisco-based Linden Lab. Users can sign up for free and create
avatars to represent themselves; then start exploring the 3D world,
interacting with others and creating objects. Second Life just reached
over 1 million resident accounts with about half being active in the
last 60 days. $7m USD worth of transactions is occurring in-world each
month.

FUTURE SALON SPEAKERS:
Sibley "Sibley Hathor" Verbeck, CEO, Electric Sheep Company
Jonah "Hank Hoodoo" Gold, COO, Electric Sheep Company
Chris "Satchmo Prototype" Carella, Producer, Electric Sheep Company
Becky "Digi Vox" Carella, Developer, Electric Sheep Company
Bios: http://www.electricsheepcompany.com/people.php

COST: Free

NEXT MEETING:
Weds, Dec 6: Specific topic TBD but in the general area of Artificial
General Intelligence innovations with Dr. Moshe Looks.

JOIN THE EMAIL LIST:
http://tech.groups.yahoo.com/group/dcfuture/

ADD THE EVENT TO YOUR CALENDAR:
http://upcoming.org/event/112608

THE SALON:
The Washington DC Future Salon is part of a nationwide network of future
salons discussing cutting edge science and technology innovations and
their implications: http://accelerating.org/futuresalon.html
YahooGroup and Homepage
http://tech.groups.yahoo.com/group/dcfuture/
http://www.agiri.org/forum/index.php?act=ST&f=33&t=127


--

Melanie Swan
Co-moderator DC Future Salon
Phone: 415-505-4426
Fax: 801-772-6349
http://futurememes.blogspot.com
http://www.melanieswan.com

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [singularity] Re: Motivational Systems that are stable

2006-10-30 Thread Ben Goertzel

Hi,


I feel a little sad, however, that you simultaneously bow out of the
debate AND fire some closing shots, in the form of a new point (the
issue of whether or not this is "proof") and some more complaints about
the "vague statements" in my emails.  I clearly cannot reply to these,
because you just left the floor.


Sorry about that...

Anyway I look forward to reading your papers, etc. ;-)

ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-30 Thread Ben Goertzel

Hi Richard,

Let me go back to start of this dialogue...

Ben Goertzel wrote:

Loosemore wrote:

> The motivational system of some types of AI (the types you would
> classify as tainted by complexity) can be made so reliable that the
> likelihood of them becoming unfriendly would be similar to the
> likelihood of the molecules of an Ideal Gas suddenly deciding to split
> into two groups and head for opposite ends of their container.


Wow!  This is a very strong hypothesis...  I really doubt this
kind of certainty is possible for any AI with radically increasing
intelligence ... let alone a complex-system-type AI with highly
indeterminate internals...

I don't expect you to have a proof for this assertion, but do you have
an argument at all?


Your subsequent responses have shown that you do have an argument, but
not anything close to a proof.

And, your argument has not convinced me, so far.  Parts of it seem
vague to me, but based on my limited understanding of your argument, I
am far from convinced that AI systems of the type you describe, under
conditions of radically improving intelligence, "can be made so
reliable that the likelihood of them becoming unfriendly would be
similar to the likelihood of the molecules of an Ideal Gas suddenly
deciding to split into two groups and head for opposite ends of their
container."

At this point, my judgment is that carrying on this dialogue further
is not the best expenditure of my time.  Your emails are long and
complex mixtures of vague and precise statements, and it takes a long
time for me to read them and respond to them with even a moderate
level of care.

I remain interested in your ideas and if you write a paper or book on
your ideas I will read it as my schedule permits.  But I will now opt
out of this email thread.

Thanks,
Ben


On 10/30/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Ben,

I guess the issue I have with your critique is that you say that I have
given no details, no rigorous argument, just handwaving, etc.

But you are being contradictory:  on the one hand you say that the
proposal is vague/underspecified/does not give any arguments ... but
then having said that, you go on to make specific criticisms and say
that it is wrong on this or that point.

I don't think you can have it both ways.  Either you don't see an
argument, and rest your case, or you do see an argument and want to
critique it.  You are trying to do both:  you repeatedly make broad
accusations about the quality of the proposal ("some very hand-wavy,
intuitive suggestions", "you have not given any sort of rigorous
argument", "... your intuitive suggestions...", "you did not give any
details as to why you think your proposal will 'work'", etc. etc.), but
then go on to make specific points about what is wrong with it.

Now, if the specific points you make were valid criticisms, I could
perhaps overlook the inconsistency and just address the criticisms.  But
that is exactly what I just did, and your specific criticisms, as I
explained in the last message, were mostly about issues that had nothing
to do with the general class of architectures I proposed, but only with
weird cases or weird issues that had no bearing on my case.

Since you just dropped most of those issues (except one, which I will
address in a moment), I must assume that you accept that I have given a
good reply to each of them.  But instead of conceding that the argument
I gave must therefore have some merit, you repeat -- even more
insistently than before -- that there is nothing in the argument, that
it is all just vague handwaving etc.

No fair!

This kind of response:

   -  Your argument is either too vague or I don't understand it.

Would be fine, and I would just try to clarify it in the future.

But this response:

   -  This is all just handwaving, with no details and no argument.
   -  It is also a wrong argument, for these reasons:
   -  [Reasons that are mostly just handwaving or irrelevant].

Is not so good.

*

I will say something about the specific point you make about my claim
that as time goes on the system will check new ideas against previous
ones to make sure that new ones are consistent with ALL the old ones, so
therefore it will become more and more stable.

What you have raised is a minor technical issue, together with some
confusion about what exactly I meant:

The "ideas" being checked against "all previous ideas" are *not* the
incoming general learned concepts (cup, salt, cricket, democracy,
sneezes. etc.) but the concepts related to planned actions and the
system's base of moral/ethical/motivational concerns.  Broadly speaking,
it is when there is a new "perhaps I should do this ..." idea that the
comparison starts.  I did actually say this, but it was a little
obscurely worded.

Now, when I said "

[singularity] Fwd: "After Life" by Simon Funk

2006-10-29 Thread Ben Goertzel

FYI

-- Forwarded message --
From: Eliezer S. Yudkowsky <[EMAIL PROTECTED]>
Date: Oct 30, 2006 12:14 AM
Subject: "After Life" by Simon Funk
To: [EMAIL PROTECTED]


http://interstice.com/~simon/AfterLife/index.html

An online novella, with hardcopy purchaseable from Lulu.
Theme: Uploading.
Author: >H/rationalist.

--
Eliezer S. Yudkowsky  http://singinst.org/
Research Fellow, Singularity Institute for Artificial Intelligence

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: [agi] Re: [singularity] Motivational Systems that are stable

2006-10-29 Thread Ben Goertzel

Hi,


There is something about the gist of your response that seemed strange
to me, but I think I have put my finger on it:  I am proposing a general
*class* of architectures for an AI-with-motivational-system.  I am not
saying that this is a specific instance (with all the details nailed
down) of that architecture, but an entire class ... an approach.

However, as I explain in detail below, most of your criticisms are that
there MIGHT be instances of that architecture that do not work.


No.   I don't see why there will be any instances of your architecture
that do "work" (in the sense of providing guaranteeable Friendliness
under conditions of radical, intelligence-increasing
self-modification).

And you have not given any sort of rigorous argument that such
instances will exist

Just some very hand-wavy, intuitive suggestions, centering on the
notion that (to paraphrase) "because there are a lot of constraints, a
miracle happens"  ;-)

I don't find your intuitive suggestions foolish or anything, just
highly sketchy and unconvincing.

I would say the same about Eliezer's attempt to make a Friendly AI
architecture in his old, now-repudiated-by-him essay Creating a
Friendly AI.  A lot in CFAI seemed plausible to me , and the intuitive
arguments were more fully fleshed out than your in your email
(naturally, because it was an article, not an email) ... but in the
end I felt unconvinced, and Eliezer eventually came to agree with me
(though not on the best approach to fixing the problems)...


 > In a radically self-improving AGI built according to your
 > architecture, the set of constraints would constantly be increasing in
 > number and complexity ... in a pattern based on stimuli from the
 > environment as well as internal stimuli ... and it seems to me you
 > have no way to guarantee based on the smaller **initial** set of
 > constraints, that the eventual larger set of constraints is going to
 > preserve "Friendliness" or any other criterion.

On the contrary, this is a system that grows by adding new ideas whose
motivational status must be consistent with ALL of the previous ones, and
the longer the system is allowed to develop, the deeper the new ideas
are constrained by the sum total of what has gone before.


This does not sound realistic.  Within realistic computational
constraints, I don't see how an AI system is going to verify that each
of its new ideas is consistent with all of its previous ideas.

This is a specific issue that has required attention within the
Novamente system.  In Novamente, each new idea is specifically NOT
required to be verified for consistency against all previous ideas
existing in the system, because this would make the process of
knowledge acquisition computationally intractable.  Rather, it is
checked for consistency against those other pieces of knowledge with
which it directly interacts.  If an inconsistency is noticed, in
real-time, during the course of thought, then it is resolved
(sometimes by a biased random decision, if there is not enough
evidence to choose between two inconsistent alternatives; or
sometimes, if the matter is important enough, by explicitly
maintaining two inconsistent perspectives in the system, with separate
labels, and an instruction to pay attention to resolving the
inconsistency as more evidence comes in.)

The kind of distributed system you are describing seems NOT to solve
the computational problem of verifying the consistency of each new
knowledge item with each other knowledge item.



Thus:  if the system has grown up and acquired a huge number of examples
and ideas about what constitutes good behavior according to its internal
system of values, then any new ideas about new values must, because of
the way the system is designed, prove themselves by being compared
against all of the old ones.


If each idea must be compared against all other ideas, then cognition
has order n^2 where n is the number of ideas.  This is not workable.
Some heuristic shortcuts must be used to decrease the number of
comparisons, and such heuristics introduce the possibility of error...
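
To make the computational point concrete, here is a toy contrast -- 
hypothetical structures, not Novamente internals -- between checking a 
new item against everything in memory and checking it only against the 
items it directly interacts with:

    # Toy contrast between global and local consistency checking.
    # Names and structures are illustrative only.

    from typing import Callable, Dict, List, Set

    Item = str
    Consistent = Callable[[Item, Item], bool]

    def check_against_all(new: Item, memory: List[Item],
                          ok: Consistent) -> List[Item]:
        """O(n) work per new item, hence O(n^2) over the life of the
        system -- intractable once the knowledge base is large."""
        return [old for old in memory if not ok(new, old)]

    def check_local(new: Item, neighbors: Dict[Item, Set[Item]],
                    ok: Consistent) -> List[Item]:
        """Check only the items the new idea directly interacts with;
        conflicts elsewhere go unnoticed until those items happen to
        interact -- the heuristic shortcut that introduces the
        possibility of error."""
        return [old for old in neighbors.get(new, set()) if not ok(new, old)]

    # Example: the local check misses a conflict the global check finds,
    # because the conflicting item is not a direct neighbor.
    def ok(a: Item, b: Item) -> bool:
        return not (a == "fire is safe" and b == "fire is dangerous")

    memory = ["fire is dangerous", "ice is safe"]
    neighbors = {"fire is safe": {"ice is safe"}}
    print(check_against_all("fire is safe", memory, ok))  # ['fire is dangerous']
    print(check_local("fire is safe", neighbors, ok))     # []

Whether shortcuts of this kind can still support the sort of guarantee 
being claimed is exactly the question at issue.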


And I said "ridiculously small chance" advisedly:  if 10,000 previous
constraints apply to each new motivational idea, and if 9,900 of them
say 'Hey, this is inconsistent with what I think is a good thing to do',
then it doesn't have a snowball's chance in hell of getting accepted.
THIS is the deep potential well I keep referring to.


The problem, as I said, is posing a set of constraints that is both
loose enough to allow innovative new behaviors, and tight enough to
prevent the wrong behaviors...


I maintain that we can, during early experimental work, understand the
structure of the motivational system well enough to get it up to a
threshold of acceptably friendly behavior, and that beyond that point
its stability will be self-reinforcing, for the above reasons.


Well, I hope so ;-)

I don't rule out the possibility, but I don't feel you've argued for
it convi

Re: Re: [singularity] Re: [agi] Motivational Systems that are stable

2006-10-28 Thread Ben Goertzel

Hi,


The problem, Ben, is that your response amounts to "I don't see why that
would work", but without any details.


The problem, Richard, is that you did not give any details as to why
you think your proposal will "work" (in the sense of delivering a
system whose Friendliness can be very confidently known)


The central claim was that because the behavior of the system is
constrained by a large number of connections that go from motivational
mechanism to thinking mechanism, the latter is tightly governed.


But this claim, as stated, seems not to be true...  The existence of
a large number of constraints does not intrinsically imply "tight
governance."

Of course, though, one can posit the existence of a large number of
constraints that DOES provide tight governance.

But the question then becomes whether this set of constraints can
simultaneously provide

a) the tightness of governance needed to guarantee Friendliness

b) the flexibility of governance needed to permit general, broad-based learning

You don't present any argument as to why this is going to be the case

I just wonder if, in this sort of architecture you describe, it is
really possible to guarantee Friendliness without hampering creative
learning.  Maybe it is possible, but you don't give an argument re
this point.

Actually, I suspect that it probably **is** possible to make a
reasonably benevolent AGI according to the sort of NN architecture you
suggest ... (as well as according to a bunch of other sorts of
architectures)

However, your whole argument seems to assume an AGI with a fixed level
of intelligence, rather than a constantly self-modifying and improving
AGI.  If an AGI is rapidly increasing its hardware infrastructure and
its intelligence, then I maintain that guaranteeing its Friendliness
is probably impossible ... and your argument gives no way of getting
around this.

In a radically self-improving AGI built according to your
architecture, the set of constraints would constantly be increasing in
number and complexity ... in a pattern based on stimuli from the
environment as well as internal stimuli ... and it seems to me you
have no way to guarantee based on the smaller **initial** set of
constraints, that the eventual larger set of constraints is going to
preserve "Friendliness" or any other criterion.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [singularity] Convincing non-techie skeptics that the Singularity isn't total bunk

2006-10-28 Thread Ben Goertzel

Hi,


Do most in the field believe that only a war can advance technology to
the point of singularity-level events?
Any opinions would be helpful.


My view is that for technologies involving large investment in
manufacturing infrastructure, the US military is one very likely
source of funds.  But not the only one.  For instance, suppose that
computer manufacturers decide they need powerful nanotech in order to
build better and better processors: that would be a convincing
nonmilitary source for massive nanotech R&D funds.

OTOH for technologies like AGI where the main need is innovation
rather than expensive infrastructure, I think a key role for the
military is less likely.  I would expect the US military to be among
the leaders in robotics, because robotics is
costly-infrastructure-centric.  But not necessarily in robot
*cognition* (as opposed to hardware) because cognition R&D is more
innovation-centric.

Not that I'm saying the US military is incapable of innovation, just
that it seems to be more reliable as a source of development $$ for
technologies not yet mature enough to attract commercial investment,
than as a source for innovative ideas.

-- Ben

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


[singularity] Re: [agi] Motivational Systems that are stable

2006-10-27 Thread Ben Goertzel

Richard,

As I see it, in this long message you have given a conceptual sketch
of an AI design including a motivational subsystem and a cognitive
subsystem, connected via a complex network of continually adapting
connections.  You've discussed the way such a system can potentially
build up a self-model involving empathy and a high level of awareness,
and stability, etc.

All this makes sense, conceptually; though as you point out, the story
you give is short on details, and I'm not so sure you really know how
to "cash it out" in terms of mechanisms that will actually function
with adequate intelligence ... but that's another story...

However, you have given no argument as to why the failure of this kind
of architecture to be stably Friendly is so ASTOUNDINGLY UNLIKELY as
you claimed in your original email.  You have just argued why it's
plausible to believe such a system would probably have a stable goal
system.  As I see it, you did not come close to proving your original
claim, that


>> > The motivational system of some types of AI (the types you would
>> > classify as tainted by complexity) can be made so reliable that the
>> > likelihood of them becoming unfriendly would be similar to the
>> > likelihood of the molecules of an Ideal Gas suddenly deciding to split
>> > into two groups and head for opposite ends of their container.


I don't understand how this extreme level of reliability would be
achieved, in your design.

Rather, it seems to me that the reliance on complex, self-organizing
dynamics makes some degree of indeterminacy in the system almost
inevitable, thus making the system less than absolutely reliable.
Illustrating this point, humans (who are complex dynamical systems) are
certainly NOT reliable in terms of Friendliness or any other subtle
psychological property...

-- Ben G







On 10/25/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:

Ben Goertzel wrote:
> Loosemore wrote:
>> > The motivational system of some types of AI (the types you would
>> > classify as tainted by complexity) can be made so reliable that the
>> > likelihood of them becoming unfriendly would be similar to the
>> > likelihood of the molecules of an Ideal Gas suddenly deciding to split
>> > into two groups and head for opposite ends of their container.
>
> Wow!  This is a very strong hypothesis...  I really doubt this
> kind of certainty is possible for any AI with radically increasing
> intelligence ... let alone a complex-system-type AI with highly
> indeterminate internals...
>
> I don't expect you to have a proof for this assertion, but do you have
> an argument at all?
>
> ben

Ben,

You are being overdramatic here.

But since you ask, here is the argument/proof.

As usual, I am required to compress complex ideas into a terse piece of
text, but for anyone who can follow and fill in the gaps for themselves,
here it is.  Oh, and btw, for anyone who is scarified by the
psychological-sounding terms, don't worry:  these could all be cashed
out in mechanism-specific detail if I could be bothered  --  it is just
that for a cognitive AI person like myself, it is such a PITB to have to
avoid such language just for the sake of political correctness.

You can build such a motivational system by controlling the system's
agenda via diffuse connections into the thinking component that controls
what it wants to do.

This set of diffuse connections will govern the ways that the system
gets 'pleasure' --  and what this means is, the thinking mechanism is
driven by dynamic relaxation, and the 'direction' of that relaxation
pressure is what defines the things that the system considers
'pleasurable'.  There would likely be several sources of pleasure, not
just one, but the overall idea is that the system always tries to
maximize this pleasure, but the only way it can do this is to engage in
activities or thoughts that stimulate the diffuse channels that go back
from the thinking component to the motivational system.

[Here is a crude analogy:  the thinking part of the system is like a
table containing a complicated model landscape, on which a ball bearing
is rolling around (the attentional focus).  The motivational system
controls this situation, not by micromanaging the movements of the ball
bearing, but by tilting the table in one direction or another.  Need to
pee right now?  That's because the table is tilted in the direction of
thoughts about water, and urinary relief.  You are being flooded with
images of the pleasure you would get if you went for a visit, and also
the thoughts and actions that normally give you pleasure are being
disrupted and associated with unpleasant thoughts of future increased
bladder-agony.  You get the idea.]

The diffuse channels are set up in such a way that they grow from seed

Re: Re: [singularity] Motivational Systems that are stable

2006-10-27 Thread Ben Goertzel

Hi Richard,

I have left that email sitting in my Inbox, and skimmed it over, but
did not find time to read it carefully and respond to it yet.  I only
budget myself a certain amount of time per day for recreational
emailing (and have been exceeding that limit this week, already ;-)
  I hope to find time to read/respond this weekend.

Ben G

On 10/27/06, Richard Loosemore <[EMAIL PROTECTED]> wrote:


Curious.

A couple of days ago, I responded to demands that I produce arguments to
justify the conclusion that there were ways to build a friendly AI that
was extremely stable and trustworthy, but without having to give a
mathematical proof of its friendliness.

Now, granted, the text was complex, technical, and not necessarily
worded as best it could be.  But the background to this is that I am
writing a long work on the foundations of cognitive science, and the
ideas in that post were a condensed version of material that is spread
out over several dense chapters in that book ... but even though that
longer version is not ready, I finally gave in to the repeated (and
sometimes shrill and abusive) demands that I produce at least some kind
of summary of what is in those chapters.

But after all that complaining, I gave the first outline of an actual
technique for guaranteeing Friendliness (not vague promises that a
rigorous mathematical proof is urgently needed, and "I promise I am
working on it", but an actual method that can be developed into a
complete solution), and the response was  nothing.

I presume this means everyone agrees with it, so this is a milestone of
mutual accord in a hitherto divided community.

Progress!



Richard Loosemore.

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]



-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: [singularity] Defining the Singularity

2006-10-26 Thread Ben Goertzel

HI,

About hybrid/integrative architecturs, Michael Wilson said:

I'd agree that it looks good when you first start attacking the problem.
Classic ANNs have some demonstrated competencies, classic symbolic
AI has some different demonstrated competencies, as do humans and
existing non-AI software. I was all for hybridising various forms of
connectionism, fuzzy symbolic logic, genetic algorithms and more at one
point. It was only later that I began to realise that most if not all of
those mechanisms were neither optimal, adequate, nor even all that useful.


My own experience was along similar lines.

The Webmind AI Engine that I worked on in the late 90's was a "hybrid
architecture," that incorporated learning/reasoning/etc. agents based
on a variety of existing AI methods, moderately lightly customized.

On the other hand, the various cognitive mechanisms in Novamente
mostly had their roots in "standard" AI techniques, but have been
modified, customized and re-thought so far that they are really
fundamentally different things by now.

So I did find that even when a standard narrow-AI technique sounds on
the surface like it should be good at playing some role within an AGI
architecture, in practice it generally doesn't work out that way.
Often there is **something vaguely like** that narrow-AI technique
that makes sense in an AGI architecture, but the path from the
narrow-AI method to the AGI-friendly relative can require years of
theoretical and experimental effort.

An example is the path from evolutionary learning to "probabilistic
evolutionary learning" of the type we've designed for Novamente (which
is hinted at in Moshe Looks' thesis work at www.metacog.org; but even
that stuff is only halfway there to the kind of prob. ev. learning
needed for Novamente AGI purposes; it hits some of the key points but
leaves some important things out too.  But a key point is that by
using probabilistic methods effectively it opens the door for deep
integration of evolutionary learning and probabilistic reasoning,
which is not really possible with standard evolutionary techniques...)
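
For readers unfamiliar with the term, here is a minimal sketch of 
"probabilistic evolutionary learning" in the generic 
estimation-of-distribution sense -- a toy univariate version over 
bitstrings, emphatically not Moshe's algorithm or anything from 
Novamente.  Instead of mutating and recombining individuals, each 
generation fits a probability model to the best candidates and samples 
the next generation from that model; having an explicit model is what 
creates the opening for deeper integration with probabilistic reasoning:

    # Toy estimation-of-distribution algorithm (univariate, bitstrings).
    # Illustrative only; real systems use far richer probabilistic models.

    import random

    def eda_onemax(n_bits=20, pop_size=100, elite_frac=0.3,
                   generations=30, seed=0):
        rng = random.Random(seed)
        probs = [0.5] * n_bits                      # model: P(bit_i = 1)
        for _ in range(generations):
            pop = [[1 if rng.random() < p else 0 for p in probs]
                   for _ in range(pop_size)]
            pop.sort(key=sum, reverse=True)         # fitness = number of ones
            elite = pop[: int(elite_frac * pop_size)]
            # Re-fit the model to the elite, with mild smoothing so no
            # bit probability collapses to exactly 0 or 1 too early.
            probs = [(sum(ind[i] for ind in elite) + 1) / (len(elite) + 2)
                     for i in range(n_bits)]
        best = max(pop, key=sum)
        return best, sum(best)

    print(eda_onemax())   # converges to (near-)all-ones strings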

-- Ben G

-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


Re: Re: Re: Re: [singularity] Kurzweil vs. de Garis - You Vote!

2006-10-24 Thread Ben Goertzel

 Right - for the record when I use words like "loony" in this sort of
context I'm not commenting on how someone might come across face to face
(never having met him), nor on what a psychiatrist's report would read (not
being a psychiatrist) - I'm using the word in exactly the same way that I
would call someone loony for believing the Book of Revelations will
literally come true at the end of the Mayan calendar, that they've been
called to make a spiritual rendezvous with a flying saucer following a
comet, etc.


Ah, OK.  In that sense, I believe at least 90% of the world's
population is loony, because they believe in God -- which is a far
more fanciful and less probable notion than De Garis's "artilect war"
;-O

In fact, the US is a nation ruled by a combination of loonies and
loony-impersonators, since there are no (or nearly no) nationally
elected officials who are admitted atheists...


> Do you think that De Garis's scenario of a massive violent conflict
> between pro and anti Singularity forces is not plausible?
>

 When was the last time you saw ten geeks marching in formation, let alone
ten million? Seriously, there's a better chance of massive violent conflict
between likers of chocolate versus strawberry ice cream.


**Seriously**, I definitely don't agree with your last statement ;-)

And, I don't think you're trying very hard to understand how such a
war could viably come about.

Suppose that some clever scientist figures out how to construct
molecular assemblers with the capability to enable the construction of
massively powerful weapons  ... as well as all sorts of other nice
technologies...

Suppose Country A decides to ban this nanotech because of the dangers;
but Country B chooses not to ban it, because of the benefits...

Now, suppose A decides the presence of this nanotech in B poses a
danger to A ...

So, A decides to bomb B's molecular-assembler-construction facilities...

But, unknown to A, perhaps B has already engineered some nasty pathogens...

Etc.

I'm not talking about a situation of Geeks versus Ludds carrying out
hand-to-hand combat in the streets... and neither is De Garis,
really...

Looking at the political situation in the world today, regarding
weapons of mass destruction and nuclear proliferation and so forth, I
don't find this kind of scenario all that farfetched --- if one
assumes a soft takeoff...

It is definitely not in the category of the chocolate versus
strawberry ice cream wars [or, as in Dr. Seuss's Butter Battle Book,
the war between the butter-side-up and butter-side-down
bread-butterers ... ]

-- Ben G



Re: Re: Re: [singularity] Kurzweil vs. de Garis - You Vote!

2006-10-24 Thread Ben Goertzel

On 10/24/06, Russell Wallace <[EMAIL PROTECTED]> wrote:

On 10/24/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> I know Hugo de Garis pretty well personally, and I can tell you that
> he is certainly not "loony" on a personal level, as a human being.
> He's a bit eccentric, but he actually has a very solid understanding
> of the everyday world as well as of many branches of science  His
> "reality discrimination" faculty is excellent, which discriminates him
> from the loonies of the world...  He is however a bit of a showman --
> it can be striking how his persona changes when he shifts from private
> conversation to speaking on-camera or in front of a crowd...

 This is one occasion on which I must agree with Eliezer that you're just a
bit too charitable :P


Well, I don't want to have a public debate about a friend's psyche ;-) ...

Anyway, "loony" is not a very precisely defined term, in general.  It
can refer either to social behaviors or to underlying cognitions, for
example -- and there are many different cognitive patterns that can
lead to apparently "loony" social behaviors  To be honest, I
thought Hugo was a bit loony based on some of his public
presentations, before I got to know him well in person and understood
his patterns of thinking

This is not to say, however, that I fully agree with either his
specific futurist predictions or his judgment of the viability of
various paths to AGI.


> -- Kurzweil: ultratech becomes part of all of our lives, so much that
> we take it for granted, and the transition from the human to posthuman
> era is seamless

 That scenario I think is plausible, though I don't share Kurzweil's
optimism regarding either its inevitability or imminence.


Do you think that De Garis's scenario of a massive violent conflict
between pro and anti Singularity forces is not plausible?

Ben



Re: Re: [singularity] Kurzweil vs. de Garis - You Vote!

2006-10-24 Thread Ben Goertzel

About...

On 10/24/06, Hank Conn <[EMAIL PROTECTED]> wrote:

About de Garis... I feel the same as others on this list have
expressed... in that he is definitively loony.

I also have very strong doubts about Kurzweil's model of how the Singularity
is going to unfold.


I know Hugo de Garis pretty well personally, and I can tell you that
he is certainly not "loony" on a personal level, as a human being.
He's a bit eccentric, but he actually has a very solid understanding
of the everyday world as well as of many branches of science  His
"reality discrimination" faculty is excellent, which discriminates him
from the loonies of the world...  He is however a bit of a showman --
it can be striking how his persona changes when he shifts from private
conversation to speaking on-camera or in front of a crowd...

Regarding his prognostications in the Artilect War book, I don't agree
with the confidence with which he puts them forth; but, I also think
Ray Kurzweil is sometimes highly overconfident regarding his
predictions.

Both de Garis and Kurzweil share an intuition that the Singularity
will arise via a "soft takeoff" scenario.  They each argue for a
different possible consequence of the soft takeoff:

-- Kurzweil: ultratech becomes part of all of our lives, so much that
we take it for granted, and the transition from the human to posthuman
era is seamless

-- De Garis: ultratech polarizes society, with some humans embracing
it and others rejecting it, and a massive war ensues

So far as I can tell, conditional on the hypothesis of a soft takeoff,
both possibilities are plausible, and I don't know how to estimate the
odds of either one.

Further, both De Garis and Kurzweil argue that AGI is likely to be
achieved, first, through human brain emulation.  [De Garis is actively
working to help with this, by creating firmware platforms for
large-scale neural net emulation; whereas Kurzweil is not actively
involved in research in this area right now so far as I know.]

Equally interesting is the debate between soft and hard takeoff, but this
is not brought up by the De Garis vs. Kurzweil contrast, as they both
agree on this point.

-- Ben G






Here's what Eliezer had to say 4 years ago about Kurzweil... (I imagine this
is horribly obsolete in many ways, like everything else. Especially given
that Kurzweil donated something like 15 grand to SIAI a while back).
http://www.sl4.org/archive/0206/4015.html

I also think if you are expecting the Singularity in 2029 or after, you
might be in for quite an early surprise.

Ugh.. the poll on the website says "Whose vision do you believe: Kurzweil or
de Garis?" ... lol


Interesting news though.


On 10/24/06, Ben Goertzel <[EMAIL PROTECTED]> wrote:
>
>
http://www.bbc.co.uk/sn/tvradio/programmes/horizon/broadband/tx/singularity/
>
> Tuesday 24 October 2006, 9pm on BBC Two
>
> "Meet the scientific prophets who claim we are on the verge of
> creating a new type of human - a human v2.0.
>
> "It's predicted that by 2029 computer intelligence will equal the
> power of the human brain. Some believe this will revolutionise
> humanity - we will be able to download our minds to computers
> extending our lives indefinitely. Others fear this will lead to
> oblivion by giving rise to destructive ultra intelligent machines.
>
> "One thing they all agree on is that the coming of this moment - and
> whatever it brings - is inevitable."
>

 




[singularity] Kurzweil vs. de Garis - You Vote!

2006-10-24 Thread Ben Goertzel

http://www.bbc.co.uk/sn/tvradio/programmes/horizon/broadband/tx/singularity/

Tuesday 24 October 2006, 9pm on BBC Two

"Meet the scientific prophets who claim we are on the verge of
creating a new type of human - a human v2.0.

"It's predicted that by 2029 computer intelligence will equal the
power of the human brain. Some believe this will revolutionise
humanity - we will be able to download our minds to computers
extending our lives indefinitely. Others fear this will lead to
oblivion by giving rise to destructive ultra intelligent machines.

"One thing they all agree on is that the coming of this moment - and
whatever it brings - is inevitable."



Re: Re: [singularity] Defining the Singularity

2006-10-24 Thread Ben Goertzel

Loosemore wrote:

> The motivational system of some types of AI (the types you would
> classify as tainted by complexity) can be made so reliable that the
> likelihood of them becoming unfriendly would be similar to the
> likelihood of the molecules of an Ideal Gas suddenly deciding to split
> into two groups and head for opposite ends of their container.


Wow!  This is a very strong hypothesis...  I really doubt this
kind of certainty is possible for any AI with radically increasing
intelligence ... let alone a complex-system-type AI with highly
indeterminate internals...

I don't expect you to have a proof for this assertion, but do you have
an argument at all?

ben



Re: [singularity] Defining the Singularity

2006-10-23 Thread Ben Goertzel
Though I have remained often-publicly opposed to emergence and 'fuzzy' design since first realising what the true consequences (of the heavily enhanced-GA-based system I was working on at the time) were, as far as I know I haven't made that particular mistake again.

Whereas, my view is that it is precisely the effective combination of probabilistic logic with complex systems science (including the notion of emergence) that will lead to, finally, a coherent and useful theoretical framework for designing and analyzing AGI systems...

I am also interested in creating a fundamental theoretical framework for AGI, but am pursuing this on the backburner in parallel with practical work on Novamente (even tho I personally find theoretical work more fun...).  I find that in working on the theoretical framework it is very helpful to proceed in the context of a well-fleshed-out practical design...
-- Ben G



Re: [singularity] Defining the Singularity

2006-10-23 Thread Ben Goertzel
Hi,

> Ditto with just about anything else that's at all innovative -- e.g. was
> Einstein's General Relativity a fundamental new breakthrough, or just a
> tweak on prior insights by Riemann and Hilbert?

I wonder if this is a sublime form of irony for a horribly naïve and arrogant analogy to GR I drew on SL4 some years back :)

Yes, I do remember that entertaining comment of yours, way back when... ;)  ... I assume you have backpedaled on that particular assertion by now, though...

ben



Re: [singularity] AGI funding: US versus China

2006-10-23 Thread Ben Goertzel
Indeed, there are obvious political problems related to doing AGI work in China, and also obvious (but not foolproof) ways to manage them...  But I suppose this is not the most apropos discussion topic for the list...

Ben

On 10/23/06, Josh Treadwell <[EMAIL PROTECTED]> wrote:



  
  


This is a big problem.  If China were a free nation, I wouldn't have any
qualms with it, but the first thing China will do with AGI is
marginalize human rights.  Any nation that censors its internet
(violators are sent to prisoner/slave camps) and sells organs of
unwilling executed prisoners (more are executed each year in China than
in the rest of the world combined) is not a place I'd like AGI to be
developed.  I hope Hugo doesn't regret his decision.  You also have to
watch out for copyright violation.  They're not going to care if your
project gets stolen, and if you announce that you're close to finishing
your project, I'd have guards posted in the server room.  Things could
get scary really quickly.





Josh Treadwell


Ben Goertzel wrote:

Hi,
  
As a contrast to this discussion on why AGI is hard to fund in the US,
I note that Hugo de Garis has recently relocated to China, where he was
given a professorship and immediately given the "use" of basically as
many expert programmers/researchers as he can handle.
  
  
Furthermore, I have strong reason to believe I could secure a similar
position with very little effort...
  
So, if I just decided to relocate the Novamente project to China, all
of a sudden I could have a couple dozen AI scientists fully funded to
work on the project.  Very simple: no more trying to convince investors
or government funding agencies, no more need to do narrow-AI consulting
to make $$ to feed AGI programmers, etc.
  
  
I point this out to indicate that the difficulty of funding AGI
development is NOT some kind of inevitability related to the perceived
speculative nature of the work -- it is a consequence of the way our
own society and economy is structured, and the specific cultural
history of the US and Europe.
  
  
To be honest, I have been mulling over this China possibility a fair
bit lately, but am  held back from taking the leap and relocating to
China by a couple factors:
  
1) my primary narrow-AI project, in the financial domain, appears at
the moment to have nontrivial odds of making me wealthy enough to fund
Novamente development myself, within a couple years
  
  
2) I share custody of my kids 50-50 with my ex-wife who lives in
Maryland, and doing shared custody from China would be trickier...
  
Then of course there are various IP and AGI safety issues related to
the Chinese government, but I'd rather not go into those at the moment
;-)
  
  
But it is quite interesting to reflect that, simply by relocating
physically to a different part of the planet and taking a job at a
university there, these funding issues would effectively VANISH all at
once, as they have for Hugo de Garis.  All of a sudden, within say 6
months of my relocation, Novamente could start progressing toward
powerful AGI at five times its current speed.
  
  
Because, Chinese society right now is willing to take risks on AGI
development that is perceived as speculative, whereas with rare
exceptions, US and European society are not.
  
-- Ben G
  
  
  
  


[singularity] AGI funding: US versus China

2006-10-23 Thread Ben Goertzel
Hi,
As a contrast to this discussion on why AGI is hard to fund in the US, I note that Hugo de Garis has recently relocated to China, where he was given a professorship and immediately given the "use" of basically as many expert programmers/researchers as he can handle.

Furthermore, I have strong reason to believe I could secure a similar position with very little effort...

So, if I just decided to relocate the Novamente project to China, all of a sudden I could have a couple dozen AI scientists fully funded to work on the project.  Very simple: no more trying to convince investors or government funding agencies, no more need to do narrow-AI consulting to make $$ to feed AGI programmers, etc.

I point this out to indicate that the difficulty of funding AGI development is NOT some kind of inevitability related to the perceived speculative nature of the work -- it is a consequence of the way our own society and economy is structured, and the specific cultural history of the US and Europe.

To be honest, I have been mulling over this China possibility a fair bit lately, but am held back from taking the leap and relocating to China by a couple of factors:

1) my primary narrow-AI project, in the financial domain, appears at the moment to have nontrivial odds of making me wealthy enough to fund Novamente development myself, within a couple years

2) I share custody of my kids 50-50 with my ex-wife who lives in Maryland, and doing shared custody from China would be trickier...

Then of course there are various IP and AGI safety issues related to the Chinese government, but I'd rather not go into those at the moment ;-)

But it is quite interesting to reflect that, simply by relocating physically to a different part of the planet and taking a job at a university there, these funding issues would effectively VANISH all at once, as they have for Hugo de Garis.  All of a sudden, within say 6 months of my relocation, Novamente could start progressing toward powerful AGI at five times its current speed.

Because Chinese society right now is willing to take risks on AGI development that is perceived as speculative, whereas, with rare exceptions, US and European society are not.

-- Ben G




Re: [singularity] Defining the Singularity

2006-10-23 Thread Ben Goertzel
Michael,

I think your summary of the situation is in many respects accurate; but an interesting aspect you don't mention has to do with the disclosure of technical details...

In the case of Novamente, we have sufficient academic credibility and know-how that we could easily publish a raft of journal papers on the details of Novamente's design and preliminary experimentation.  With this behind us, it would not be hard for us to get a moderate-sized team of somewhat-prestigious academic AI researchers on board ... and then we could almost surely raise funding from conventional government research funding sources.  This process would take a number of years but is a well-understood process and would be very likely to succeed.
The main problem then boils down to the Friendliness issue.  Do we really want to put a set of detailed scientific ideas, some validated via software work already, that we believe are capable of dramatically accelerating progress toward AGI, into the public domain?  Perhaps it is rational to do so, on the grounds that we will be able to progress more rapidly toward AGI than anyone else with the funding that this disclosure will bring, even if others have exposure to our basic concepts.  But I have not reached the point of deciding so, yet
As for your distinction of a "fundamental innovation" versus a "combination of prior ideas," I find that is largely a matter of marketing.  I could easily spin Novamente as a fundamental, radical innovative design OR as an integrative combination of prior ideas.  Ditto with Eliezer's ideas.  Ditto with just about anything else that's at all innovative -- 
e.g. was Einstein's General Relativity a fundamental new breakthrough, or just a tweak on prior insights by Riemann and Hilbert?  Was Special Relativity a radical breakthrough, or just a tweak on Lorentz and Poincare'?  I don't really find assessments of "perceived radicalness" nearly as interesting as assessments of perceived feasibility ;-)
Finally: although progress on the Novamente project right now is slower than I would like, we do have 2 full-time experienced AI engineers on the AGI project, plus one new full-time addition and the part-time efforts of several PhD AI scientists.  So, we are slowly but surely moving toward a Novamente version with sufficiently impressive functionality to be more effective at attracting funding via what it can do, rather than what we argue it will be able to do  We are going to get there ... it's just a drag to be getting there so much more slowly than necessary due to sociological issues related to funding.
-- Ben

On 10/23/06, Starglider <[EMAIL PROTECTED]> wrote:

On 22 Oct 2006 at 17:22, Samantha Atkins wrote:
> It is a lot easier I imagine to find many people willing and able to
> donate on the order of $100/month indefinitely to such a cause than to
> find one or a few people to put up the entire amount.  I am sure that has
> already been kicked around.  Why wouldn't it work though?

There have been many, many well funded AGI projects in the past, public
and private. Most of them didn't produce anything useful at all. A few
managed some narrow AI spinoffs. Most of the directors of those projects
were just as confident about success as Ben and Peter are. All of them
were wrong. No-one on this list has produced any evidence (publically) that
they can succeed where all previous attempts failed other than cute
powerpoint slides - which all the previous projects had too. All you can do
is judge architecture by the vague descriptions given, and the history of AI
strongly suggests that even when full details are available, even so-called
experts completely suck at judging what will work and what won't. The
chances of arbitrary donors correctly ascertaining what approaches will
work are effectively zero. The usual strategy is to judge by hot buzzword
count and apparent project credibility (number of PhDs, papers published
by leader, how cool the website and offices are, number of glowing writeups
in specialist press; remember Thinking Machines Corp?). Needless to say,
this doesn't have a good track record either.

As far as I can see, there are only two good reasons to throw funding at a
specific AGI project you're not actually involved in (ignoring the critical
FAI problem for a moment); hard evidence that the software in question can
produce intelligent behaviour significantly in advance of the state of the
art, or a genuinely novel attack on the problem - not just a new mix of AI
concepts in the architecture, /everyone/ vaguely credible has that, a
genuinely new methodology. Both of those have an expiry date after a few
years with no further progress. I'd say the SIAI had a genuinely new
methodology with the whole provable-FAI idea and to a lesser extent some
of the nonpublished Bayesian AGI stuff that immediately followed LOGI,
but I admit that they may well be past the 'no useful further results'
expiry date for continued support from strangers.

Setting up a structure that can handle the funding is a secondary issue.
It's nontrivial, but it'

Re: [singularity] Defining the Singularity

2006-10-23 Thread Ben Goertzel
I think Mark's observation is correct.  Anti-aging is far easier to fund than AGI because there are a lot more people interested in preserving their own lives than in creating AGI  Furthermore, the M-prize money is to fund a **prize**, not directly to fund research on some particular project  M-prize money is surely worthwhile, but is a different sort of thing...
Ben

On 10/22/06, Mark Nuzzolilo II <[EMAIL PROTECTED]> wrote:

> Well, there is funding like in the Methuselah Mouse project.  I am one of
> "the 300" myself.  With enough interested people it should not be that
> hard to raise $5 million even on a very long term project.  Most of us seem
> to think that conquering aging will take longer than AGI but there are
> fairly successful funding efforts in that space.  It is a lot easier I
> imagine to find many people willing and able to donate on the order of
> $100/month indefinitely to such a cause than to find one or a few people
> to put up the entire amount.  I am sure that has already been kicked
> around.  Why wouldn't it work though?

You can't just snap your fingers and raise $5 million for a cause with even
less public support than anti-aging research, whether you have 1 person with
$5 million dollars, or 4,167 people with $1200 a year.  I fail to see how
the problem would be simplified in this way.  I doubt any AGI company could,
at this point, find thousands of people willing to give even $10/month, let
alone $100.  But that doesn't mean that it won't be possible in a few years.
AGI could, at any time, receive the funding and publicity that
nanotechnology has seen especially since the late 1990s.




Re: [singularity] Defining the Singularity

2006-10-22 Thread Ben Goertzel
Hi,

I know you must be frustrated with fund raising, but investor reluctance is understandable from the perspective that for decades now there has always been someone who said we're N years from full-blown AI, and then N years passed with nothing but narrow AI progress.  Of course, someone will end up being right at some point.

Sure ... and most of the time, the narrow AI progress achieved via AI-directed funding has not even been significant, or useful...

However, it seems to me that the degree of skepticism about AGI goes beyond what is rational.  I attribute this to an unconscious reluctance on the part of most humans to conceive that **we**, the mighty and glorious human rulers of the Earth, could really be superseded by mere software programs created by mere mortal humans.  Even humans who are willing to accept this theoretically don't want to accept it pragmatically, as something that may occur in the near term.

After all, there seems to be a lot more cash around for nanotech than for AGI, and that is quite unproven technology also -- and technology that is a hell of a lot riskier and more expensive to develop than AGI software.  It is not the case that investors are across the board equally skeptical of all unproven technologies -- AI seems to be viewed with an extra, and undeserved, degree of skepticism.

For the record, at the same event, Peter Voss of Adaptive AI (http://www.adaptiveai.com/) stated his company would have AGI in 2 years.  I *think* he qualified it as being at the level of a 10 year old child.  Help me out on that, if you remember.

I could help you out, but I won't, because I believe Peter asked those of us at that meeting **not** to publicly discuss the details of his presentation there (although, frankly, the details were pretty scanty).  If he wants to chip in some more info himself, he is welcome to...
Peter has been more successful than Novamente has at fundraising, during the last couple years.  I take my hat off to him for his marketing prowess.  I also note that he is a lot more experienced than me on the business marketing side ... Novamente LLC is chock full of brilliant techie futurists, but we are not sufficiently staffed in terms of marketing and sales wizardry.
I have my disagreements with Peter's approach to AGI, inasmuch as I understand it (I know the general gist of his architecture but not the nitty-gritty details).  However, I don't want to get into that in detail on this list, for fear of disclosing aspects of Peter's work that he may not want disclosed.  My basic issue is that I do not, based on what I know of it, see why his architecture will be capable of representing and learning complex knowledge.  I am afraid his knowledge representation and learning mechanisms may be overfitted, to an extent, to early-stage "infantile" type learning tasks.  Novamente is more complex than his system, and thus getting it to master infantile learning may be a little trickier (this is one thing we're working on now ... and of course I can't make any confident comparisons, because I have never worked with Peter's system and what I do know about it is quite out-of-date).  But Novamente is designed from the start to be able to deal with complex reasoning such as mathematics and science, so once the infantile stage is surpassed, I expect progress to be EXTREMELY rapid.
Having summarized very briefly some of my technical concerns about Peter's approach, I must add that I respect his general thinking about AI very much, and admire his enthusiasm and focus at pursuing the AGI goal.  I hope his approach **does** succeed, as I think he would be a responsible and competent "AGI daddy" -- however, based on what I know, I do think that Novamente has far higher odds of success...
-- Ben



Re: [singularity] Defining the Singularity

2006-10-22 Thread Ben Goertzel
Japan, despite a lot of interest back in 5th Generation computer days, seems to have a difficult time innovating in advanced software.  I am not sure why.

I talked recently, at an academic conference, with the guy who directs robotics research labs within ATR, the primary Japanese government research lab.  He said that at the moment the "powers that be" there are not interested in funding cognitive robotics.
So how do we get you and your team the necessary funding ASAP to complete your work?  I don't know the legal issues involved but a bunch of very interested fans of Singularity could quite possibly put together the $5 million or so I think you last said you needed pretty quickly.   This was brought up quite some time ago, by me at least, and at the time I think I recall you saying that the right structure wasn't in place to accept such funding.  What is that structure and what is in the way of setting it up?
Well, $5M would be great and is a fair estimate of what I think it would take to create Singularity based on further developing the current Novamente technology and design.
However, it is quite likely sensible to take an incremental approach.  For instance, if we were able to raise $500K right now, then during the course of a year we could develop rather impressive demonstrations of Novamente proto-AGI technology, which would make raising the rest of the money easier.
The structure is indeed in place to accept such funding: Novamente LLC, which is a Delaware corporation that owns the IP of the Novamente AI Engine, and is currently operating largely as an AI consulting company (with a handful of staff in Brazil, as well as me here in Maryland and Bruce Klein in San Francisco and Ari Heljakka in Finland).  However, Novamente LLC is currently paying 
2.5 programmers to work full-time toward AGI (not counting the portion of my time that is thus expended).  But alas, this is not enough to get us there very fast...

If for some reason a major funding source preferred to fund an AGI project in a nonprofit context, we also have AGIRI, a Delaware nonprofit corporation.  I am not committed to doing the Novamente AI Engine in a for-profit context, although that currently seems to me to be the most rational choice.  My current feeling is that I would only be willing to take it nonprofit in the context of a very significant donation (say $3M+, not just $500K), because of a fear that follow-up significant nonprofit donations might be difficult to come by, but this attitude may be subject to change.
Bruce Klein has been leading a fundraising effort for nearly a year now with relatively little success.  To be honest, we are at the point of putting "raising funds explicitly for building AGI" on the backburner now, and focusing on "raising funds for commercial projects that will pay for the development of various components of the AGI, and if they succeed big-time will make us rich enough to pay for development of the AGI in a more direct and focused way."  Which is rather frustrating, because if we had a decent amount of funding we could progress much more rapidly and directly toward the end goal of an ethically positive AGI system created based on the Novamente architecture.
The main issue that potential investors/donors seem to have may be summarized in the phrase "perceived technology risk."  In other words: We have not been able to convince anyone with a lot of money that there is a reasonable chance we can actually succeed in creating an AGI in less than a couple decades.  Potential investors/donors see that we are a team of very smart people with some very sophisticated and complex ideas about AGI, and a strong knowledge of the AI, computer and cognitive science fields -- but they cannot understand the details of the Novamente system (which is not surprising since new Novamente team members take at least 6 months to really "get it"), and thus cannot make any real assessment of our odds of success, so they just assume our odds of success are low.
As an example, in a conversation over dinner with a wealthy individual and potential investor in LA two weeks ago, I was asked:

Him: "But still, I can't understand why you haven't found investment money
yet.  I mean, it should be obvious to potential investors that, if you
succeed, the potential rewards are incredible."
Me: "Yes, that's obvious to everyone."

"So the problem is that no one believes you can really do it."

"Yes.  Their estimates of our odds of success are apparently very low.""Well, how can I know if you yourself really believe that you can create an AGI in a feasible amount of time.   You claim you can create a human-level AI in  four years... but how can I believe you?  How do I know you're not just making that up in order to get research money to play with?"
My reply was: "Well look, there are two aspects.  There's engineering time, and then teaching time.  Engineering time is easier to estimate.  I'm quite confident that if I could just re-orient the N

Re: [singularity] Defining the Singularity

2006-10-22 Thread Ben Goertzel
Hi,

Mike Deering wrote:
If you really were interested in working on the 
Singularity you would be designing your education plan around getting a job at 
the NSA.  The NSA has the budget, the technology, the skill set, and the 
motivation to build the Singularity.  Everyone else, universities, private 
companies, other governments, are lacking in some aspect compared to the 
NSA.  A close second is Japan.  They built robots that just lack a 
brain to be truly useful.  They build super computers.  They don't 
want to be number two in this race and they know it's a race, and they know who 
they are racing against.

Well, if I am right, then I have one thing that the NSA and the Japanese government appear not to have: a workable design for an AGI ;-)

I agree that both the US and Japanese governments are well poised to come up with effective AGI designs eventually ... but I don't agree that someone else (e.g. my own team) may not get there first.

Historically, to be sure, it has not always been those with the most resources who came up with the right innovation at the right time.  To cite one among very many examples, it was Tesla and not Edison who came up with AC power...  Edison had the cash and the team of researchers, and for that matter he had an excellent and relevant track record ... but Tesla, in this instance, had the right insight...
-- Ben G



Re: [singularity] Minds beyond the Singularity: literally self-less ?

2006-10-11 Thread Ben Goertzel

Hi,


In regard to your "finally" paragraph, I would speculate that advanced
intelligence would tend to converge on a structure of increasing
stability feeding on increasing diversity.  As the intelligence evolved,
a form of natural selection would guide its structural development, not
toward increasingly desirable ends, but toward increasingly effective
methods.


Yes, this is interesting, but I wonder what is the root of your
intuition about "increasing stability."

I don't see why increasing stability would necessarily be a feature of
an advanced, rapidly-changing intelligence...

thx
ben



Re: [singularity] Minds beyond the Singularity: literally self-less ?

2006-10-11 Thread Ben Goertzel

Hi,

Well, your point is a good one, and a different one.

The specific qualities of an AGI's self will doubtless be very
different from that of a human being's.  This will depend not only on
its emotional makeup but also on the nature of its embodiment, for
example.  Much of the nature of the human self is tied to the localized
nature of our physical embodiment.  An AGI with a distributed
embodiment, with sensors and actuators all around the world or beyond,
would have a very different kind of self-model than any human  And
a human hooked into the Net with VR technology and able to sense and
act remotely via sensors and actuators all over the world, might also
develop a different flavor of self not so closely tied to localized
physical embodiment.

But all that is a different sort of point  My point was that an
AGI that was very rapidly undergoing a series of profound changes
might never develop a stable self-model at all, because as soon as the
model came about, it would be rendered irrelevant.

Imagine going through the amount of change in the human life course
(infant --> child --> teen --> young adult --> middle aged adult -->
old person) within, say, a couple days.  Your self model wouldn't
really have time to catch up.  You'd have no time to be a stable
"you."  Even if there were (as intended e.g. in Friendly AI designs) a
stable core of supergoals throughout all the changes

-- Ben G


On 10/11/06, Chris Norwood <[EMAIL PROTECTED]> wrote:

How much of our "selves" are driven by biological
processes that an AI would not have to begin with, for
example...fear? I would think that the AI's self would
be fundamentaly different to begin with due to this.
It may never have to modify itself to achieve the new
type of self that you are describing.

--- Ben Goertzel <[EMAIL PROTECTED]> wrote:

> In something I was writing today, for a
> semi-academic publication, I
> found myself inserting a paragraph about how
> unlikely it is that
> superhuman AI's after the Singularity will possess
> "selves" in
> anything like the sense that we humans do.
>
> It's a bit long and out of context, but the passage
> in which this
> paragraph occurred may be of some interest to some
> folks here  The
> last paragraph cited here is the one that mentions
> future AI's...
>
> -- Ben
>
> **
>
>
> "
> The "self" in the present context refers to the
> "phenomenal self"
> (Metzinger, XX) or "self-model" (Epstein, XX).  That
> is, the self is
> the model that a system builds internally,
> reflecting the patterns
> observed in the (external and internal) world that
> directly pertain to
> the system itself.  As is well known in everyday
> human life,
> self-models need not be completely accurate to be
> useful; and in the
> presence of certain psychological factors, a more
> accurate self-model
> may not necessarily be advantageous.  But a
> self-model that is too
> badly inaccurate will lead to a badly-functioning
> system that is
> unable to effectively act toward the achievement of
> its own goals.
>
> "
> The value of a self-model for any intelligent system
> carrying out
> embodied agentive cognition is obvious.  And beyond
> this, another
> primary use of the self is as a foundation for
> metaphors and analogies
> in various domains.  Patterns recognized pertaining
> the self are
> analogically extended to other entities.  In some
> cases this leads to
> conceptual pathologies, such as the
> anthropomorphization of trees,
> rocks and other such objects that one sees in some
> precivilized
> cultures.  But in other cases this kind of analogy
> leads to robust
> sorts of reasoning – for instance, in reading Lakoff
> and Nunez's (XX)
> intriguing explorations of the cognitive foundations
> of mathematics,
> it is pretty easy to see that most of the metaphors
> on which they
> hypothesize mathematics to be based, are grounded in
> the mind's
> conceptualization of itself as a spatiotemporally
> embedded entity,
> which in turn is predicated on the mind's having a
> conceptualization
> of itself (a self) in the first place.
>
> "
> A self-model can in many cases form a
> self-fulfilling prophecy (to
> make an obvious double-entendre'!).   Actions are
> generated based on
> one's model of what sorts of actions one can and/or
> should take; and
> the results of these actions are then incorporated
> into one's
> self-model.  If a self-model proves a generally bad
> guide to action
> selection, this may never be discovered, unless said
> self-model
> includes the knowledge that semi-random
> experi

[singularity] Minds beyond the Singularity: literally self-less ?

2006-10-10 Thread Ben Goertzel

In something I was writing today, for a semi-academic publication, I
found myself inserting a paragraph about how unlikely it is that
superhuman AI's after the Singularity will possess "selves" in
anything like the sense that we humans do.

It's a bit long and out of context, but the passage in which this
paragraph occurred may be of some interest to some folks here  The
last paragraph cited here is the one that mentions future AI's...

-- Ben

**


"
The "self" in the present context refers to the "phenomenal self"
(Metzinger, XX) or "self-model" (Epstein, XX).  That is, the self is
the model that a system builds internally, reflecting the patterns
observed in the (external and internal) world that directly pertain to
the system itself.  As is well known in everyday human life,
self-models need not be completely accurate to be useful; and in the
presence of certain psychological factors, a more accurate self-model
may not necessarily be advantageous.  But a self-model that is too
badly inaccurate will lead to a badly-functioning system that is
unable to effectively act toward the achievement of its own goals.

"
The value of a self-model for any intelligent system carrying out
embodied agentive cognition is obvious.  And beyond this, another
primary use of the self is as a foundation for metaphors and analogies
in various domains.  Patterns recognized pertaining to the self are
analogically extended to other entities.  In some cases this leads to
conceptual pathologies, such as the anthropomorphization of trees,
rocks and other such objects that one sees in some precivilized
cultures.  But in other cases this kind of analogy leads to robust
sorts of reasoning – for instance, in reading Lakoff and Nunez's (XX)
intriguing explorations of the cognitive foundations of mathematics,
it is pretty easy to see that most of the metaphors on which they
hypothesize mathematics to be based, are grounded in the mind's
conceptualization of itself as a spatiotemporally embedded entity,
which in turn is predicated on the mind's having a conceptualization
of itself (a self) in the first place.

"
A self-model can in many cases form a self-fulfilling prophecy (to
make an obvious double-entendre'!).   Actions are generated based on
one's model of what sorts of actions one can and/or should take; and
the results of these actions are then incorporated into one's
self-model.  If a self-model proves a generally bad guide to action
selection, this may never be discovered, unless said self-model
includes the knowledge that semi-random experimentation is often
useful.

"
In what sense, then, may it be said that self is an attractor of
iterated forward-backward inference?  Backward inference infers the
self from observations of system behavior.  The system asks: What kind
of system might I be, in order to give rise to these behaviors that I
observe myself carrying out?   Based on asking itself this question,
it constructs a model of itself, i.e. it constructs a self.  Then,
this self guides the system's behavior: it builds new logical
relationships between its self-model and various other entities, in
order to guide its future actions oriented toward achieving its goals.
Based on the new behaviors induced via this constructive,
forward-inference activity, the system may then engage in backward
inference again and ask: What must I be now, in order to have carried
out these new actions?  And so on.

"
My hypothesis is that after repeated iterations of this sort, in
infancy, finally during early childhood a kind of self-reinforcing
attractor occurs, and we have a self-model that is resilient and
doesn't change dramatically when new instances of action- or
explanation-generation occur.   This is not strictly a mathematical
attractor, though, because over a long period of time the self may
well shift significantly.  But, for a mature self, many hundreds of
thousands or millions of forward-backward inference cycles may occur
before the self-model is dramatically modified.  For relatively long
periods of time, small changes within the context of the existing self
may suffice to allow the system to control itself intelligently.
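
[A toy numerical sketch of the loop just described -- purely
illustrative, invented for this note rather than drawn from the quoted
text or from any actual system.  "Behavior" is collapsed to a single
action probability; backward inference fits the self-model to the
actions actually observed, forward inference generates new actions
partly guided by that self-model, and under these made-up parameters
the self-model settles into a stable, attractor-like value:

import random

def self_model_loop(cycles=200, disposition=0.7, guidance=0.5, batch=50):
    self_model = 0.5                  # initial, uninformed self-model
    for _ in range(cycles):
        # forward inference: actions driven partly by the underlying
        # disposition, partly by the current self-model
        p_act = (1 - guidance) * disposition + guidance * self_model
        actions = [random.random() < p_act for _ in range(batch)]
        # backward inference: "what must I be, to have acted this way?"
        observed = sum(actions) / batch
        self_model = 0.9 * self_model + 0.1 * observed
    return self_model

print(self_model_loop())   # settles near the underlying disposition (~0.7)

Of course, a real self-model is a structured body of knowledge rather
than a single number; the sketch only shows why iterating forward and
backward inference can converge on something stable.]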

"
Finally, it is interesting to speculate regarding how self may differ
in future AI systems as opposed to in humans.  The relative stability
we see in human selves may not exist in AI systems that can
self-improve and change more fundamentally and rapidly than humans
can.  There may be a situation in which, as soon as a system has
understood itself decently, it radically modifies itself and hence
violates its existing self-model.  Thus: intelligence without a
long-term stable self.  In this case the "attractor-ish" nature of the
self holds only over much shorter time scales than for human minds or
human-like minds.  But the alternating process of forward and backward
inference for self-construction is still critical, even though no
reasonably stable self-constituting attractor ever emerges.  The
psychology of

Re: [singularity] Defining the Singularity

2006-10-10 Thread Ben Goertzel

On the other hand (to add a little levity to the conversation), a very
avid 2012-ite I knew last year informed me that

"You should just mix eight ounces of Robitussin with eight ounces of
vodka and drink it fast -- you'll find your own private Singularity,
right there!!"

;-pp


On 10/10/06, Lúcio de Souza Coelho <[EMAIL PROTECTED]> wrote:

On 10/10/06, BillK <[EMAIL PROTECTED]> wrote:
(...)
> If next year a quad-core pc becomes a self-improving AI in a basement
> in Atlanta, then disappears a hour later into another dimension, then
> so far as the rest of the world is concerned, the Singularity never
> happened.
(...)

Yep, I also tend to think of the Singularity as some convergence of
new technologies (or even natural events, like the evolution of a new
human species) that completely changes the way the world works. Yet I
have to concede that is a rather vague and subjective definition.
Also, in this definition there will not be *the* Singularity, there
will be a lot of them; arguably there were Singularities in the past -
one could think of the Industrial Revolution or the Age of Discovery
as past Singularities, for instance, no matter how antique and low
tech they look from our "enlightened" point of view.



Re: [singularity] Defining the Singularity

2006-10-10 Thread Ben Goertzel

Indeed...

What we are running into here is simply the poverty of compact formal
definitions.

AI researchers long ago figured out that it's difficult to create a
compact formal definition of "chair" or "arch" or table...

Ditto for "Singularity", not surprisingly...

This doesn't mean compact definitions aren't useful in some contexts,
just that they should not be interpreted to fully capture the concepts
to which they are attached...

-- Ben G

On 10/10/06, BillK <[EMAIL PROTECTED]> wrote:

On 10/10/06, Ben Goertzel wrote:

>
> But from the perspective of deeper understanding, I don't see why it's
> critical to agree on a single definition, or that there be a compact
> and crisp definition.  It's a complex world and these are complex
> phenomena we're talking about, as yet dimly understood.
>

I would add that 'The Singularity' has to be a world-affecting event.

If next year a quad-core pc becomes a self-improving AI in a basement
in Atlanta, then disappears a hour later into another dimension, then
so far as the rest of the world is concerned, the Singularity never
happened.

BillK



Re: [singularity] Defining the Singularity

2006-10-10 Thread Ben Goertzel

Hank,

On 10/10/06, Hank Conn <[EMAIL PROTECTED]> wrote:

The all-encompassing definition of the Singularity is the point at which an
intelligence gains the ability to recursively self-improve the underlying
computational processes of its intelligence.


I already have that ability -- I'm just very slow at exercising it ;-)

Seriously: From a **marketing** perspective, I think it may be
sensible to boil the Singularity down to simplified definitions

But from the perspective of deeper understanding, I don't see why it's
critical to agree on a single definition, or that there be a compact
and crisp definition.  It's a complex world and these are complex
phenomena we're talking about, as yet dimly understood.

-- Ben G



On 10/10/06, Michael Anissimov <[EMAIL PROTECTED]> wrote:
> The Singularity definitions being presented here are incredibly
> confusing and contradictory.  If I were a newcomer to the community
> and saw this thread, I'd say that this word "Singularity" is so poorly
> defined, it's useless.  Everyone is talking past each other.  As Nick
> Hay has pointed out, the Singularity was originally defined as
> smarter-than-human intelligence, and I think that this definition
> remains the most relevant, concise, and resistant to
> misinterpretation.
>
> It's not about technological progress.  It's not about experiencing an
> artificial universe by being plugged into a computer. It's not about
> human intelligence merging with computing technology.  It's not about
> things changing so fast that we can't keep up, or the accretion of
> some threshold level of knowledge.  All of these things *might* indeed
> follow from a Singularity, but might not, making it important to
> distinguish between the likely *effects* of a Singularity and *what
> the Singularity actually is*.  The Singularity *actually is* the
> creation of smarter-than-human intelligence, but there are many
> speculative scenarios about what would happen thereafter as there are
> people who have heard about the idea.
>
> The number of completely incompatible Singularity definitions being
> tossed around on this list underscores the need for a return to the
> original, simple, and concise definition, which, in that it doesn't
> make a million and one side claims, is also the easiest to explain to
> those being exposed to the idea for the first time.  We have to define
> our terms to have a productive discussion, and the easiest way to
> define a contentious term is to make the definition as simple as
> possible.  The reason that so many in the intellectual community see
> Singularity discussion as garbage is because there is so little
> definitional consensus that it's close to impossible to determine
> what's actually being discussed.
>
> Smarter-than-human intelligence.  That's all.  Whether it's created
> through Artificial Intelligence, Brain-Computer Interfacing,
> neurosurgery, genetic engineering, or the fundamental particles making
> up my neurons quantum-tunneling into a smarter-than-human
> configuration - the Singularity is the point at which our ability to
> predict the future breaks down because a new character is introduced
> that is different from all prior characters in the human story.
>
> The creation of smarter-than-human intelligence is called "the
> Singularity" by analogy to a gravitational singularity, not a
> mathematical singularity.  Nothing actually goes to infinity.  In
> physics, our models of black hole spacetimes spit out infinities
> because they're fundamentally flawed, not because nature itself is
> actually producing infinities.  Any relationship between the term
> Singularity and the definition of singularity that means "the quality
> of being one of a kind" is coincidental.
>
> The analogy of our inability to predict the physics past the event
> horizon of a black hole with the creation of superintelligence is apt,
> because we know for a fact that our minds are conditioned, both
> genetically and experientially, to predict the actions of other human
> minds, not smarter-than-human minds.  We can't predict what a
> smarter-than-human mind would think or do, specifically.  But we can
> predict it in broad outlines - we can confidently say that a
> smarter-than-human intelligence will 1) be smarter-than-human (by
> definition), 2) have all the essential properties of an intelligence,
> including the ability to model the world, make predictions, synthesize
> data, formulate beliefs, etc., 3) have starting characteristics
> dictated by the method of its creation, 4) have initial motivations
> dictated by its prior, pre-superintelligent form, 5) not necessarily
> display characteristics similar to its human predecessors, and so on.
> We can predict that a superintelligence would likely be capable of
> putting a lot of optimization pressure behind its goals.
>
> The basic Singularity concept is incredibly mundane.  In the midst of
> all this futuristic excitement, we sometimes forget this.  A single
> ge

Re: [singularity] Defining the Singularity

2006-10-10 Thread Ben Goertzel

Hi,


The reason that so many in the intellectual community see
Singularity discussion as garbage is because there is so little
definitional consensus that it's close to impossible to determine
what's actually being discussed.


I doubt this...

I think the reason that Singularity discussion is disrespected is
that, no matter how you work the specifics of the definition, it all
seems science-fictional to most people...

and we Singularitarians are disrespected for taking sci-fi speculation
too seriously (instead of focusing on money and family, getting a
haircut and getting a real job, etc. etc. ;-)


The basic Singularity concept is incredibly mundane.  In the midst of
all this futuristic excitement, we sometimes forget this.  A single
genetically engineered child born with a substantially
smarter-than-human IQ would constitute a Singularity, because we would
have no ability to predict the specifics of what it would do, whereas
we have a much greater ability to predict the actions of typical
humans.


I think this is not necessarily true.

a)
A very superordinarily smart genetically engineered child could well
waste all its time playing five-dimensional chess, or World of
Warcraft for that matter ... it could then wind up being pretty easy
to predict.

Similarly, the emergence of an Einstein on a planet of human retards
(er, "differently intellectually advantaged" indviduals...) would not
necessarily be a Singularity type event for that planet.  The alien
Einstein might well just stay in the corner meditating and
theorizing

b)
I find it very unlikely, but I can imagine a Singularity scenario in
which there is strong nanotech plus a host of highly powerful narrow
AI programs, but no artificial general intelligence beyond the human
level.  This could result in massive transformations of reality as we
know it, at an incredibly rapid rate, yet with no superhuman
intelligence.  This would be a Kurzweilian Singularity, and whether it
would be a Vingean Singularity comes out to depend on the
particularities of how one disambiguates the natural language concept
of "intelligence"...


I happen to think that the emergence of superhuman, rapidly
self-improving AI **is** what is going to characterize the Singularity
... but, I don't agree that this is the only species of Singularity
worth talking or thinking about...

-- Ben



Re: [singularity] i'm new

2006-10-09 Thread Ben Goertzel

Hi,

On 10/9/06, Bruce LaDuke <[EMAIL PROTECTED]> wrote:

Just a sidebar on the whole 2012 topic.

It's quite possible that singularity is **already here** as new knowledge
and that the only barrier is social acceptance.  Radical new knowledge is
historically created long before it is accepted by society or
institutionalized and that often outside the boudaries of the academic
establishment.


Singularity requires realized technology, not just understanding.

As it happens, I think I do understand how to create a superhuman AI,
but even if I'm right (which you are free to doubt, of course) this
knowledge in itself is just a potential rather than actual
Singularity

If the high likelihood of the coming of a Singularity were widely
accepted, then the expected time till the Singularity comes would
decrease a lot, because of increased financial and attentional
resources paid to Singularity-enabling technologies...

Ben


