Re: [singularity] Defining the Singularity

2006-10-22 Thread Starglider
Samantha Atkins wrote:
 Of late I feel a lot of despair because I see lots of brilliant people
 seemingly mired in endlessly rehashing what-ifs, arcane philosophical
 points and willing to put off actually creating greater than human
 intelligence and transhuman tech indefinitely until they can somehow
 prove to their and our quite limited intelligence that all will be well.

As far as I'm aware the only researcher taking this point of view ATM is
Eliezer Yudkowsky (and implicitly, his assistants). Everyone else with
the capability is proceeding full steam ahead (at least, to the extent
that resources permit) with AGI development. I'm somewhat unusual in
that I'm proceeding with AGI component development, but I accept that
even if I'm successful I can't safely assemble those components before
someone comes up with a reasonably sound FAI scheme (while taking
moderately paranoid precautions against takeoff in the larger
subassemblies). Who other than Eliezer are you criticising here?

 I see brilliant idealistic people who don't bother to admit or examine
 what evil is now bearing down on them and their dreams because they
 believe the singularity is near inevitable and will make everything all
 better in the sweet by and bye.

That's true, but not so much of an issue. We don't have to actually
solve these problems directly, and as I've said most researchers are
already working as fast as they can given current resources. As such
I don't think a fuller appreciation of what's currently wrong with the
world would make much difference.

Michael Wilson
Director of Research and Development
Bitphase AI Ltd - http://www.bitphase.com


-
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]


[singularity] Singularity Definition

2006-10-22 Thread Bruce LaDuke
Here's my first draft of a comprehensive definition of singularity.  This is 
pretty quick and I want to get more explicit when I have more time, but I'd 
like to go ahead and get thoughts from this group on how to make it more 
accurate and comprehensive.


http://www.hyperadvance.com/wiki/index.php?title=Singularity

Kind Regards,

Bruce LaDuke
Managing Director

Instant Innovation, LLC
Indianapolis, IN
[EMAIL PROTECTED]
http://www.hyperadvance.com





Re: [singularity] Defining the Singularity

2006-10-22 Thread deering



All this talk about proving something before doing it is beside the
point. We, as a species, as a government, as scientists, as individuals,
never prove anything before we try it. We just don't. Think of the many
examples of new stuff we have done. Have we proved any of them would be
safe before we did them? No. We're not going to prove that the
Singularity is safe or Friendly before we make one. I doubt if it's
possible for a Singularity to be safe or Friendly, but I can't prove it.

If you really were interested in working on the Singularity you would be
designing your education plan around getting a job at the NSA. The NSA
has the budget, the technology, the skill set, and the motivation to
build the Singularity. Everyone else (universities, private companies,
other governments) is lacking in some aspect compared to the NSA. A
close second is Japan. They have built robots that lack only a brain to
be truly useful. They build supercomputers. They don't want to be number
two in this race, and they know it's a race, and they know who they are
racing against.



Re: [singularity] Defining the Singularity

2006-10-22 Thread Chuck Esterbrook

On 10/22/06, Ben Goertzel [EMAIL PROTECTED] wrote:

This particular potential investor is still thinking about it ... he's
currently on vacation and will discuss further when he gets back.  Of course
this was an unusual conversation due to the amputation theme (and the amount
of wine being consumed during the conversation, as it was over dinner rather
than in an office setting), but other than that it was pretty standard.
Skepticism about AI runs really deep, it seems.


Plus he was a pretty big dude!  :-)

At least if you get his funding he will also contribute inspiration.
Normally when a person says "chop chop" it means "right away; quickly,"
but when *he* gets on the phone and says "chop chop" it will have an
extra layer of meaning!

(And hey, that was Orange County, not LA.)

I know you must be frustrated with fund raising, but investor
reluctance is understandable from the perspective that for decades
now there has always been someone who said we're N years from full-blown
AI, and then N years passed with nothing but narrow AI progress.
Of course, someone will end up being right at some point.

For the record, at the same event, Peter Voss of Adaptive AI
(http://www.adaptiveai.com/) stated his company would have AGI in 2
years. I *think* he qualified it as being at the level of a 10-year-old
child. Help me out on that, if you remember.

I've started saving up for my robot butler...

-Chuck



Re: [singularity] Defining the Singularity

2006-10-22 Thread Ben Goertzel
Japan, despite a lot of interest back in 5th Generation computer days, seems to have a difficult time innovating in advanced software. I am not sure why.
I talked recently, at an academic conference, with the guy who directs robotics research labs within ATR, the primary Japanese government research lab. He said that at the moment the powers that be there are not interested in funding cognitive robotics.
So how do we get you and your team the necessary funding ASAP to complete your work? I don't know the legal issues involved, but a bunch of very interested fans of the Singularity could quite possibly put together the $5 million or so I think you last said you needed, pretty quickly. This was brought up quite some time ago, by me at least, and at the time I think I recall you saying that the right structure wasn't in place to accept such funding. What is that structure, and what is in the way of setting it up?
Well, $5M would be great and is a fair estimate of what I think it would take to create Singularity based on further developing the current Novamente technology and design.
However, it is quite likely sensible to take an incremental approach. For instance, if we were able to raise $500K right now, then during the course of a year we could develop rather impressive demonstrations of Novamente proto-AGI technology, which would make raising the rest of the money easier.
The structure is indeed in place to accept such funding: Novamente LLC, a Delaware corporation that owns the IP of the Novamente AI Engine, and is currently operating largely as an AI consulting company (with a handful of staff in Brazil, as well as me here in Maryland, Bruce Klein in San Francisco, and Ari Heljakka in Finland). Novamente LLC is currently paying 2.5 programmers to work full-time toward AGI (not counting the portion of my time that is thus expended). But alas, this is not enough to get us there very fast...

If for some reason a major funding source preferred to fund an AGI project in a nonprofit context, we also have AGIRI, a Delaware nonprofit corporation. I am not committed to doing the Novamente AI Engine in a for-profit context, although that currently seems to me to be the most rational choice. My current feeling is that I would only be willing to take it nonprofit in the context of a very significant donation (say $3M+, not just $500K), because of a fear that follow-up significant nonprofit donations might be difficult to come by, but this attitude may be subject to change.
Bruce Klein has been leading a fundraising effort for nearly a year now with relatively little success. To be honest, we are at the point of putting raising funds explicitly for building AGI on the backburner, and focusing on raising funds for commercial projects that will pay for the development of various components of the AGI and, if they succeed big-time, will make us rich enough to pay for development of the AGI in a more direct and focused way. This is rather frustrating, because if we had a decent amount of funding we could progress much more rapidly and directly toward the end goal of an ethically positive AGI system based on the Novamente architecture.
The main issue that potential investors/donors seem to have may be summarized in the phrase perceived technology risk. In other words: We have not been able to convince anyone with a lot of money that there is a reasonable chance we can actually succeed in creating an AGI in less than a couple decades. Potential investors/donors see that we are a team of very smart people with some very sophisticated and complex ideas about AGI, and a strong knowledge of the AI, computer and cognitive science fields -- but they cannot understand the details of the Novamente system (which is not surprising since new Novamente team members take at least 6 months to really get it), and thus cannot make any real assessment of our odds of success, so they just assume our odds of success are low.
As an example, in a conversation over dinner with a wealthy individual and potential investor in LA two weeks ago, I was asked:

Him: But still, I can't understand why you haven't found investment money yet. I mean, it should be obvious to potential investors that, if you succeed, the potential rewards are incredible.

Me: Yes, that's obvious to everyone.

Him: So the problem is that no one believes you can really do it.

Me: Yes. Their estimates of our odds of success are apparently very low.

Him: Well, how can I know if you yourself really believe that you can create an AGI in a feasible amount of time? You claim you can create a human-level AI in four years... but how can I believe you? How do I know you're not just making that up in order to get research money to play with?

My reply was: Well look, there are two aspects. There's engineering time, and then teaching time. Engineering time is easier to estimate. I'm quite confident that if I could just re-orient the Novamente LLC staff currently working on consulting projects to the AGI project, then we could finish engineering the Novamente system in 2-3 years' time. It's complex, and 

Re: [singularity] Defining the Singularity

2006-10-22 Thread Ben Goertzel
Hi,

 I know you must be frustrated with fund raising, but investor
 reluctance is understandable from the perspective that for decades
 now there has always been someone who said we're N years from full
 blown AI, and then N years passed with nothing but narrow AI progress.
 Of course, someone will end up being right at some point.
Sure ... and most of the time, the narrow AI progress achieved via AI-directed funding has not even been significant or useful. However, it seems to me that the degree of skepticism about AGI goes beyond what is rational. I attribute this to an unconscious reluctance on the part of most humans to conceive that **we**, the mighty and glorious human rulers of the Earth, could really be superseded by mere software programs created by mere mortal humans. Even humans who are willing to accept this theoretically don't want to accept it pragmatically, as something that may occur in the near term.
After all, there seems to be a lot more cash around for nanotech than for AGI, and that is quite unproven technology also -- and technology that is a hell of a lot riskier and more expensive to develop than AGI software. It is not the case that investors are across the board equally skeptical of all unproven technologies -- AI seems to be viewed with an extra, and undeserved, degree of skepticism. 
 For the record, at the same event, Peter Voss of Adaptive AI
 (http://www.adaptiveai.com/) stated his company would have AGI in 2
 years. I *think* he qualified it as being at the level of a 10 year
 old child. Help me out on that, if you remember.

I could help you out, but I won't, because I believe Peter asked those of us at that meeting **not** to publicly discuss the details of his presentation there (although, frankly, the details were pretty scanty). If he wants to chip in some more info himself, he is welcome to...
Peter has been more successful than Novamente has at fundraising, during the last couple years. I take my hat off to him for his marketing prowess. I also note that he is a lot more experienced than me on the business marketing side ... Novamente LLC is chock full of brilliant techie futurists, but we are not sufficiently staffed in terms of marketing and sales wizardry.
I have my disagreements with Peter's approach to AGI, inasmuch as I understand it (I know the general gist of his architecture but not the nitty-gritty details). However, I don't want to get into that in detail on this list, for fear of disclosing aspects of Peter's work that he may not want disclosed. My basic issue is that I do not, based on what I know of it, see why his architecture will be capable of representing and learning complex knowledge. I am afraid his knowledge representation and learning mechanisms may be overfitted, to an extent, to early-stage infantile-type learning tasks. Novamente is more complex than his system, and thus getting it to master infantile learning may be a little trickier than with his system (this is one thing we're working on now ... and of course I can't make any confident comparisons, because I have never worked with Peter's system, and what I do know about it is quite out-of-date). But Novamente is designed from the start to be able to deal with complex reasoning such as mathematics and science, and so once the infantile stage is surpassed, I expect progress to be EXTREMELY rapid.
Having summarized very briefly some of my technical concerns about Peter's approach, I must add that I respect his general thinking about AI very much, and admire his enthusiasm and focus at pursuing the AGI goal. I hope his approach **does** succeed, as I think he would be a responsible and competent AGI daddy -- however, based on what I know, I do think that Novamente has far higher odds of success...
-- Ben



Re: [singularity] Defining the Singularity

2006-10-22 Thread Mark Nuzzolilo II


Well, there is funding like in the Methuselah Mouse project.  I am one of 
the 300 myself.   With enough interested people it should not be that 
hard to raise $5 million even on a very long term project.  Most of us seem 
to think that conquering aging will take longer than AGI but there are 
fairly successful funding efforts in that space.   It is a lot easier I 
imagine to find many people willing and able to donate on the order of 
$100/month indefinitely to such a cause than to find one or a few people 
to put up the entire amount.


I am sure that has already been kicked around.  Why wouldn't it work 
though?


You can't just snap your fingers and raise $5 million for a cause with even 
less public support than anti-aging research, whether you have 1 person with 
$5 million or 4,167 people giving $1,200 a year.  I fail to see how 
the problem would be simplified in this way.  I doubt any AGI company could, 
at this point, find thousands of people willing to give even $10/month, let 
alone $100.  But that doesn't mean that it won't be possible in a few years. 
AGI could, at any time, receive the funding and publicity that 
nanotechnology has seen, especially since the late 1990s.

