----- Original Message ----- 
From: "DEREK ZAHN" <[EMAIL PROTECTED]>
To: <agi@v2.listbox.com>
Sent: Monday, March 26, 2007 10:07 AM
Subject: [agi] AGI interests


> It would be interesting to see what basic interests and views the members
of
> this list hold.  For a few people, published works answer this pretty
> clearly but that's not true for most list members.

I think the brain, with its huge number of neurons and connections, is a
poor example to emulate when creating an AGI.  I believe in "when in Rome,
do as the Romans do," by which I mean that problems implemented on
procedural CPUs require computer solutions, not human solutions.  I am
very interested, however, in knowledge gained about our brains, because some
of these techniques might help in different areas of an AGI, even if not
exactly in the way they occur in humans.  I want to create intelligence on
computers that exist today, and specifically the ones I already own.  I know
for a fact that I won't be able to create intelligence with them if I try to
duplicate the hardware in our brains (in software), so my solution lies
elsewhere.

I don't think a "small chunk" of code underlies our brain activity, and I
don't think that will be true for a computer AGI either.  I could be wrong,
but that is my conclusion based on the information I have seen.  I see no
hint of a magic bullet in humans or in any AI projects.

I don't feel the topic of "how much CPU power is needed to create AGI" is
very helpful, for the above reasons.  Others hold differing opinions, and
even if I could, I wouldn't discourage others from debating this point.

I am very interested to see what others are planning in the way of learning
algorithms, data structures, and the nuts and bolts needed to get an AGI
going.  I have a more practical bent than many on this list, but I think I do
have a plan, and I am always open to debating any points made by me or by
others.  If I didn't think I could defend my positions on AGI topics, I
wouldn't hold them.

I think AGI is so difficult that it will be impossible for any of my designs
to create a singularity in any reasonable time frame, if ever.  I can say
this with confidence because I know how my software has behaved over a very
long time.  I never get my programs to approach the level I am trying to
achieve, and I see no way that could happen from any of my designs.  I have
never been happily surprised that my code turned out better than I intended.
Never.  I don't *believe* anything without evidence or the reasonable
likelihood that there is evidence.

-- David Clark


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?list_id=303