2008/6/27 Steve Richfield <[EMAIL PROTECTED]>:
> Russell and William,
>
> OK, I think that I am finally beginning to "get it". No one here is really
> planning to do wonderful things that people can't reasonably do, though
> Russell has pointed out some improvements which I will comment on
> separately.

I still don't think you do. "General", as far as I am concerned, means
it can reconfigure itself to do things it couldn't do previously, just
as a human learns differentiation. So when you ask for a shopping list
of things for it to do, you will get our first steps and things we
know can be done (because they have been done by humans) for testing,
etc.

Consider a human-computer team. The human can code/configure the
machine to help him or her do pretty much anything. I just want to
shift that coding/configuring work to the machine. That is hard to
convey in concrete examples, although I tried.

> I am interested in things that people can NOT reasonably do. Note that many
> computer programs have been written to way outperform people in specific
> tasks, and my own Dr. Eliza would seem to far exceed human capability in
> handling large amounts of qualitative knowledge that work within its
> paradigm limits. Hence, it would seem that I may have stumbled into the
> wrong group (opinions invited).

Probably so ;) The solution to a specific task is not within the
remit of the study of generality. You would be like someone going up
to Turing and asking him what specific tasks the ACE was going to
solve. If he said cryptography, you would go on about the Bombe
cracking Enigma.

> Unfortunately, no one here appears to be interested in understanding this
> landscape of solving future hyper-complex problems, but instead apparently
> everyone wishes to leave this work to some future AGI, that cannot possibly
> be constructed in the short time frame that I have in mind. Of course,
> future AGIs are doomed to fail at such efforts, just as people have failed
> for the last million years or so.
>
If humans and AGIs are doomed to fail at the task, perhaps it is impossible?

  Will Pearson

