In my case (http://nars.wang.googlepages.com/), that scenario won't
happen --- it is impossible for the project to fail. ;-)

Seriously, if it happens, it will most likely be because the control
process is too complicated to be handled properly by the designer's
mind. Or it is possible that the system is designed and developed as
planned, but the education process turns out to be too long and too
expensive; just think of the corresponding process for a human baby.

Pei

On 9/25/06, Joshua Fox <[EMAIL PROTECTED]> wrote:
I hope this question isn't too forward, but it would certainly help clarify
the possibilities for AGI.

To those doing AGI development: If, at the end of the development stage of
your project -- say, after approximately five years -- you find that it has
failed technically to the point that it is not salvageable, what do you
think is most likely to have caused it? Let's exclude financial and
management considerations from this discussion; and let's take for granted
that a failure is just a learning opportunity for the next step.

Answers can be oriented to functionality or implementation. Some examples:
true general intelligence in some areas, but so unintelligent in others as
to be useless; super-intelligence in principle but severely limited by
hardware capacity to the point of uselessness. But of course, I'm interested
in _your_ answers.

Thanks,

Joshua


-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/[EMAIL PROTECTED]
