On Feb 17, 2008 6:32 PM, Ben Goertzel <[EMAIL PROTECTED]> wrote:
> I don't assume that all successful AGI's must be humanlike...
Neither do I - on the contrary, I think a humanlike AGI isn't going to happen, in the same way that we never did achieve birdlike flight. But the only reason we have for believing ill-posed problems (i.e. nearly all the problems presented by the real world) to be solvable at all is that humans (in some cases) provide an existence proof. Where a problem is ill-posed, humans can't come close to solving it, and we can't point to a specific human limit that, if overcome, would let us solve it, the reasonable default conclusion is that it's not solvable.

> Google is not an AGI, so I have no idea why you think this proves
> anything about AGI ...

It doesn't. It does, however, prove something about the contents of the Web, and constitutes a reason...

> I strongly suspect there is enough information in the
> text online for an AGI to learn that water flows downhill in most
> circumstances, without having explicit grounding...

...for disagreeing with you on this point.

-------------------------------------------
agi Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/