On 10/24/07, Mike Tintner <[EMAIL PROTECTED]> wrote:

> Every speculation on this board about the nature of future AGI's has been
> pure fantasy. Even those which try to dress themselves up in some semblance
> of scientific reasoning. All this speculation, for example, about the
> friendliness and emotions of future AGI's has been non-sense - and often
> from surprisingly intelligent people.
>
> Why? Because until we have a machine that even begins to qualify as an
> AGI - that has the LEAST higher adaptivity - until IOW AGI's EXIST - we
> can't begin seriously to predict how they will evolve, let alone whether
> they will "take off." And until we've seen a machine that actually has
> functioning emotions and what purpose they serve, ditto we can't predict
> their future emotions.
>
> So how can you cure yourself if you have this apparently incorrigible need
> to produce speculative fantasies with no scientific basis in reality
> whatsoever?
Mike,

It is quite rational to try to figure out, for instance, "If one were to successfully create a human-level AGI based on the Novamente design, what would its consciousness, emotions, etc. be like?" Of course this is speculation, given that it is not yet scientifically known whether the Novamente design will be adequate to lead to human-level AGI - but I believe it very likely will be.

I don't see this sort of exercise as different in character from speculating about how a lunar vehicle will behave once it gets to Titan, even though we have never yet sent a vehicle to Titan...

Presumably the difference in our attitudes is mainly that you rate the odds of the Novamente design succeeding as vanishingly low, whereas I do not.

-- Ben G

-----
This list is sponsored by AGIRI: http://www.agiri.org/email
To unsubscribe or change your options, please go to:
http://v2.listbox.com/member/?member_id=4007604&id_secret=56980237-ca2f27
