On 2/22/20 1:22 AM, WriterOfMinds wrote:
... I recommend looking up the "orthogonality thesis" and doing some reading thereon. Morality, altruism, "human values," etc. are distinct from intellectual capacity, and must be intentionally incorporated into AGI if you want a complete, healthy artificial personality.
--------------------------------------------------------------

I don't think the orthogonality thesis is relevant. It talks about any combination of goals and intelligence, which is not what the "General" part of AGI is about. General intelligence isn't about one goal; it's about an agent being useful in whatever circumstances it finds itself in, doing the right thing in its own estimation of what the right thing is. To argue that an agent has "general" intelligence yet gets hung up on a specific goal is to say that the agent doesn't function very well "in general."

The discussion is not about superintelligence (I doubt there is such a thing) but about what it takes to become "more intelligent," and I contend that it takes a notion of what is "good" to be able to collect the conventional wisdom needed to make good choices.

Since I was a kid (many years ago), intelligence was seen as a good thing. It was considered a desirable trait, and people were encouraged to cultivate it. Times change...

I wouldn't say that I argue for "morality," but rather that there needs to be a method to determine what is "better." It doesn't have to be related to the Ten Commandments; it just has to be a method of evaluating what is best. To say that a "goal" is the way you determine what is best (e.g., does it "lead to" the goal?) is to miss the point that goals need to constantly change when circumstances change. If you get this, then you see that the challenge is to associate "what you might do" with the likely outcome. That mechanism of association is what the AGI has to incorporate and build.

I'm not any kind of an expert on AGI or AI. I have lurked here for years. I'm waiting for someone to describe an architecture, or design, that acquires the means to make good choices. What do those means look like? There are other interesting issues, but they would unfold "naturally" as one implements.

enough...
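
[Editor's sketch, not part of the original post: one way to read "associate what you might do with the likely outcome" is as a loop that scores candidate actions by their predicted outcomes under a replaceable notion of "better." The function names (predict_outcome, better, choose) and the toy scoring rule are hypothetical illustrations, not anyone's proposed architecture.]

    # Toy sketch in Python: associate candidate actions with likely outcomes,
    # then rank them with an evaluation that can be swapped out as
    # circumstances (and therefore goals) change.

    def predict_outcome(action, situation):
        # Hypothetical stand-in for a learned model of "what happens if I do this."
        return situation + [action]

    def better(outcome):
        # Current notion of "what is best" -- deliberately replaceable.
        return -len(outcome)

    def choose(actions, situation, evaluate=better):
        # Pick the action whose predicted outcome scores highest
        # under the current evaluation.
        return max(actions, key=lambda a: evaluate(predict_outcome(a, situation)))

    if __name__ == "__main__":
        print(choose(["wait", "act"], situation=["context"]))

The point of the sketch is only that the evaluation function is a parameter, not a fixed goal baked into the agent.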