Wouldn't you say that objectivity limits choice by limiting perception?

Pat McKown

On 8/5/07, Samantha Atkins <[EMAIL PROTECTED]> wrote:
> On 7/26/07, Robert Wensman <[EMAIL PROTECTED]> wrote:
> >
> > What worries me is that the founder of this company subscribes to the
> > philosophy of Objectivism, and the implications this might have for the
> > company's possibility of achieving friendly AI. I do not know about the
> > rest of their team, but some of them use the word "rational" a lot, which
> > could be a hint.
> >
>
>
> You do not wish the AGI to be rational?  :-)  Seriously, if you knew Peter
> even lightly you would know he is in no way of the ilk of the worst of
> those who may call themselves "objectivist". He is eminently sensible,
> ethical and committed. Your remark also imho displays a very shallow notion
> of objectivist philosophy.
>
> > I am well aware that Ayn Rand, the founder of Objectivism, uses slightly
> > non-standard meanings for words like "selfishness" and "altruism", but
> > her main point is that altruism is the source of all evil in the world,
> > and that selfishness ought to be the main virtue of all mankind. Instead
> > of altruism she often also uses the word "selflessness", which better
> > explains her seemingly odd position. What she essentially means is that
> > all the evil of the world stems from people who "give up their values,
> > and their self" and thereby become mindless evildoers who respect others
> > as little as they respect themselves. While this psychological statement
> > in isolation could be worth noting, and might help explain some
> > collective madness, especially from the last century, I still feel her
> > philosophy is dangerous because she mixes up her very specific concept
> > of "selflessness" with the commonly understood concept of altruism, in
> > the sense of valuing the well-being and happiness of others. Is this
> > mix-up accidental or intended? In her novel The Fountainhead you even
> > get the impression that she doesn't think it is possible to combine
> > altruism with creativity and originality, as all "altruistic" characters
> > of her book are incompetent copycats who just imitate others.
> >
>
> If you had actually read her works on this subject, especially in this case
> "The Virtue of Selfishness", I think you would not have the problem
> described above.
>
>
>
> > Her view of the world also seems to completely ignore another category
> > of potential evildoers: selfish people who just do not see any problem
> > with using whatever means they see fit, including violence, to achieve
> > their goals. People who just do not see there is "any problem" in
> > killing or torturing others. Why does she ignore this group of people?
> > Because she does not think they exist?
> >
>
> OK. You obviously have no real knowledge of objectivism.
>
>
>
> > So because this philosophy is controversial, it raises some interesting
> > questions about Adaptive AI's plans for friendly AI. *What values an
> > objectivist would give to an AGI seems like a complete paradox to me.*
> > Would he make an AGI that is only obedient to its master and creator, or
> > would he make an AGI system that only cares about protecting and
> > sustaining its own life? But in the first case, the AGI would truly
> > become a selfless, and therefore evil, soul in Ayn Rand's very meaning,
> > an evil soul that is also super intelligent.
> >
>
>
> If you actually understood objectivism you would understand that reason,
> intelligence and ability are seen as virtues, and real objectivists deeply
> desire to see them increase regardless of whether that manifestation is in
> themselves or in others. It is not remotely about being King of the Hill
> or some such nonsense. It is not at all clear whether a real AGI would be
> selfless. You are btw mistaken that selflessness per se is the essence of
> evil in objectivism.
>
>
> > On the other hand, I cannot understand what selfish interest the
> > objectivist AGI designer could find in creating a selfish super
> > intelligent AGI system that would likely become a superior competitor.
> > Maybe such an AGI system would decide, much like the fictitious Skynet,
> > that humans are the most imminent threat to its survival, and make us
> > its enemy?
> >
>
> Objectivists welcome superior ability as all profit from greater
> intelligence and productive ability in the world.  That an AGI may turn
> against us is as much of a concern for an objectivist as anyone else.
>
>
>
>
> > I bet a strong enough AGI system could kill us even without the use of
> > offensive violence in the sense Ayn Rand uses the word. I guess it just
> > needs to obtain exclusive legal ownership of all the land we need to
> > live on, all the food we need to eat, and all the air we need to
> > breathe. Then it could just kill us in self-defence because we trespass
> > on its property. I know even Ayn Rand sees no moral problem in using
> > defensive violence to defend material property that is being stolen.
> >
>
> Again, you do not know what you are talking about.
>
>
> - samantha
>
> -----
> This list is sponsored by AGIRI: http://www.agiri.org/email
> To unsubscribe or change your options, please go to:
> http://v2.listbox.com/member/?&;
