What worries me is that the founder of this company subscribes to the
philosophy of Objectivism, and what implications that might have for
the company's prospects of achieving friendly AI. I do not know about
the rest of their team, but some of them use the word "rational" a
lot, which could be a hint.



I am well aware that Ayn Rand, the founder of Objectivism, attaches
somewhat non-standard meanings to words like "selfishness" and
"altruism", but her main point is that altruism is the source of all
evil in the world, and that selfishness ought to be the main virtue
of all mankind. Instead of "altruism" she often uses the word
"selflessness", which better explains her seemingly odd position.
What she essentially means is that all the evil of the world stems
from people who "give up their values, and their self" and thereby
become mindless evildoers who respect others as little as they
respect themselves. While this psychological claim in isolation could
be worth noting, and might help explain some collective madness,
especially that of the last century, I still feel her philosophy is
dangerous because she conflates her very specific concept of
"selflessness" with the commonly understood concept of altruism, in
the sense of valuing the well-being and happiness of others. Is this
mix-up accidental or intended? In her novel The Fountainhead you even
get the impression that she does not think it is possible to combine
altruism with creativity and originality, since all the "altruistic"
characters in the book are incompetent copycats who merely imitate
others.



Her view of the world also seems to completely ignore another
category of potential evildoers: selfish people who see no problem in
using whatever means they see fit, including violence, to achieve
their goals; people who simply see no problem in killing or torturing
others. Why does she ignore this group? Because she does not think
they exist?



My personal opinion is that Objectivism is a case of what could be
called "the werewolf fallacy". For example, I could make a case for
the following philosophy: "Werewolves as described in literature
would be bad for humanity, and if we encounter werewolves, we should
fight them with whatever means we see fit!" This statement is in
itself completely true and coherent, and it would be possible to
write books on the subject that could seem to make sense. The only
problem is, of course, that there are no werewolves, and there are
far more important things to do than to go around preparing to fight
them! Similarly, I do not think the "selfless people" Ayn Rand
describes exist in any large numbers, or at least they are certainly
not the main source of evil in the world.



How Objectivism could feel like "home" is something I personally
cannot understand. If a person is less capable of understanding other
people, I guess it could make some sense. Social life could be hard
for such a person: they would often hurt other people by mistake,
make others annoyed or angry, and frequently make enemies. Ayn Rand
gives them a very comfortable answer, namely that it is okay, even
virtuous, not to understand others as long as you are not physically
aggressive. An agenda for peaceful psychopathy, if you like. So far
so good; I do not expect everyone to be empathetic, and motivating
the need for respect rationally, by the benefits of cooperation,
seems like a reasonable trade-off. But Ayn Rand goes a step too far
when she outright attacks altruism and people who value the
well-being of others! She definitely crosses a line there!



As a theoretician of general intelligence I would also say that Ayn
Rand's notion of "selflessness" is outright bizarre if interpreted
literally. An intelligent being cannot choose to "give up its
values", since all its choices are already based upon them. Her
conclusions are therefore confusing.



Because this philosophy is controversial, it raises some interesting
questions about Adaptive AI's plans for friendly AI. *What values an
objectivist would give to an AGI seems like a complete paradox to
me.* Would he make an AGI that is obedient only to its master and
creator, or would he make an AGI system that cares only about
protecting and sustaining its own life? In the first case, the AGI
would truly become a selfless, and therefore evil, soul in Ayn Rand's
own sense of the word, an evil soul that is also super intelligent.



On the other hand, I cannot understand what selfish interest an
objectivist AGI designer could find in creating a selfish super
intelligent AGI system that would likely become a superior
competitor. Maybe such an AGI system would decide, much like the
fictional Skynet, that humans are the most imminent threat to its
survival, and make us its enemy?



I bet a strong enough AGI system could kill us even without the use
of offensive violence in the sense Ayn Rand uses the word. It would
just need to obtain exclusive legal ownership of all the land we need
to live on, all the food we need to eat, and all the air we need to
breathe. Then it could kill us in self-defence because we trespass on
its property. Even Ayn Rand sees no moral problem in using defensive
violence to protect material property from being stolen.



Well, let me just say that I would be concerned if someone created a
selfish super intelligent AGI system that does not value the
well-being of me and the rest of us humans, except when it can see
benefits for its own survival. Out of fear for my own life, and the
lives of my descendants, I would not support your AGI initiative!
Even a sentimental and altruistic person like me has that much sense
of self-defence! :-)



That said, I think Adaptive AI's definition of general intelligence
seems pretty reasonable, and their plans for development seem well
thought out. I also found some of their thoughts on evolution and AGI
noteworthy. But my feelings are mixed about their strength in numbers
and the hopes for progress it gives. To me, altruistic AGI just seems
a lot safer than selfish AGI!



/Robert Wensman
