>
> You only need emotions when you're dealing with problems that are
> open-ended, ill-structured, and potentially involve infinite reasoning.
> (Chess qualifies as that for a human being, not for a program.)


Those with severed connections to the amygdala (the brain's emotional
machinery) behave somewhat indifferently in any given activity, not knowing
which choice is best. These individuals often make poor decisions even
after the options have been identified. Emotional machinery has an
intelligence of its own and, when properly understood and harnessed by
cognitive machinery (the neocortex), can reinforce what it means to be
rational.

> Emotion corresponds to goals.
> Our behavior tries to avoid bad emotions and to obtain good emotions.


Emotion may correspond to goals, but in and of itself, without a
well-fashioned neocortex, emotion's goals are short-sighted.

> I assume emotions already existed in very early animals.
> Animals have much less understanding than human beings do.
> But nevertheless they do things that are useful for their existence.
> Therefore I conclude that they simply follow their emotions.


Any animal with an amygdala has emotions. It is probably safe to say that
the ratio of amygdala size to the rest of the brain determines emotional
aptitude. Species with a neocortex have the capacity to control emotional
responses once those responses are understood cognitively. If you were to
consider cognition alone as consciousness, then animals with a smaller
neocortex, or none at all, could be considered less conscious or
unconscious, though clearly animals are conscious, or alive, in their own
ways. Of course, this all depends on the contingencies an observer wishes
to follow.

> To venture some science fiction:
> There will surely be a time of superhuman AGI.
> We will be to superhuman AGI what DNA is to us now.
> That means that our goals will be deeply coded into the structure of
> future AGI, and our goals will determine the behavior of AGI. The
> existence of DNA was the key for life.
> The existence of human beings is the key for superhuman AGI.
> We are also beginning to be able to change our own DNA. Similarly,
> superhuman intelligence will sooner or later also be able to change its
> deepest goals and commands.
> We will not be able to ensure that AGI always follows our goals.
> But this belongs more on the singularity discussion list.


What an AGI will do is a fun and highly debated question. There is no
evidence to suggest an AGI will be like us, but it is certain that one will
not exist without our tinkering. I think it likely that an AGI, once
self-aware, could fashion a copy of itself, which could in turn create a
series of narrow AIs to meet the endless whims of humans. The mother AGI
might be indifferent, or it might go off to do exploration interesting only
to itself and to the trans(post)humans who want to be more like it. That
said, I know of no evidence that modifying one's intelligence would make
life any more interesting, nor do I see an AGI willing itself to 'want' and
explore further. With intelligence above a certain threshold, wouldn't
things seem rather uninteresting, just more of the same? At any rate, that
is my current conclusion about what an AGI might be like and about
human-modified intelligence as it relates to AGI-human symbiosis.

Nathan Cravens

-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/