--- Richard Loosemore <[EMAIL PROTECTED]> wrote:
> This is nonsense: the result of giving way to science fiction fantasies
> instead of thinking through the ACTUAL course of events. If the first
> one is benign, the scenario below will be impossible, and if the first
> one is not benign, the scenario below will be incredibly unlikely.
On 10/22/07, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> My own opinion is that the first AGI systems to be built will have
> extremely passive, quiet, peaceful "egos" that feel great empathy for
> the needs and aspirations of the human species.
Sounds rather optimistic, that creating great empathy
candice schuster wrote:
I think you are very right...why build something that in turn could lead
to our destruction, not that we aren't on the downward spiral anyhow.
We need to perhaps ponder on the thought...why in the first place? We
should be gaining super intelligence on an individual level
This is nonsense: the result of giving way to science fiction fantasies
instead of thinking through the ACTUAL course of events. If the first
one is benign, the scenario below will be impossible, and if the first
one is not benign, the scenario below will be incredibly unlikely.
Over and over
On 10/22/07, Charles D Hixson <[EMAIL PROTECTED]> wrote:
> Aleksei Riikonen wrote:
>> I think most of us would prefer potentially superhuman
>> systems to not have goals/etc of their own.
>
> To me this sounds like wishing for a square circle. What we really want
> is that the goals, etc. of the A
I think you are very right...why build something that in turn could lead to our
destruction, not that we aren't on the downward spiral anyhow. We need to
perhaps ponder on the thought...why in the first place? We should be gaining
super intelligence on an individual level, this is not hard t
...but the "singularity" advanced by Kurzweil includes the integration
of human brains with digital computation...or computers
(http://www.ece.ubc.ca/~garyb/BCI.htm , http://wtec.org/bci/). Since
war is the pampered offspring of the technosphere...it is highly likely
that we can expect to see rela
albert medina wrote:
Dear Sirs,
I have a question to ask and I am not sure that I am sending it to the
right email address. Please correct me if I have made a mistake. From
the outset, please forgive my ignorance of this fascinating topic.
All sentient creatures have a sense of self, about which all else revolves.
Aleksei Riikonen wrote:
On 10/22/07, albert medina <[EMAIL PROTECTED]> wrote:
My question is: AGI, as I perceive your explanation of it, is when a
computer gains/develops an ego and begins to consciously plot its own
existence and make its own decisions.
That would be one form of AGI, but it should also be possible
Dear Aleksei,
I am in the beginning stages of my research into "your world" and I thank you
for your reply.
Are we still restricted to binary code when we speak of computers?
Sentient entities, especially humans, are subject to lots of analog
functions...take for example t
On 10/22/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
> Evolution favors animals with good learning algorithms. In humans we
> associate these properties with consciousness and free will. These beliefs
> are instinctive. You cannot reason logically about them. In particular, you
> cannot ask if
--- albert medina <[EMAIL PROTECTED]> wrote:
> All sentient creatures have a sense of self, about which all else
> revolves. Call it "egocentric singularity" or "selfhood" or "identity".
> The most evolved "ego" that we can perceive is in the human species. As far
> as I know, we are the only
On 10/22/07, albert medina <[EMAIL PROTECTED]> wrote:
> My question is: AGI, as I perceive your explanation of it, is when a
> computer gains/develops an ego and begins to consciously plot its own
> existence and make its own decisions.
That would be one form of AGI, but it should also be possible