Joshua Fox wrote:
It is not at all sensible. Today we have no real idea how to build a working AGI.

Right. The Friendly AI work is aimed at a future system. Fermi and
company planned against meltdown _before_ they let their reactor go
critical.

...spontaneously ...
People are working on an AGI that can do things spontaneously.  It
does not yet exist.

...concept extraction and learning ... algorithms and ... come to understand software and hardware in depth ... and develop a will to be better, greater than all others
If these are the best ways to achieve its goal, and if it is _truly_
intelligent, then of course that is what it would do. How long it
takes researchers to create such an AGI or whether they manage to help
it avoid the dangers I mention is another question.

By the way, the standard example of a seemingly harmless but potentially
deadly AGI goal is making paper-clips. I mentioned theorem proving for
variety, although the difference between goals that do and don't
affect the material world might be worth some thought.

Joshua,

Interesting: it looks like you are deliberately ignoring my comments on what you have been writing.

Well, you are welcome to live in your own little fantasy land, but that does not change the fact that you are repeatedly stating complete falsehoods about this topic.


Richard Loosemore.
