I have no view on whether or not Ben Goertzel should publish the source code
of Novamente, but here is how I see intelligence.

1) Intelligence is the behaviour of a system that interacts with its
environment. The system is usually a small part of the environment, but may
be co-extensive with it.

2) At present, AI research is limited by the ability to develop intelligent
software. Ben is trying to breach this limit.

3) Once this limit has been breached, by Ben or anyone else, the limiting
factor to the further development of intelligence will be the speed and
bandwidth with which a system can interact with its environment. Information
systems may interact very quickly, but mechanical systems always interact
slowly.

4) An AGI might develop to threaten human information systems far sooner
than it develops to threaten our mechanical systems - including our bodies.
During this interval we may socialise an AGI while its threat is only
virtual, before it becomes a mechanical one. This is analogous to the
socialisation of children: small children pose no mechanical threat to their
parents and can be socialised before they become such a threat.

5) I think we have plenty of time to deal with the threats that AGI might
bring. Furthermore, if an AGI is to be socialised into productive
co-existence with us, we must allow such threats to materialise and then
deal with them. Analogously, parents have to allow teenagers to kick over
the traces if the teenagers are to become independent adults. I just hope it
doesn't hurt too much when an AGI kicks over the traces!

For my own part, I publish my theoretical position and software as soon as I
think it will be helpful to others - theory first, then software.

Take of this what you will,

James Anderson

P.S. I do not have any children - nor any AGIs for that matter.


