While SIAI fills that niche somewhat, it concentrates on the
intelligence-explosion scenario. Is there a large enough group of
researchers/thinkers with a shared vision of the future of AI coherent
enough to form an organisation? This organisation would discuss,
explore and disseminate what can be done to make the introduction of
such AI as painless as possible.

The base beliefs shared by the group would be something like:

- The entities will not have goals/motivations inherent to their
form. That is, robots aren't likely to band together to fight humans,
or to try to take over the world for their own ends. Such goals would
have to be programmed into them, just as evolution has programmed
group loyalty and selfishness into humans.
- The entities will not be capable of fully wrapped-around recursive
self-improvement. They will improve in fits and starts within a wider
economy/ecology, like most developments in the world.*
- The goals and motivations of the entities we are likely to see in
the real world will be shaped over the long term by forces in that
world, e.g. evolutionary, economic and physical ones.

Basically, this would be an organisation trying to prepare for a world
where AIs aren't "sufficiently advanced technology" or magic genies,
but are still dangerous and a potentially destabilising change to the
world. Could a coherent message be articulated by the subset of people
who agree with these points? Or are we all still too fractured?

  Will Pearson

* I will attempt to give an inside view of why I hold this position at
a later date.


-------------------------------------------
agi
Archives: http://www.listbox.com/member/archive/303/=now
RSS Feed: http://www.listbox.com/member/archive/rss/303/