This isn't totally relevant, but have you heard of Korea's drafting of a
robot ethics charter?

On Tue, May 20, 2008 at 10:57 AM, Phillip Huggan <[EMAIL PROTECTED]>
wrote:

> Hello.  Here are some projections of society's future and where AGI/AI fits
> into the mix.  The earliest benchmark where I see AGI as being potentially
> good is #13.  Fighting UFAI by developing antivirus software is already
> useful now.  Fighting UFAI by monitoring supercomputer applications may
> already make sense now, and by #11 for sure:
>
> 1) Near-term primary goal to maximize productive person/yrs.
> 2) Rearrange capital flows to prevent productive person/yrs from being lost
> to obvious causes (i.e. via the UN Millennium Development Goals and invoking
> sin-taxes), with an effort to offer pride-saving win-win situations.
> Re-educate the affected workforce. Determine the optimum resource allocation
> towards civilization redundancy efforts based upon the revised (higher)
> economic growth projections that negative-externality accounting yields.
> Isolate states exporting anarchy or not attempting to participate in the
> globalized workforce. Begin measuring the purchasing-power-parity-adjusted
> annual cost of providing a Guaranteed Annual Income (GAI) in various
> nations (a toy cost sketch follows after this list).
> 3) Brainstorming of industries required to maximize longevity, and to
> handle technologies and wield social systems essential for safely
> transitioning first to a medical/health society, then to a leisure society.
> 4) Begin reworking bilateral and global trade agreements to reward actors
> who subsequently trend towards #3. Begin building a multilateral GAI fund to
> reward actors who initiate #5.
> 5) Mass education of society towards health/medical and other #3 sectors.
> Begin dispensing GAI to poor who are trending towards education/employment
> relevant to #3 sectors.
> 6) Conversion of non-essential workforces to health/medical R+D and other
> #3 sectors. Hopefully the education GAI load will fall and the fund can
> focus upon growing to encompass a larger GAI population base in anticipation
> of the ensuing leisure society.
> 7) Climax of medical/health R+D workforce.
> 8) Mature medical ethics needed. Mature medical AI safeguards needed.
> Education in all medical AI-relevant sectors. Begin measuring AI medical R+D
> advances vs. human researcher medical R+D advances.
> 9) Point of inflection where it becomes vastly more efficient to develop AI
> medical R+D systems rather than educating researchers (or not, if something
> like real-time human trials bottlenecks software R+D). The subsequent
> surplus medical/health labour force necessitates a global GAI by now at the
> latest. AI medical R+D systems become critical societal infrastructure, and
> near-term human progress will be limited by the efficacy and safety (e.g.
> against computer viruses) of these programs. A toy sketch of detecting this
> crossover from the #8 measurements follows after this list.
> 10) Leisure society begins. Diminishing returns from additional resource
> allocations towards AI medical R+D. Maximum rate of annual longevity gains.
> 11) Intensive study of mental health problems in preparation for #13.
> Brainstorming of the surveillance infrastructures needed to wield
> engineering technologies as powerful as Drexler-ian nanotechnology. Living
> spaces will resemble the nested security protocols of a modern microbiology
> lab. Potentially powerful occupations and consumer goods will require
> increased surveillance. Brainstorming of metrics to determine the most
> responsible handlers of a #13 technology (I suggest something like the CDI
> Index as a ranking; a toy composite-ranking sketch follows after this
> list).
> 12) Design blueprints for surveillance tools like quantum-key encryption
> and various sensors must be ready either before powerful engineering
> technologies are developed, or be among the first products created using
> the powerful technology. To maintain security for some applications it may
> be necessary to engineer entire cities from scratch. Sensors should be
> designed to maximize human privacy rights. There is a heightened risk of
> WWIII from this period on until just after the technology is developed. (A
> toy sketch of quantum-key distribution follows after this list.)
> 13) A powerful engineering technology is developed (or not). The risk of
> global tyranny is highest since 1940. Civilization-wide surveillance is
> achieved to ensure no WMDs are unleashed and no dangerous technological
> experiments are run. A technology like the ability to cheaply manufacture
> precision diamond products could unleash many sci-fi-ish applications,
> including interstellar space travel and the hardware required for
> recursively improving AI software (AGI). This technology would signal the
> end of capitalism and patent regimes. A protocol for encountering
> technologically inferior ETs might be required. Safe AGI/AI software
> programs would be needed before the desired humane applications are
> deployed. Mature sciences of psychology and psychiatry are needed to assist
> the benevolent administration of this technology. Basic Human Rights, goods
> and services should be administered to all wherever tyrannical regimes
> don't possess military parity.
> 14) Weaponry, surveillance, communications and spacecraft developed to
> expand the outer perimeter of surveillance beyond the Solar System. Twin
> objectives: to ensure no WMDs such as rogue AGI/AI programs, super-high-
> energy physics experiments, kinetic-impactor meteors, etc., are created;
> and to keep open the possibility of harvesting the resources required to
> harness the most powerful energy resources in the universe. The latter
> objective may require the development of physics experiments and/or AGI
> that conflict with the former objective. The latter objective will require
> a GUT/TOE. Developing a GUT may require the construction of a physics
> experimental apparatus that should be safe to use. Need a protocol for
> dealing with malevolent ETs at approximate technological parity with
> humanity. Need a protocol to accelerate the development of dangerous
> technologies like AGI and Time Machines if the risks from these are deemed
> less than the threat from aliens; there are many game-theoretic encounter
> scenarios to consider. This protocol may be analogous to how one deals with
> malevolent/inept conscious or software actors that escape the WMD
> surveillance perimeter.
> 16) If mapping the energy stores of the universe is itself safe/sustainable
> or if using the technologies needed to do so is safe, begin expanding a
> universe energy survey perimeter, treating those who attempt to poison
> future energy resources as pirates.
> 17) If actually harnessing massive energy resources, or using the
> technologies required to do so, is dangerous, a morality will need to be
> defined that determines the tradeoff between person/yrs lost and potential
> energy resources lost. The potential to unleash Hell Worlds, Heavens and
> permanent "in-betweens" is the prime consideration. Assuming harnessing
> massive energy resources is safe (doesn't end the local universe) and holds
> a negligible risk of increasing the odds of a Hell World or "in-betweens",
> I suggest at this point invoking a Utilitarian system like Mark Walker's
> "Angelic Hierarchy", whereby from this point on conscious actors begin
> amassing "survival credits" (a toy credit-ledger sketch follows after this
> list). As safe energy resources dry up towards the latter part of a closed
> universe (or when atoms decay), trillions of years from now, actors who
> don't act to maximize this dwindling resource base will be killed to free
> up the resources required to later mine potentially uncertain/dangerous
> massive energy resources. The same applies if the risk of unleashing Hell
> Worlds or destroying reality is deemed too high to pursue mining the energy
> resource: a finite resource base suggests that those hundred-trillion-year-
> old actors with high survival-credit totals live closest to the end of the
> universe, as long as enforcing such a morality is itself not energy
> intensive. A Tipler-ian Time Machine may be the lever here; using it or not
> might determine the net remaining harvestable energy resources and the
> quality-of-living hazard level of taking different courses of action.
> 18a) An indefinite Hell World.
> 18b) An indefinite Heaven World.
> 18c) End of the universe for conscious actors, possibly earlier than
> necessary because of a decision not to harness a dangerous energy source.
> If enforcing a "survival credit" administrative regime is energy intensive,
> the Moral system will be abandoned at some point and society might
> degenerate into cannibalism.


