On Tuesday 20 May 2008, Phillip Huggan wrote:
> Hello.  Have some projections of society's future and where AGI/AI
> fits in the mix.  The earliest benchmark where I see AGI as being
> potentially good is #13.  Fighting UFAI by developing antivirus
> software is already useful now.  Fighting UFAI by monitoring
> supercomputer applications may already make sense now, and by #11 for
> sure:
>
>   1) Near-term primary goal to maximize productive person/yrs.

http://heybryan.org/2008-05-13_hyperfocusing.html
http://heybryan.org/recursion.html
http://heybryan.org/mediawiki/index.php/Sustained_attention
etc. We're working on it.
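
On the "monitoring supercomputer applications" point: the dumb version of
that is tractable today, at least as a first pass. A toy sketch in Python;
the job records, fields, and thresholds here are all hypothetical:

  # Flag compute jobs whose resource profile looks anomalous.
  # All records and limits below are made up for illustration.
  jobs = [
      {"name": "climate-sim", "cpu_hours": 9000, "self_modifying": False},
      {"name": "opt-search", "cpu_hours": 41000, "self_modifying": True},
  ]
  CPU_HOUR_LIMIT = 20000  # hypothetical review threshold

  for job in jobs:
      # Anything unusually hungry or self-modifying gets a human look.
      if job["cpu_hours"] > CPU_HOUR_LIMIT or job["self_modifying"]:
          print("flag for review:", job["name"])

The real problem, of course, is deciding what counts as anomalous.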

>   2) Rearrange capital flows to prevent productive person/yrs from
> being lost to obvious causes (i.e. UN Millennium development goals and
> invoking sin-taxes), with effort to offer pride-savings win-win

Huh? Capital flows? I hope you only mean resources, not actually money.

> situations. Re-educate said workforce. Determine optimum resource

The internet can do this re-education.

> allocation towards civilization redundancy efforts based upon

You mean, von Neumann probes, space habitats?
http://openvirgle.net/
http://groups.google.com/group/vcnprize/

> negative externality accounting revised (higher) economic growth
> projections. Isolate states exporting anarchy or not attempting to
> participate in the globalized workforce. Begin measuring purchasing

Uh, what does economics have to do with the future of mankind? And 
what's with this stance against 'anarchy'? Ignoring the fact that people 
can do what they want, when they want, is a bad contingency plan.

> parity adjusted annual cost to provide a Guaranteed Annual Income
> (GAI) in various nations. 3) Brainstorming of industries required to
> maximize longevity, and to handle technologies and wield social
> systems essential for safely transitioning first to a medical/health,
> then to a leisure society. 4) Begin reworking bilateral and global

Why do the industries need to transition? Can't we (people) pick up the 
slack? Personal fabrication, etc.
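
As for the "measuring" in #2: the GAI cost figure is simple arithmetic 
once you pick the inputs; the politics is the hard part. A toy sketch, 
where every number (populations, PPP factors, the GAI level) is made up:

  # Purchasing-parity-adjusted annual cost of a GAI, per nation.
  # All figures are hypothetical placeholders, not real data.
  nations = {
      # name: (people needing the GAI, PPP price-level factor vs. USD)
      "A": (10000000, 0.4),
      "B": (2500000, 1.1),
  }
  GAI_USD = 5000  # hypothetical guaranteed annual income, in USD

  for name, (people, ppp) in nations.items():
      cost = people * GAI_USD * ppp  # cheaper where local prices are lower
      print(name, "costs about $%.1fB/yr" % (cost / 1e9))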

> trade agreements to reward actors who subsequently trend towards #3.
> Begin building a multilateral GAI fund to reward actors who initiate
> #5. 5) Mass education of society towards health/medical and other #3
> sectors. Begin dispensing GAI to poor who are trending towards
> education/employment relevant to #3 sectors. 6) Conversion of
> non-essential workforces to health/medical R+D and other #3 sectors.
> Hopefully the education GAI load will fall and the fund can focus
> upon growing to encompass a larger GAI population base in
> anticipation of the ensuing leisure society. 7) Climax of
> medical/health R+D workforce.

"non-essential workforces" ? Workforces don't matter too much in the 
case where manufacturing is cheap, automated, sustaining. So maybe not 
even leisure society, but just post-scarcity society.
http://heybryan.org/exp.html

>   8) Mature medical ethics needed. Mature medical AI safeguards
> needed. Education in all medical AI-relevant sectors. Begin measuring
> AI medical R+D advances vs. human researcher medical R+D advances. 9)

Ethics? Suppose FAI doesn't work. How do these ethics work out for you 
then?

> just after the technology is developed. 13) A powerful engineering
> technology is developed (or not). The risk of global tyranny is at
> its highest since 1940. Civilization-wide surveillance achieved to ensure

Nah, the risk of global tyranny isn't high: if the tech is out there, it 
can be used in other ways too. Countermeasures and so on; see:
http://heybryan.org/transhumanism_def.html

> no WMDs unleashed, and no dangerous technological experiments. A
> technology like the ability to cheaply manufacture precision diamond
> products could unleash many sci-fi-ish applications, including
> interstellar space travel and the hardware required for recursively
> improving AI software (AGI). This technology would signal the end of
> capitalism and patent regimes. A protocol for encountering

Arguably, the signal for the end of patent regimes has already been 
present. Also, the hardware for recursion can come much sooner:
http://heybryan.org/exp.html

> benevolent administration of this technology. Basic Human Rights,
> goods and services should be administered to all where tyrannical
> regimes don't possess military parity. 14) Weaponry, surveillance,

Huh? Tyranny? Just let the people have self-replicating machines so that 
they can help themselves. There's no way that top-down policy can work 
like this, unless you want a 'Global AI' thing going on, but those 
systems don't sound very interesting since they can just as easily be 
ignored and escaped, e.g. via space travel, space habitat forking, etc.

> communications and spacecraft developed to expand the outer perimeter
> of surveillance beyond the Solar System. Twin objectives: to ensure
> no WMDs such as rogue AGI/AI programs, super high energy physics
> experiments, kinetic impactor meteors, etc., are created; and to keep
> open the possibility of harvesting resources required to harness the
> most powerful energy resources in the universe. The latter objective

Eh. I don't like your lockdown scenario. Let's instead facilitate people 
traveling to other star systems and let *them* do their own lockdown 
procedures. In this scenario earth is a seed for humanity, and we need 
to recognize our stance as facilitators, not as wardens locking down the 
entire system against experimentation and so on.

> software actors that escape the WMD surveillance perimeter. 16) If
> mapping the energy stores of the universe is itself safe/sustainable
> or if using the technologies needed to do so is safe, begin expanding
> a universe energy survey perimeter, treating those who attempt to
> poison future energy resources as pirates. 17) If actually harnessing

The frontiersmen always look like pirates, don't they?

> massive energy resources or using the technologies required to do so
> is dangerous, a morality will need to be defined that determines a
> tradeoff of person/yrs lost vs. potential energy resources lost. The
> potential to unleash Hell Worlds, Heavens and permanent "in-betweens"
> is of prime consideration. Assuming harnessing massive energy
> resources is safe (doesn't end local universe) and holds a negligible
> risk of increasing odds of a Hell World or "in-betweens", I suggest
> at this point invoking a Utilitarian system like Mark Walker's
> "Angelic Heirarchy", whereby from this point on, conscious actors
> begin amassing "survival credits". As safe energy resources dry up

Uh? Survival credits? Why wouldn't they just transfer out to the 
perimeters of development, the bleeding edge as it's called? Who says 
they must be forced to use these credit systems?
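
Even a toy version of your credit scheme makes the bookkeeping problem 
obvious. A sketch, with hypothetical actors and numbers:

  # "Survival credits": rank actors by credit total and cut from the
  # bottom as the energy budget shrinks. Everything here is made up;
  # the point is that *someone* has to run this ledger, centrally,
  # for trillions of years.
  actors = {"alice": 900, "bob": 120, "carol": 450}  # credit totals
  UPKEEP = 100          # energy units one actor consumes per epoch
  energy_budget = 250   # the dwindling resource base, this epoch

  ranked = sorted(actors, key=actors.get, reverse=True)
  survivors = ranked[: energy_budget // UPKEEP]
  print(survivors)  # ['alice', 'carol'] -- bob is "freed up"

And that's before counting the enforcement cost, which you admit might 
sink the whole morality anyway.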

> towards the latter part of a closed universe (or when atoms decay),
> trillions of years from now, actors who don't act to maximize this
> dwindling resource base will be killed to free up resources required

Who will do this killing?

> to later mine potentially uncertain/dangerous massive energy
> resources. Same thing if the risk of unleashing Hell Worlds or
> destroying reality is deemed too high to pursue mining the energy
> resource: a finite resource base suggests those hundred-trillion-yr-old
> actors with high survival-credit totals live closer to the end
> of the universe, as long as enforcing such a morality is itself not
> energy intensive. A Tipler-ian Time Machine may be the lever here;
> using it or not might determine net remaining harvestable energy
> resources and the quality-of-living hazard level in taking different
> courses of action. 18a) An indefinite Hell World.
>   18b) An indefinite Heaven World.
>   18c) End of the universe for conscious actors, possibly earlier
> than necessary because of a decision that fails to harness a
> dangerous energy source. If enforcing a "survival credit"
> administrative regime is energy intensive, the moral system will be
> abandoned at some point and society might degenerate into
> cannibalism.

That's one scenario. But I don't get why it sounds so centralized. The 
universe is mostly highly parallel, more than anything else. There are 
a trillion stars burning intensely *this moment*, and not because we 
have some centralized way of dictating morality or deciding what is to 
be a 'heaven simulation' and what a 'hell simulation'.

- Bryan
________________________________________
http://heybryan.org/

