On Wed, Oct 10, 2007 at 01:22:26PM -0400, Richard Loosemore wrote:
>
> Am I the only one, or does anyone else agree that politics/political
> theorising is not appropriate on the AGI list?
Yes, and I'm sorry I triggered the thread.
> I particularly object to libertarianism being shoved down our
JW
-----Original Message-----
From: Bob Mottram <[EMAIL PROTECTED]>
Sent: Oct 11, 2007 11:12 AM
To: agi@v2.listbox.com
Subject: Re: [META] Re: Economic libertarianism [was Re: The first-to-market effect [WAS Re: [agi] Religion-free technical content]
>On 10/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
>> Am I the only one, or does anyone else agree that politics/political
>> theorising is not app
On 10/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Am I the only one, or does anyone else agree that politics/political
> theorising is not appropriate on the AGI list?
Agreed. There are many other forums where political ideology can be debated.
-
This list is sponsored by AGIRI:
Am I the only one, or does anyone else agree that politics/political
theorising is not appropriate on the AGI list?
I particularly object to libertarianism being shoved down our throats,
not so much because I disagree with it, but because so much of the
singularity / extropian / futurist di
On Oct 10, 2007, at 2:26 AM, Robert Wensman wrote:
Yes, of course, the Really Big Fish that is democracy.
No, you got this quite wrong. The Really Big Fish is the institution
responsible for governance (usually the "government"); "democracy" is
merely a fuzzy category of rule set used in gov
BillK> On 10/6/07, a wrote:
>> I am skeptical that economies follow the self-organized criticality
>> behavior. There aren't any examples. Some would cite the Great
>> Depression, but it was caused by the malinvestment created by
>> Central Banks. e.g. The Federal Reserve System. See the Austrian
>
> The only solution to this problem I ever see suggested is to
> intentionally create a Really Big Fish called the government that can
> effortlessly eat every fish in the pond but promises not to -- to
> prevent the creation of Really Big Fish. That is quite the Faustian
> bargain to protec
J. Andrew Rogers wrote:
Generally though, the point that you fail to see is that an AGI can
just as easily subvert *any* power structure, whether the environment
is a libertarian free market or an autocratic communist state. The
problem has nothing to do with the governance of the economy
On Oct 9, 2007, at 4:27 AM, Robert Wensman wrote:
This is of course just an illustration and by no means a proof that
the same thing would occur in a laissez-faire/libertarianism
economy. Libertarians commonly put blame for monopolies on
government involvement, and I guess some would object
(off topic, but there is something relevant for AGI here)
My fears about economic libertarianism could be illustrated with a "fish
pond analogy". If there is a small pond with a large number of small fish of
some predatory species, after an amount of time they will cannibalize and
eat each other unt
On Sat, Oct 06, 2007 at 10:05:28AM -0400, a wrote:
> I am skeptical that economies follow the self-organized criticality
> behavior.
Oh. Well, I thought this was a basic principle, commonly cited in
microeconomics textbooks: when there's a demand, producers rush
to fill the demand. When there's
Bob Mottram wrote:
Economic libertarianism would be nice if it were to occur. However,
in practice companies and governments put in place all sorts of
anti-competitive structures to lock people into certain modes of
economic activity. I think economic activity in general is heavily
influenced by cognitive biases of
a wrote:
Linas Vepstas wrote:
...
The issue is that there's no safety net protecting against avalanches
of unbounded size. The other issue is that it's not grains of sand, it's
people. My bank account and my brains can insulate me from small
shocks.
I'd like to have protection against the bigg
Linas Vepstas wrote:
My objection to economic libertarianism is its lack of discussion of
"self-organized criticality". A common example of self-organized
criticality is a sand-pile at the critical point. Adding one grain
of sand can trigger an avalanche, which can be small, or maybe
(unbounde
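The sand-pile example Linas describes can be made concrete. Below is a minimal sketch of the standard Bak-Tang-Wiesenfeld cellular automaton (not code from this thread): each grid site accumulates grains, and a site holding 4 or more topples, sending one grain to each neighbor; grains that fall off the edge are lost. The grid size, toppling threshold, and drop count are illustrative choices.

```python
# Bak-Tang-Wiesenfeld sand-pile: a toy model of self-organized
# criticality. One added grain usually does nothing, but at the
# critical state it can trigger an avalanche of any size.
import random

def drop_and_relax(grid, n):
    """Drop one grain at a random site, then topple until stable.
    Returns the avalanche size (number of topplings)."""
    i, j = random.randrange(n), random.randrange(n)
    grid[i][j] += 1
    avalanche = 0
    unstable = [(i, j)] if grid[i][j] >= 4 else []
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:      # may have been queued twice
            continue
        grid[x][y] -= 4
        avalanche += 1
        if grid[x][y] >= 4:     # still over threshold: topple again
            unstable.append((x, y))
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < n and 0 <= ny < n:  # edge grains fall off
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return avalanche

random.seed(1)
n = 20
grid = [[0] * n for _ in range(n)]
sizes = [drop_and_relax(grid, n) for _ in range(20000)]
# After a transient, avalanche sizes span many scales: most drops
# cause no toppling at all, while a few cascade across the grid.
print(max(sizes))
```

This is the point of the criticality objection: there is no characteristic avalanche size, so nothing in the dynamics bounds how large the next collapse can be.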
OK, this is very off-topic. Sorry.
On Fri, Oct 05, 2007 at 06:36:34PM -0400, a wrote:
> Linas Vepstas wrote:
> >For the most part, modern western culture espouses and hews to
> >physical non-violence. However, modern right-leaning "pure" capitalism
> >advocates not only social Darwinism, but also
Linas Vepstas wrote:
On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote:
As to exactly how, I don't know, but since the AGI is, by assumption,
peaceful, friendly and non-violent, it will do it in a peaceful,
friendly and non-violent manner.
I like to think of myself as
On Thu, Oct 04, 2007 at 07:49:20AM -0400, Richard Loosemore wrote:
>
> As to exactly how, I don't know, but since the AGI is, by assumption,
> peaceful, friendly and non-violent, it will do it in a peaceful,
> friendly and non-violent manner.
I like to think of myself as "peaceful and non-viole
On 10/4/07, J Storrs Hall, PhD <[EMAIL PROTECTED]> wrote:
> We can't build a system that learns as fast as a 1-year-old just now. Which is
> our most likely next step: (a) A system that does learn like a 1-year-old, or
> (b) a system that can learn 1000 times as fast as an adult?
>
> Following Moor
On Thursday 04 October 2007 11:50:21 am, Bob Mottram wrote:
> To me this seems like elevating the status of nanotech to magic.
> Even given RSI and the ability of the AGI to manufacture new computing
> resources it doesn't seem clear to me how this would enable it to
> prevent other AGIs from also
On 10/4/07, Bob Mottram <[EMAIL PROTECTED]> wrote:
> To me this seems like elevating the status of nanotech to magic.
> Even given RSI and the ability of the AGI to manufacture new computing
> resources it doesn't seem clear to me how this would enable it to
> prevent other AGIs from also reaching
To me this seems like elevating the status of nanotech to magic.
Even given RSI and the ability of the AGI to manufacture new computing
resources it doesn't seem clear to me how this would enable it to
prevent other AGIs from also reaching RSI capability. Presumably
"lesser techniques" means blac
Bob Mottram wrote:
On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
As to exactly how, I don't know, but since the AGI is, by assumption,
peaceful, friendly and non-violent, it will do it in a peaceful,
friendly and non-violent manner.
This seems very vague. I would suggest that if
On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> As to exactly how, I don't know, but since the AGI is, by assumption,
> peaceful, friendly and non-violent, it will do it in a peaceful,
> friendly and non-violent manner.
This seems very vague. I would suggest that if there is no clea
Bob Mottram wrote:
On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
Linas Vepstas wrote:
Um, why, exactly, are you assuming that the first one will be friendly?
The desire for self-preservation, by e.g. rooting out and exterminating
all (potentially unfriendly) competing AGI, would n
On 04/10/2007, Richard Loosemore <[EMAIL PROTECTED]> wrote:
> Linas Vepstas wrote:
> > Um, why, exactly, are you assuming that the first one will be friendly?
> > The desire for self-preservation, by e.g. rooting out and exterminating
> > all (potentially unfriendly) competing AGI, would not be wha
Linas Vepstas wrote:
On Tue, Oct 02, 2007 at 01:20:54PM -0400, Richard Loosemore wrote:
When the first AGI is built, its first actions will be to make sure that
nobody is trying to build a dangerous, unfriendly AGI.
Yes, OK, granted, self-preservation is a reasonable character trait.
After